Update README.md
README.md CHANGED

@@ -37,7 +37,7 @@ software. Both of them are included in a single file, which can be
 downloaded and run as follows:
 
 ```
-wget https://huggingface.co/
+wget https://huggingface.co/Mozilla/gemma-2-2b-it-llamafile/resolve/main/gemma-2-2b-it.Q6_K.llamafile
 chmod +x gemma-2-2b-it.Q6_K.llamafile
 ./gemma-2-2b-it.Q6_K.llamafile
 ```
@@ -157,7 +157,7 @@ AMD64.
 
 This model works well with any quantization format. Q6\_K is the best
 choice overall here. We tested that, with [our 27b Gemma2
-llamafiles](https://huggingface.co/
+llamafiles](https://huggingface.co/Mozilla/gemma-2-27b-it-llamafile),
 that the llamafile implementation of Gemma2 is able to produce
 identical responses to the Gemma2 model that's hosted by Google on
 aistudio.google.com. Therefore we'd assume these 2b llamafiles are also
@@ -169,8 +169,8 @@ try using the BF16 weights, which have the original fidelity.
 There are higher quality versions of this model available as llamafiles,
 which require more memory.
 
-- <https://huggingface.co/
-- <https://huggingface.co/
+- <https://huggingface.co/Mozilla/gemma-2-9b-it-llamafile>
+- <https://huggingface.co/Mozilla/gemma-2-27b-it-llamafile>
 
 The 9B and 27B models were released a month earlier than 2B, so they're
 packaged with a slightly older version of the llamafile software.
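The diff only touches download URLs, but the README section in the first hunk is a download-and-run how-to, so a small companion sketch for talking to the model once it is running may help. It assumes llamafile's documented defaults (a local server on port 8080 with an OpenAI-compatible `/v1/chat/completions` endpoint) and the `--server` / `--nobrowser` flags; flag names and default behavior vary between llamafile releases, so check `--help` for the build you downloaded.

```
# Start the llamafile headless; --server and --nobrowser are assumptions
# based on the upstream llamafile docs. Confirm with:
#   ./gemma-2-2b-it.Q6_K.llamafile --help
./gemma-2-2b-it.Q6_K.llamafile --server --nobrowser &

# Query the local OpenAI-compatible endpoint (port 8080 is the documented default).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gemma-2-2b-it",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```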
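The last hunk links the 9B and 27B repositories for the higher-quality builds. Fetching one of them should follow the same pattern as the 2B snippet; the filename below is an assumed analogue of the 2B one, so verify it against the repository's file listing before downloading.

```
# Same pattern as the 2B instructions; the .llamafile name is an assumption,
# check the files published in Mozilla/gemma-2-9b-it-llamafile first.
wget https://huggingface.co/Mozilla/gemma-2-9b-it-llamafile/resolve/main/gemma-2-9b-it.Q6_K.llamafile
chmod +x gemma-2-9b-it.Q6_K.llamafile
./gemma-2-9b-it.Q6_K.llamafile
```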