This model was quantized with the AutoGPTQ library and a dataset containing English and Russian Wikipedia articles. It has lower perplexity on Russian data than other GPTQ models.

| Model | bits | Perplexity (Russian wiki) |
| --- | --- | --- |
| [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) | 16bit | 6.2152 |
| [Granther/Gemma-2-9B-Instruct-4Bit-GPTQ](https://huggingface.co/Granther/Gemma-2-9B-Instruct-4Bit-GPTQ) | 4bit | 6.4966 |
| this model | 4bit | 6.3593 |
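For reference, the perplexity values in the table are the exponential of the mean per-token negative log-likelihood over the evaluation text. A minimal sketch of that formula (the function name and inputs are illustrative, not part of this model card's evaluation code):

```python
import math

def perplexity(token_nlls):
    """Perplexity from per-token negative log-likelihoods (natural log).

    token_nlls: list of -log p(token | context) values, one per token
    in the evaluation corpus.
    """
    return math.exp(sum(token_nlls) / len(token_nlls))

# Sanity check: a model that is uniform over a 50-token vocabulary
# assigns each token NLL = ln(50), so its perplexity is exactly 50.
print(perplexity([math.log(50)] * 10))
```

A lower value means the model assigns higher probability to the held-out text, which is why the 4-bit quantizations above are compared against the 16-bit baseline's 6.2152.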