Hampetiudo committed
Commit dd7f181
Parent(s): 8750c44
Update README.md

README.md CHANGED
@@ -11,7 +11,7 @@ Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a
 
 Original model: https://huggingface.co/ifable/gemma-2-Ifable-9B
 
-All quants were made using the imatrix option (except BF16, that's the original
+All quants were made using the imatrix option (except BF16, that's the original precision). The imatrix was generated with the dataset from [here](https://gist.github.com/tristandruyen/9e207a95c7d75ddf37525d353e00659c), using the BF16 GGUF with a context size of 8192 tokens (default is 512 but higher/same as model context size should improve quality) and 13 chunks.
 
 How to make your own quants:
 
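The added README line describes how the importance matrix was generated: from the calibration dataset in the linked gist, against the BF16 GGUF, with an 8192-token context and 13 chunks. A rough sketch of the equivalent llama.cpp invocation is below; the file names and paths are hypothetical placeholders, and only the flags shown are standard llama-imatrix/llama-quantize options — this is an illustration, not the author's exact command line.

```shell
# Generate the importance matrix from the BF16 GGUF (paths are placeholders).
# -f: calibration text file (e.g. the dataset from the gist saved locally)
# -c: context size in tokens (8192 here; llama-imatrix defaults to 512)
# --chunks: number of chunks of the calibration data to process
./llama-imatrix \
    -m gemma-2-Ifable-9B-BF16.gguf \
    -f calibration_data.txt \
    -c 8192 \
    --chunks 13 \
    -o imatrix.dat

# Then quantize using that imatrix (Q4_K_M shown as an example target type).
./llama-quantize --imatrix imatrix.dat \
    gemma-2-Ifable-9B-BF16.gguf gemma-2-Ifable-9B-Q4_K_M.gguf Q4_K_M
```

Raising `-c` to match the model's context size, as the README notes, tends to produce a more representative imatrix than the 512-token default.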