---
license: gemma
base_model:
  - ifable/gemma-2-Ifable-9B
pipeline_tag: text-generation
---

# Llama.cpp imatrix quants of gemma-2-Ifable-9B

Using llama.cpp release b3804 for quantization.

Original model: https://huggingface.co/ifable/gemma-2-Ifable-9B

All quants were made using the imatrix option (except BF16, which is the unquantized original model). The imatrix was generated with the dataset from here, using the BF16 GGUF with a context size of 8192 tokens and 13 chunks. (The default context size is 512, but using a context size equal to or greater than the model's should improve quality.)
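
As a rough sketch, the imatrix generation step above corresponds to a command like the following, using the `llama-imatrix` tool from llama.cpp. The model and dataset file names are placeholders, not the actual files used here:

```bash
# Generate an importance matrix from the BF16 GGUF.
# -c 8192 sets the context size; --chunks 13 limits how many chunks
# of the calibration dataset are processed. File names are illustrative.
./llama-imatrix \
  -m gemma-2-Ifable-9B-BF16.gguf \
  -f calibration_dataset.txt \
  -o imatrix.dat \
  -c 8192 \
  --chunks 13
```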

How to make your own quants:

https://github.com/ggerganov/llama.cpp/tree/master/examples/imatrix

https://github.com/ggerganov/llama.cpp/tree/master/examples/quantize
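
For illustration, a quantization run that uses the resulting imatrix looks roughly like this. The file names and the Q4_K_M target type are placeholders; see the quantize example linked above for the full set of options:

```bash
# Quantize the BF16 GGUF, guided by the importance matrix.
# Q4_K_M is one example target type; file names are illustrative.
./llama-quantize \
  --imatrix imatrix.dat \
  gemma-2-Ifable-9B-BF16.gguf \
  gemma-2-Ifable-9B-Q4_K_M.gguf \
  Q4_K_M
```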