---
base_model: google/gemma-2-9b
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
  - conversational
quantized_by: fedric95
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

# Llamacpp Quantizations of gemma-2-9b

Using llama.cpp release b3583 for quantization.
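
For reference, this is roughly how these files can be produced with llama.cpp. A minimal sketch, assuming the tool names shipped around release b3583 (`convert_hf_to_gguf.py`, `llama-quantize`) and access to the gated original checkpoint:

```bash
# Build llama.cpp at the release used for these quantizations.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b3583
make llama-quantize

# Convert the original HF checkpoint to a full-precision GGUF.
python convert_hf_to_gguf.py /path/to/google/gemma-2-9b \
    --outfile gemma-2-9b.FP32.gguf --outtype f32

# Quantize to one of the types listed below, e.g. Q4_K_M.
./llama-quantize gemma-2-9b.FP32.gguf gemma-2-9b-Q4_K_M.gguf Q4_K_M
```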

Original model: https://huggingface.co/google/gemma-2-9b

Download a single file (not the whole branch) from the table below:

| Filename | Quant type | File Size | Perplexity (wikitext-2-raw-v1.test) |
| -------- | ---------- | --------- | ------------------------------------ |
| gemma-2-9b.FP32.gguf | FP32 | 37.00GB | 6.9209 +/- 0.04660 |
| gemma-2-9b-Q8_0.gguf | Q8_0 | 9.83GB | 6.9222 +/- 0.04660 |
| gemma-2-9b-Q6_K.gguf | Q6_K | 7.59GB | 6.9353 +/- 0.04675 |
| gemma-2-9b-Q5_K_M.gguf | Q5_K_M | 6.65GB | 6.9571 +/- 0.04687 |
| gemma-2-9b-Q5_K_S.gguf | Q5_K_S | 6.48GB | 6.9623 +/- 0.04690 |
| gemma-2-9b-Q4_K_M.gguf | Q4_K_M | 5.76GB | 7.0220 +/- 0.04737 |
| gemma-2-9b-Q4_K_S.gguf | Q4_K_S | 5.48GB | 7.0622 +/- 0.04777 |
| gemma-2-9b-Q3_K_L.gguf | Q3_K_L | 5.13GB | 7.2144 +/- 0.04910 |
| gemma-2-9b-Q3_K_M.gguf | Q3_K_M | 4.76GB | 7.2849 +/- 0.04970 |
| gemma-2-9b-Q3_K_S.gguf | Q3_K_S | 4.34GB | 7.6869 +/- 0.05373 |
| gemma-2-9b-Q2_K.gguf | Q2_K | 3.81GB | 8.7979 +/- 0.06191 |
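
The perplexity column can be reproduced with llama.cpp's `llama-perplexity` tool on the wikitext-2-raw-v1 test split. A sketch, assuming the helper script `scripts/get-wikitext-2.sh` from the llama.cpp repo and the extraction path it produces:

```bash
# Fetch wikitext-2-raw, then score a quantized model on the test split.
make llama-perplexity
./scripts/get-wikitext-2.sh
./llama-perplexity -m gemma-2-9b-Q4_K_M.gguf -f wikitext-2-raw/wiki.test.raw
```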

## Benchmark Results

Results have been computed using:

- hellaswag_val_full
- winogrande-debiased-eval
- mmlu-validation

| Benchmark | Quant type | Accuracy (%) |
| --------- | ---------- | ------------ |
| WinoGrande (0-shot) | Q8_0 | 74.4278 +/- 1.2261 |
| WinoGrande (0-shot) | Q4_K_M | 74.8224 +/- 1.2198 |
| WinoGrande (0-shot) | Q3_K_M | 74.1910 +/- 1.2298 |
| WinoGrande (0-shot) | Q3_K_S | 72.6125 +/- 1.2533 |
| WinoGrande (0-shot) | Q2_K | 71.4286 +/- 1.2697 |
| HellaSwag (0-shot) | Q8_0 | 78.39075881 |
| HellaSwag (0-shot) | Q4_K_M | 77.87293368 |
| HellaSwag (0-shot) | Q3_K_M | 76.64807807 |
| HellaSwag (0-shot) | Q3_K_S | 76.08046206 |
| HellaSwag (0-shot) | Q2_K | 73.07309301 |
| MMLU (0-shot) | Q8_0 | 42.5065 +/- 1.2569 |
| MMLU (0-shot) | Q4_K_M | 42.5065 +/- 1.2569 |
| MMLU (0-shot) | Q3_K_M | 41.3437 +/- 1.2520 |
| MMLU (0-shot) | Q3_K_S | 40.5685 +/- 1.2484 |
| MMLU (0-shot) | Q2_K | 38.1137 +/- 1.2348 |
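
`llama-perplexity` also has built-in modes for these benchmarks (`--hellaswag`, `--winogrande`, `--multiple-choice`); the discussion linked under Reproducibility documents the exact setup. A sketch, with the dataset file names taken from the list above and their extensions assumed:

```bash
# HellaSwag accuracy over the full validation set.
./llama-perplexity -m gemma-2-9b-Q8_0.gguf --hellaswag -f hellaswag_val_full.txt

# WinoGrande accuracy on the debiased eval split.
./llama-perplexity -m gemma-2-9b-Q8_0.gguf --winogrande -f winogrande-debiased-eval.csv

# MMLU validation, scored as a multiple-choice task.
./llama-perplexity -m gemma-2-9b-Q8_0.gguf --multiple-choice -f mmlu-validation.bin
```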

## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```bash
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```bash
huggingface-cli download fedric95/gemma-2-9b-GGUF --include "gemma-2-9b-Q4_K_M.gguf" --local-dir ./
```
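
Once downloaded, the file runs directly with llama.cpp. A minimal sketch using the `llama-cli` binary; the prompt and token count are arbitrary:

```bash
# One-shot generation with the quantized model.
./llama-cli -m ./gemma-2-9b-Q4_K_M.gguf -p "The three primary colors are" -n 64
```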

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```bash
huggingface-cli download fedric95/gemma-2-9b-GGUF --include "gemma-2-9b-Q8_0.gguf/*" --local-dir gemma-2-9b-Q8_0
```

You can either specify a new local-dir (`gemma-2-9b-Q8_0`) or download them all in place (`./`).
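
Note that none of the files in this repo exceed 50GB, so splitting only applies to larger models. llama.cpp can load the first shard of a split model directly; if you prefer a single file, the shards can be merged with the `llama-gguf-split` tool. A sketch, with illustrative shard names:

```bash
# Merge split GGUF shards back into a single file.
./llama-gguf-split --merge \
    gemma-2-9b-Q8_0/gemma-2-9b-Q8_0-00001-of-00002.gguf \
    gemma-2-9b-Q8_0-merged.gguf
```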

## Reproducibility

https://github.com/ggerganov/llama.cpp/discussions/9020#discussioncomment-10335638