This model is part of the AQLM+PV collection: official AQLM quantizations for "PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression" (https://arxiv.org/abs/2405.14852).
Official AQLM quantization of meta-llama/Llama-3.2-1B, fine-tuned with PV-Tuning. This quantization uses 2 codebooks of 8 bits each and a group size of 8 (the "2x8g8" configuration in the results below).
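As a minimal usage sketch (not the authors' official instructions), an AQLM checkpoint like this one can be loaded through transformers with the `aqlm` inference kernels installed (`pip install aqlm[gpu]`); the repository ID below is a placeholder:

```python
# Minimal sketch: loading an AQLM-quantized checkpoint with transformers.
# Assumes transformers >= 4.38 (native AQLM support), the `aqlm` package,
# and `accelerate` for device_map. The repo ID is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "<this-repo-id>"  # placeholder: substitute the actual model repository

model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",  # quantized layers run through AQLM kernels; the rest stays fp16/bf16
    device_map="auto",   # place layers on the available GPU(s)
)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```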
Results:
| Model | Quantization | MMLU (5-shot) | ArcC | ArcE | HellaSwag | PiQA | Winogrande | Model size, GB |
|---|---|---|---|---|---|---|---|---|
| meta-llama/Llama-3.2-1B | fp16 | 0.3195 | 0.3123 | 0.6553 | 0.4772 | 0.7448 | 0.6054 | 2.5 |
| meta-llama/Llama-3.2-1B | 2x8g8 | 0.2465 | 0.2713 | 0.5896 | 0.4034 | 0.7067 | 0.5564 | 0.8 |
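The benchmark names above match tasks in EleutherAI's lm-evaluation-harness. A hedged reproduction sketch follows; the authors' exact evaluation settings are an assumption (the table labels MMLU as 5-shot, and the remaining tasks are assumed zero-shot here):

```python
# Hedged sketch: reproducing the table above with lm-evaluation-harness
# (pip install lm-eval). The repo ID is a placeholder.
import lm_eval

repo_id = "<this-repo-id>"  # placeholder: substitute the actual model repository

# MMLU, 5-shot (as reported in the table)
mmlu = lm_eval.simple_evaluate(
    model="hf",
    model_args=f"pretrained={repo_id},dtype=auto",
    tasks=["mmlu"],
    num_fewshot=5,
)

# ArcC / ArcE / HellaSwag / PiQA / Winogrande, assumed zero-shot
rest = lm_eval.simple_evaluate(
    model="hf",
    model_args=f"pretrained={repo_id},dtype=auto",
    tasks=["arc_challenge", "arc_easy", "hellaswag", "piqa", "winogrande"],
)

for results in (mmlu, rest):
    for task, metrics in results["results"].items():
        print(task, metrics)
```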
Base model: meta-llama/Llama-3.2-1B