Official AQLM quantization of meta-llama/Llama-2-7b-hf.
For this quantization, we used 1 codebook of 16 bits (the 1x16 scheme).
Selected evaluation results for this and other models:
| Model | AQLM scheme | WikiText-2 PPL | Model size, GB | Hub link |
|---|---|---|---|---|
| Llama-2-7b (THIS) | 1x16 | 6.31 | 2.4 | Link |
| Llama-2-7b | 2x8 | 7.98 | 2.2 | Link |
| Llama-2-7b | 8x8 | 7.83 | 2.2 | Link |
| Llama-2-13b | 1x16 | 5.41 | 4.1 | Link |
| Llama-2-70b | 1x16 | 3.96 | 18.8 | Link |
| Mixtral-8x7b | 1x16 | 4.37 | 12.6 | Link |
To learn more about inference and how to quantize models yourself, please refer to the official GitHub repo.
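As a quick sketch, running inference with an AQLM-quantized checkpoint follows the standard transformers workflow once the `aqlm` package is installed (assuming a recent `transformers` release with AQLM support and something like `pip install aqlm[gpu]`); the repository id below is a placeholder, substitute this model's Hub link from the table above:

```python
# Minimal inference sketch; assumes `aqlm` is installed and a CUDA GPU is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with the actual Hub link for this model.
model_id = "ISTA-DASLab/Llama-2-7b-AQLM-2Bit-1x16-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the dtype stored in the checkpoint
    device_map="auto",    # place the quantized weights on the available GPU(s)
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```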