Model Information
Quantized version of meta-llama/Llama-3.1-8B-Instruct, with quantization tuning performed in torch.float32.
- 4 bits (INT4)
- group size = 128
- Asymmetric quantization
- Export format: AutoGPTQ
Quantization framework: Intel AutoRound
Note: this INT4 version of Llama-3.1-8B-Instruct has been quantized to run inference on CPU.
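As a rough sketch of how this checkpoint can be loaded for CPU inference (assuming a transformers/optimum/auto-gptq stack with CPU kernels available; the prompt and generation settings are purely illustrative):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fbaldassarri/meta-llama_Llama-3.1-8B-Instruct-auto_gptq-int4-gs128-asym"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="cpu" keeps every layer on the CPU, matching the note above
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cpu")

messages = [{"role": "user", "content": "Explain INT4 weight quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))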
Replication Recipe
Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or conda environment. Each pinned package below can be installed individually with the command that follows; a combined one-shot command is shown after the list.
python -m pip install <package> --upgrade
- accelerate==1.0.1
- auto_gptq==0.7.1
- neural_compressor==3.1
- torch==2.3.0+cpu
- torchaudio==2.5.0+cpu
- torchvision==0.18.0+cpu
- transformers==4.45.2
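For example, all pins can be installed in a single command (a sketch; the extra index URL for the +cpu wheels is an assumption about where those builds come from):

python -m pip install --upgrade \
    accelerate==1.0.1 auto_gptq==0.7.1 neural_compressor==3.1 \
    torch==2.3.0+cpu torchaudio==2.5.0+cpu torchvision==0.18.0+cpu \
    transformers==4.45.2 \
    --extra-index-url https://download.pytorch.org/whl/cpu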
Step 2 Build Intel AutoRound wheel from source
python -m pip install git+https://github.com/intel/auto-round.git
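A quick smoke test confirms the source build is importable (assuming the package exposes __version__, which current auto-round releases do):

python -c "import auto_round; print(auto_round.__version__)"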
Step 3 Script for Quantization
from auto_round import AutoRound
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# INT4, group size 128, asymmetric quantization (sym=False)
bits, group_size, sym = 4, 128, False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512,
                      batch_size=4, bits=bits, group_size=group_size, sym=sym)
autoround.quantize()

# Export the quantized model in AutoGPTQ format
output_dir = "./AutoRound/meta-llama_Llama-3.1-8B-Instruct-auto_gptq-int4-gs128-asym"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
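As a minimal sanity check (a sketch, assuming the export above completed and the same CPU inference stack described earlier), the saved checkpoint can be reloaded directly from output_dir:

from transformers import AutoModelForCausalLM, AutoTokenizer

output_dir = "./AutoRound/meta-llama_Llama-3.1-8B-Instruct-auto_gptq-int4-gs128-asym"
tokenizer = AutoTokenizer.from_pretrained(output_dir)
model = AutoModelForCausalLM.from_pretrained(output_dir, device_map="cpu")
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))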
License
This model inherits the Llama 3.1 Community License from the base model meta-llama/Llama-3.1-8B-Instruct.
Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.