---
license: llama2
---

This is an INT4 quantized version of the Llama-2-13b-chat-hf model. The following Python packages were used to create it:

```
onnx==1.16.1
onnxruntime-directml==1.20.0
onnxruntime-genai-directml==0.4.0
torch==2.5.1
transformers==4.45.2
```
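
These pinned versions can be installed in one step; a minimal sketch, assuming a Windows environment (the DirectML packages are Windows-only):

```
pip install onnx==1.16.1 onnxruntime-directml==1.20.0 onnxruntime-genai-directml==0.4.0 torch==2.5.1 transformers==4.45.2
```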

The quantized model was created with the following command:

```
python -m onnxruntime_genai.models.builder -m meta-llama/Llama-2-13b-chat-hf -e dml -p int4 --extra_options int4_block_size=128 -o ./Llama-2-13b-chat-hf-onnx-int4
```
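
Once built, the model can be run with the onnxruntime-genai Python API. The following is a minimal sketch against the 0.4.x API, assuming the output directory above and a Llama-2 chat-style prompt:

```python
import onnxruntime_genai as og

# Load the quantized model produced by the builder command above
model = og.Model("./Llama-2-13b-chat-hf-onnx-int4")
tokenizer = og.Tokenizer(model)
tokenizer_stream = tokenizer.create_stream()

# Llama-2 chat prompt template (example prompt is a placeholder)
prompt = "[INST] What is INT4 quantization? [/INST]"

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = tokenizer.encode(prompt)

generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
    # Stream each newly generated token as text
    print(tokenizer_stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
print()
```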

Under the hood, onnxruntime_genai.models.builder quantizes the model with the MatMul4BitsQuantizer class from onnxruntime/quantization/matmul_4bits_quantizer.py, using its "DEFAULT" algorithm, which replaces MatMul nodes with 4-bit MatMulNBits operators.
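
For illustration, invoking that quantizer directly looks roughly like this (a sketch with placeholder paths; the model builder drives this class internally rather than exposing these calls):

```python
import onnx
from onnxruntime.quantization.matmul_4bits_quantizer import MatMul4BitsQuantizer

# Placeholder path; a 13B model stores its weights as external data
model = onnx.load("Llama-2-13b-chat-hf.onnx", load_external_data=True)

# block_size=128 mirrors the int4_block_size passed to the builder above
quantizer = MatMul4BitsQuantizer(model, block_size=128, is_symmetric=True)
quantizer.process()  # rewrites MatMul nodes as 4-bit MatMulNBits nodes in place

# quantizer.model is an ONNXModel wrapper around the quantized ModelProto
quantizer.model.save_model_to_file(
    "Llama-2-13b-chat-hf-int4.onnx", use_external_data_format=True
)
```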