
Model Details

This is meta-llama/Meta-Llama-3.1-8B-Instruct quantized to 4-bit with AutoAWQ and published as kaitchup/Meta-Llama-3.1-8B-Instruct-awq-4bit. The model was created, tested, and evaluated by The Kaitchup.

Details on the quantization process, the evaluation, and how to use the model are available here: The Best Quantization Methods to Run Llama 3.1 on Your GPU

  • Developed by: The Kaitchup
  • Language(s) (NLP): English
  • License: cc-by-4.0
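
A minimal sketch of loading and chatting with the quantized model via Hugging Face Transformers. This assumes `transformers` and `autoawq` are installed and a CUDA GPU is available for the AWQ kernels; the prompt text is illustrative, not from the card.

```python
# Minimal sketch: load the AWQ 4-bit model with Transformers.
# Assumes the `autoawq` package is installed and a CUDA GPU is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaitchup/Meta-Llama-3.1-8B-Instruct-awq-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the quantized weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 3.1 Instruct expects the chat template, not raw text.
messages = [{"role": "user", "content": "Hello!"}]  # example prompt (assumption)
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are AWQ-packed, the safetensors checkpoint reports ~1.98B parameters (INT4 weights packed into I32 tensors) even though the underlying model is 8B.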
  • Format: Safetensors
  • Model size: 1.98B params
  • Tensor types: FP16 · I32