---
license: llama3.2
---

ONNX version of the [Llama 3.2 Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B) model, quantized for on-device inference on the [Snapdragon 8 Elite](https://www.qualcomm.com/products/mobile/snapdragon/smartphones/snapdragon-8-series-mobile-platforms/snapdragon-8-elite-mobile-platform) NPU.
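
Below is a minimal sketch of how a model like this might be loaded with ONNX Runtime's QNN execution provider (distributed in the `onnxruntime-qnn` package), which targets the Snapdragon NPU. The model filename, the backend library path, and the exact input names are assumptions for illustration, not details taken from this repository; adapt them to the files actually shipped here and to your device setup.

```python
# Minimal sketch: running the quantized ONNX model through ONNX Runtime's
# QNN execution provider. Filenames and paths below are placeholders.
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",  # hypothetical filename; use the .onnx file from this repo
    providers=["QNNExecutionProvider", "CPUExecutionProvider"],
    provider_options=[
        {"backend_path": "QnnHtp.dll"},  # HTP (NPU) backend; libQnnHtp.so on Android/Linux
        {},  # no options for the CPU fallback provider
    ],
)

# Inspect the graph inputs the exported model expects (for a decoder-only LLM,
# typically input_ids, attention_mask, and past key/value tensors).
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)
```

The CPU execution provider is listed as a fallback so that any operators the NPU backend cannot handle still run, at the cost of slower execution for those nodes.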