Meta's LLaMA 7B - AWQ GGUF

These files are in GGUF format.
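A quick way to sanity-check that a downloaded file really is GGUF: the format starts with the 4-byte magic `GGUF` followed by a little-endian uint32 version. A minimal Python sketch (the helper name `looks_like_gguf` is our own, not part of llama.cpp):

```python
import struct

def looks_like_gguf(path):
    # Every GGUF file begins with the magic bytes b"GGUF",
    # then a little-endian uint32 format version (>= 1).
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            return False
        version = struct.unpack("<I", f.read(4))[0]
        return version >= 1

# Illustration: write a tiny file with a valid GGUF header and check it.
with open("dummy.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))

print(looks_like_gguf("dummy.gguf"))
```

This only validates the header, not the tensor data, but it catches truncated or mislabeled downloads early.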

The model was converted to GGUF with llama.cpp and quantized using the AWQ (Activation-aware Weight Quantization) method.

How to use this model with llama.cpp:

```sh
./main -m ggml-model-q4_0-awq.gguf -n 128 --prompt "Once upon a time"
```

Please refer to the instructions in the PR for further details.
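For reference, a fuller invocation with a few common llama.cpp generation options (the flag values here are illustrative, and `-ngl` only takes effect in GPU-enabled builds):

```shell
# Run the AWQ-quantized model with common options:
#   -m    path to the GGUF model file
#   -n    number of tokens to generate
#   -c    context window size
#   -t    number of CPU threads
#   -ngl  number of layers to offload to the GPU (GPU builds only)
./main -m ggml-model-q4_0-awq.gguf \
       -n 256 -c 2048 -t 8 -ngl 32 \
       --prompt "Once upon a time"
```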

Format: GGUF
Model size: 6.74B params
Architecture: llama
