---
inference: false
license: other
model_type: llama
---

# Meta's LLaMA 7B - AWQ GGUF

These files are in GGUF format.
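A quick way to sanity-check that a downloaded file really is GGUF is to look at its first four bytes, which are the ASCII magic `GGUF`. Below is a minimal sketch; the filename `demo.gguf` is a hypothetical stand-in created just for the demonstration, not one of the files in this repo.

```python
import struct

GGUF_MAGIC = b"GGUF"  # every GGUF file begins with these 4 bytes

def is_gguf(path: str) -> bool:
    """Return True if the file at `path` starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC

# Demo with a synthetic stand-in file (hypothetical name):
with open("demo.gguf", "wb") as f:
    f.write(GGUF_MAGIC + struct.pack("<I", 3))  # magic followed by a version field

print(is_gguf("demo.gguf"))  # → True
```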

The model was converted with llama.cpp using the AWQ quantization method.

## How to use the models in llama.cpp

```
./main -m ggml-model-q4_0-awq.gguf -n 128 --prompt "Once upon a time"
```

For more details, please refer to the instructions in the PR.