---
inference: false
license: other
model_type: llama
---
# Meta's LLaMA 7B - AWQ GGUF
These files are in GGUF format.
- Model creator: [Meta](https://huggingface.co/none)
- Original model: [LLaMA 7B](https://ai.meta.com/blog/large-language-model-llama-meta-ai)
The model was converted to GGUF using [llama.cpp](https://github.com/ggerganov/llama.cpp) combined with the [AWQ](https://github.com/mit-han-lab/llm-awq) quantization method.
## How to use models in `llama.cpp`
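If you do not already have a `llama.cpp` build available, here is a minimal sketch of cloning and compiling it (CPU-only build; assumes `git`, `make`, and a C/C++ toolchain are installed — see the repository README for GPU build options):
```
# Clone the llama.cpp repository and build the default CPU binaries
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```
Once built, run the model from the repository root: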
```
# Generate 128 tokens from the AWQ-quantized GGUF model
./main -m ggml-model-q4_0-awq.gguf -n 128 --prompt "Once upon a time"
```
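The same binary accepts further options. A hedged example, assuming the binary was built with GPU offloading enabled (the `-ngl` value, context size, and token count are illustrative, not recommendations):
```
# Offload 32 layers to the GPU, use a 2048-token context,
# and generate 256 tokens from the prompt
./main -m ggml-model-q4_0-awq.gguf -ngl 32 -c 2048 -n 256 --prompt "Once upon a time"
```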
For details on the AWQ conversion workflow, please refer to the instructions in [PR #4593](https://github.com/ggerganov/llama.cpp/pull/4593).