---
license: llama3
---
Llama 3 70B Instruct quantized to the Q40 format supported by [Distributed Llama](https://github.com/b4rtaz/distributed-llama).
## License
Before downloading this repository, please accept the [Llama 3 Community License](https://llama.meta.com/llama3/license/).
## How to run
1. Clone this repository.
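   For example, using git with LFS (the repository URL below is a placeholder; use the URL shown on this model page):
```sh
git lfs install
git clone https://huggingface.co/<this-repository>
```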
2. Clone Distributed Llama:
```sh
git clone https://github.com/b4rtaz/distributed-llama.git
```
3. Build Distributed Llama:
```sh
cd distributed-llama
make dllama
```
4. Run Distributed Llama:
```sh
sudo nice -n -20 ./dllama inference --model /path/to/dllama_model_llama3-70b-instruct_q40.m --tokenizer /path/to/dllama_tokenizer_llama3.t --weights-float-type q40 --buffer-float-type q80 --prompt "Hello world" --steps 16 --nthreads 4
```
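To split inference across multiple devices, Distributed Llama can also run worker nodes that the root node connects to. The sketch below assumes the `worker`, `--port`, and `--workers` options documented in the Distributed Llama repository at the time of writing, and a worker reachable at the placeholder address 10.0.0.2; check the Distributed Llama README for the current flags.
```sh
# On each worker device (IP and port are placeholders):
sudo nice -n -20 ./dllama worker --port 9998 --nthreads 4

# On the root device, add --workers with the worker addresses:
sudo nice -n -20 ./dllama inference --model /path/to/dllama_model_llama3-70b-instruct_q40.m --tokenizer /path/to/dllama_tokenizer_llama3.t --weights-float-type q40 --buffer-float-type q80 --prompt "Hello world" --steps 16 --nthreads 4 --workers 10.0.0.2:9998
```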
### Chat Template
Please keep in mind that this model expects the prompt to use the Llama 3 chat template.
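For reference, a single-turn prompt in the Llama 3 chat template looks like the following (special tokens as defined by Meta; the system and user messages are only illustrative):
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

Hello world<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```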