---
base_model:
- epfl-llm/meditron-7b
---
This model is the 3-bit quantized (Q3_K_M, GGUF) version of [Meditron-7b](https://huggingface.co/epfl-llm/meditron-7b). Please follow the instructions below to run the model on your device.

There are multiple ways to run inference with the model. First, let's install `llama.cpp` and use its CLI for inference.
1. Install
```
git clone https://github.com/ggerganov/llama.cpp
mkdir llama.cpp/build && cd llama.cpp/build && cmake .. && cmake --build . --config Release
```
2. Inference
```
./llama.cpp/build/bin/llama-cli -m ./meditron-7b_Q3_K_M.gguf -cnv -p "You are a helpful assistant"
```
Here, you can interact with the model directly from your terminal.
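If you just want a one-off answer instead of an interactive chat, you can drop the `-cnv` flag and pass the prompt directly (a sketch; `-n` caps the number of generated tokens):
```
./llama.cpp/build/bin/llama-cli -m ./meditron-7b_Q3_K_M.gguf -p "What should I do when my eyes are dry?" -n 256
```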
**Alternatively**, we can use the Python bindings of `llama.cpp` (`llama-cpp-python`) to run the model on either CPU or GPU.
1. Install
```
pip install --no-cache-dir llama-cpp-python==0.2.85 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu122
```
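The command above pulls a CUDA 12.2 wheel; if you only plan to run on CPU, installing the default CPU build should be sufficient (a sketch, assuming no GPU acceleration is needed):
```
pip install --no-cache-dir llama-cpp-python==0.2.85
```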
2. Inference on CPU
```
from llama_cpp import Llama

model_path = "./meditron-7b_Q3_K_M.gguf"

# Load the GGUF model; n_threads sets the number of CPU threads used for generation.
llm = Llama(model_path=model_path, n_threads=8, verbose=False)

prompt = "What should I do when my eyes are dry?"
output = llm(
    prompt=f"<|user|>\n{prompt}<|end|>\n<|assistant|>",
    max_tokens=4096,   # Maximum number of tokens to generate
    stop=["<|end|>"],  # Stop generating at the end-of-turn token
    echo=False,        # Whether to echo the prompt in the output
)
print(output)
```
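The call returns an OpenAI-style completion dictionary; if you only need the generated text rather than the full metadata, you can index into `choices` (a minimal sketch, reusing the `output` object from above):
```
# Print only the generated completion text.
print(output["choices"][0]["text"])
```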
3. Inference on GPU
```
from llama_cpp import Llama

model_path = "./meditron-7b_Q3_K_M.gguf"

# n_gpu_layers=-1 offloads all model layers to the GPU.
llm = Llama(model_path=model_path, n_threads=8, n_gpu_layers=-1, verbose=False)

prompt = "What should I do when my eyes are dry?"
output = llm(
    prompt=f"<|user|>\n{prompt}<|end|>\n<|assistant|>",
    max_tokens=4096,   # Maximum number of tokens to generate
    stop=["<|end|>"],  # Stop generating at the end-of-turn token
    echo=False,        # Whether to echo the prompt in the output
)
print(output)
```
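If you want tokens to appear as they are generated, `llama-cpp-python` also supports streaming; a minimal sketch using the same `llm` handle and prompt template as above:
```
# Stream the completion chunk by chunk instead of waiting for the full output.
for chunk in llm(
    prompt=f"<|user|>\n{prompt}<|end|>\n<|assistant|>",
    max_tokens=4096,
    stop=["<|end|>"],
    stream=True,
):
    print(chunk["choices"][0]["text"], end="", flush=True)
```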