---
base_model:
- mistralai/Ministral-8B-Instruct-2410
---
This is the 8-bit (Q8_0) quantized GGUF version of [Ministral-8B](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) by Mistral AI. Please follow the instructions below to run the model on your device:

There are multiple ways to run inference with the model. First, let's build `llama.cpp` from source and use it directly:

1. Install
```
git clone https://github.com/ggerganov/llama.cpp
cmake -S llama.cpp -B llama.cpp/build
cmake --build llama.cpp/build --config Release
```

2. Inference
```
./llama.cpp/build/bin/llama-cli -m ./ministral-8b_Q8_0.gguf -cnv -p "You are a helpful assistant"
```

Here, you can interact with the model directly from your terminal.
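
If you prefer an HTTP endpoint over the interactive CLI, `llama.cpp` also ships `llama-server`, which serves the model behind an OpenAI-compatible API (a minimal sketch, assuming the default host and port):
```
./llama.cpp/build/bin/llama-server -m ./ministral-8b_Q8_0.gguf --port 8080
```
You can then send requests to `http://localhost:8080/v1/chat/completions` from any OpenAI-compatible client.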


**Alternatively**, we can use the Python bindings for `llama.cpp` (`llama-cpp-python`) to run the model on either the CPU or the GPU.
1. Install
```
pip install --no-cache-dir llama-cpp-python==0.2.85 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu122
```
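
If you only want CPU inference, the default PyPI build (without the CUDA wheel index) is sufficient:
```
pip install llama-cpp-python==0.2.85
```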

2. Inference on CPU
```
from llama_cpp import Llama

model_path = "./ministral-8b_Q8_0.gguf"
llm = Llama(model_path=model_path, n_threads=8, verbose=False)

prompt = "What should I do when my eyes are dry?"
output = llm(
    # Ministral is a Mistral instruct model, so use its [INST] ... [/INST] prompt format
    prompt=f"[INST] {prompt} [/INST]",
    max_tokens=4096,
    stop=["</s>"],
    echo=False,  # do not repeat the prompt in the output
)
print(output["choices"][0]["text"])
```
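
Instead of hand-writing the prompt format, you can also let `llama-cpp-python` apply the chat template shipped inside the GGUF via `create_chat_completion` (a minimal sketch, reusing the `llm` object from the example above):
```
messages = [{"role": "user", "content": "What should I do when my eyes are dry?"}]
response = llm.create_chat_completion(messages=messages, max_tokens=512)
print(response["choices"][0]["message"]["content"])
```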

3. Inference on GPU
```
from llama_cpp import Llama

model_path = "./ministral-8b_Q8_0.gguf"
# n_gpu_layers=-1 offloads all layers to the GPU
llm = Llama(model_path=model_path, n_threads=8, n_gpu_layers=-1, verbose=False)

prompt = "What should I do when my eyes are dry?"
output = llm(
    # Same Mistral instruct prompt format as the CPU example
    prompt=f"[INST] {prompt} [/INST]",
    max_tokens=4096,
    stop=["</s>"],
    echo=False,  # do not repeat the prompt in the output
)
print(output["choices"][0]["text"])
```
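
For long answers, you may want to stream tokens as they are generated instead of waiting for the full completion. Both the CPU and GPU setups support this through `stream=True` (a minimal sketch, reusing `llm` and the prompt format above):
```
stream = llm(
    prompt=f"[INST] {prompt} [/INST]",
    max_tokens=4096,
    stop=["</s>"],
    stream=True,  # yields partial completions instead of one final dict
)
for chunk in stream:
    print(chunk["choices"][0]["text"], end="", flush=True)
print()
```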