base model reference added
README.md CHANGED
@@ -1,3 +1,7 @@
+---
+base_model:
+- mistralai/Ministral-8B-Instruct-2410
+---
 This is the 8-bit quantized model of [Ministral-8B](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) by Mistral AI. Please follow these instructions to run the model on your device:
 
 There are multiple ways to run inference on the model. First, let's install `llama.cpp` and use it for inference:
@@ -54,5 +58,4 @@ output = llm(
     echo=False, # Whether to echo the prompt
 )
 print(output)
-```
-
+```
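The `output = llm(...)` context in the second hunk comes from the llama-cpp-python bindings. As a minimal sketch of the prompt side, the helper below wraps a user message in the `[INST] … [/INST]` instruct template that Mistral's instruct models conventionally use — treating that template as the right one for Ministral-8B-Instruct-2410 is an assumption here; check the repository's chat template before relying on it:

```python
# Minimal sketch: building an instruct-style prompt string to pass to
# llama-cpp-python's Llama instance (the `llm` object in the diff above).
# The [INST] ... [/INST] wrapping is an assumption based on Mistral's usual
# instruct convention; verify it against this repo's chat template.
def build_prompt(user_message: str) -> str:
    """Wrap a user message in a Mistral-style instruct template."""
    return f"<s>[INST] {user_message} [/INST]"

prompt = build_prompt("Name the planets in the solar system.")
print(prompt)
```

The resulting string would then be passed as the first positional argument of `llm(prompt, ...)`, alongside sampling parameters such as the `echo=False` shown in the hunk.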