Update README.md
README.md CHANGED
@@ -247,6 +247,8 @@ extra_gated_button_content: Submit
 
 ## Llamacpp imatrix Quantizations of Ministral-8B-Instruct-2410
 
+# This is based on the officially merged safetensors for Ministral, however there may still be changes required to llama.cpp for full performance
+
 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3930">b3930</a> for quantization.
 
 Original model: https://huggingface.co/mistralai/Ministral-8B-Instruct-2410
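For context on what "using llama.cpp release b3930 for quantization" involves, below is a minimal sketch of the standard llama.cpp imatrix quantization workflow. It is illustrative only: the model directory, calibration file, output names, and the `Q4_K_M` quant type are placeholders, and the exact commands and calibration data used for this repository are not part of this diff.

```sh
# Convert the original safetensors checkpoint to a full-precision GGUF file.
python convert_hf_to_gguf.py ./Ministral-8B-Instruct-2410 \
  --outtype f16 --outfile Ministral-8B-Instruct-2410-f16.gguf

# Compute an importance matrix (imatrix) over a calibration text file.
./llama-imatrix -m Ministral-8B-Instruct-2410-f16.gguf \
  -f calibration.txt -o imatrix.dat

# Quantize using the imatrix; Q4_K_M is shown as an example quant type.
./llama-quantize --imatrix imatrix.dat \
  Ministral-8B-Instruct-2410-f16.gguf \
  Ministral-8B-Instruct-2410-Q4_K_M.gguf Q4_K_M
```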