Commit 653974e by InferenceIllusionist (parent: fc3c9aa): Update README.md
README.md CHANGED
@@ -13,7 +13,7 @@ license: apache-2.0
 # mini-magnum-12b-v1.1-iMat-GGUF
 
 > [!WARNING]
-><b>Important Note:</b> Support for inferencing this model in llama.cpp has now been merged in [PR #8604](https://github.com/ggerganov/llama.cpp/pull/8604). Please ensure you are on release [b3438](https://github.com/ggerganov/llama.cpp/releases/tag/b3438) or newer. Text-generation-web-ui (Ooba) is also working as of 7/23.
+><b>Important Note:</b> Support for inferencing this model in llama.cpp has now been merged in [PR #8604](https://github.com/ggerganov/llama.cpp/pull/8604). Please ensure you are on release [b3438](https://github.com/ggerganov/llama.cpp/releases/tag/b3438) or newer. Text-generation-web-ui (Ooba) is also working as of 7/23. Kobold.cpp is working as of [v1.71](https://github.com/LostRuins/koboldcpp/releases/tag/v1.71).
 
 Quantized from mini-magnum-12b-v1.1 fp16
 * Weighted quantizations were created using the fp16 GGUF and groups_merged.txt in 92 chunks with n_ctx=512
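
As a rough illustration of the weighted (importance-matrix) quantization step described in the README, a run with llama.cpp's tools might look like the sketch below. The binary names (`llama-imatrix`, `llama-quantize`) match llama.cpp releases around b3438; the model and output file names are placeholders, not the exact files or commands used for this repo.

```sh
# Sketch of an importance-matrix ("iMat") quantization pass with llama.cpp.
# File names are placeholders; the calibration settings mirror the README
# (groups_merged.txt, 92 chunks, n_ctx=512).

# 1. Compute the importance matrix from the fp16 GGUF.
./llama-imatrix \
    -m mini-magnum-12b-v1.1-f16.gguf \
    -f groups_merged.txt \
    -o imatrix.dat \
    --chunks 92 -c 512

# 2. Quantize with the importance matrix applied (IQ4_XS shown as an example).
./llama-quantize --imatrix imatrix.dat \
    mini-magnum-12b-v1.1-f16.gguf \
    mini-magnum-12b-v1.1-IQ4_XS.gguf IQ4_XS
```

The resulting GGUF can then be loaded by any llama.cpp build at or above the release noted in the warning above.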