# Model Card for Mistral-Nemo-Instruct-2407
[This is the **6bpw** EXL2 quant of the updated Mistral Nemo (as of 07/31). Find the original model here.](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)
Not sure how much this differs from the Unsloth version (it didn't look like Unsloth had theirs updated yet, so I've quanted the one produced by Mistral here). The commits indicate the chat template was fixed, along with some fixes to the eos token and `clean_up_tokenization_spaces`. So if you had issues with Nemo previously, this may fix some of them. May.
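
If you want to check those tokenizer-side changes yourself, here's a minimal sketch using the standard `transformers` tokenizer API (pointed at the original repo, or at a local copy of this quant's folder); it just prints the eos token, the `clean_up_tokenization_spaces` flag, and the rendered chat template:

```python
from transformers import AutoTokenizer

# Load the updated tokenizer/config files; a recent transformers release is
# needed for the Mistral-Nemo (Tekken) tokenizer. You can also point this at
# a local download of this quant, since the tokenizer files ship with it.
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-Nemo-Instruct-2407")

# The settings the 07/31 commits reportedly touched:
print(tok.eos_token, tok.eos_token_id)
print(tok.clean_up_tokenization_spaces)

# Render the (fixed) chat template to see the exact prompt format.
messages = [{"role": "user", "content": "Hello, who are you?"}]
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```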
<br>
[8BPW Quant here](https://huggingface.co/Statuo/Mistral-Nemo-Instruct-2407-EXL2-8bpw)
<br>
[6bpw Quant here](https://huggingface.co/Statuo/Mistral-Nemo-Instruct-2407-EXL2-6bpw)
<br>
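
If you're new to EXL2, loading this quant looks roughly like the sketch below. It follows the example scripts in the exllamav2 repo; class and method names may shift between versions, and the local path is a placeholder, so treat it as a starting point rather than a verified recipe:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point at a local download of this repo (placeholder path).
config = ExLlamaV2Config()
config.model_dir = "/models/Mistral-Nemo-Instruct-2407-EXL2-6bpw"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.3  # Mistral's own card suggests lower temperatures for Nemo

# Mistral-style instruct prompt; check the chat template in the tokenizer
# config (see the sketch above) for the exact format.
prompt = "[INST] Write a haiku about GPUs. [/INST]"
print(generator.generate_simple(prompt, settings, 200))
```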
---
The Mistral-Nemo-Instruct-2407 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407). Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models smaller or similar in size.
For more details about this model, please refer to our release [blog post](https://mistral.ai/news/mistral-nemo/).