|
--- |
|
license: apache-2.0 |
|
tags: |
|
- mistral |
|
- conversational |
|
- text-generation-inference |
|
base_model: UsernameJustAnother/Nemo-12B-Marlin-v5 |
|
library_name: transformers |
|
--- |
|
|
|
> [!WARNING] |
|
> **General Use Sampling:**<br> |
|
> Mistral-Nemo-12B is very sensitive to temperature; start with values near **0.3** or you will get strange results. MistralAI mentions this in the [Transformers](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407#transformers) section of their model card.
|
|
|
> [!NOTE] |
|
> **Best Samplers:**<br> |
|
> I found the best results using the following settings for Nemo-12B-Marlin-v5:<br>
|
> Temperature: `0.7`-`0.8`<br> |
|
> Top K: `-1`<br> |
|
> Min P: `0.05`<br> |
|
> Rep Penalty: `1.03` (note: it is worth raising this as context length grows; I find `1.10` works well at 16k+ context; a sketch applying these settings follows this note)
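
As a concrete reference, here is a minimal sketch of those settings using the `llama-cpp-python` bindings. The bindings, model path, and prompt are my assumptions; the card itself only links the llama.cpp CLI, which exposes the same samplers as flags:

```python
# Minimal sketch of the recommended samplers via llama-cpp-python
# (assumes `pip install llama-cpp-python` and a local GGUF from this repo).
from llama_cpp import Llama

llm = Llama(
    model_path="Nemo-12B-Marlin-v5-Q4_K_M.gguf",  # any quant from the table below
    n_ctx=16384,  # long-context run, so pair it with the 1.10 repeat penalty
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short scene set on a fishing boat."}],
    temperature=0.75,     # middle of the 0.7-0.8 range above
    top_k=0,              # top_k <= 0 disables the sampler (the "-1" setting above)
    min_p=0.05,
    repeat_penalty=1.10,  # 1.03 at short context, 1.10 at 16k+
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

If the GGUF carries a chat template in its metadata, recent llama-cpp-python builds should pick it up automatically; otherwise pass `chat_format` explicitly.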
|
|
|
This is currently my favorite Mistral-Nemo finetune.
|
|
|
**Original Model:** [UsernameJustAnother/Nemo-12B-Marlin-v5](https://huggingface.co/UsernameJustAnother/Nemo-12B-Marlin-v5) (Thank you so much for your work ♥) |
|
|
|
**How to Use:** [llama.cpp](https://github.com/ggerganov/llama.cpp) (see the download sketch below)
|
|
|
**Original Model License:** Apache 2.0 |
|
|
|
**Release Used:** [b3538](https://github.com/ggerganov/llama.cpp/releases/tag/b3538) |
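
If you want to script the download instead of grabbing a file by hand, here is a minimal sketch using `huggingface_hub` (the package and filename choice are my assumptions; any quant from the table below works):

```python
# Sketch: fetch one quant from this repo, then hand the local path to llama.cpp.
# Assumes `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="starble-dev/Nemo-12B-Marlin-v5-GGUF",
    filename="Nemo-12B-Marlin-v5-Q4_K_M.gguf",  # swap in any quant from the table
)
print(gguf_path)  # pass this path to llama.cpp, or to the Python sketch above
```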
|
|
|
# Quants |
|
PPL = Perplexity, lower is better<br> |
|
Comparisons are between each QX_X quant of Llama-3-8B and FP16 Llama-3-8B, so treat them as a guideline rather than exact figures for this model; a short example of how to read the deltas follows the table.
|
| Quant Type | Note | Size | |
|
| ---- | ---- | ---- | |
|
| [Q2_K](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q2_K.gguf) | +3.5199 ppl @ Llama-3-8B | 4.79 GB | |
|
| [Q3_K_S](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q3_K_S.gguf) | +1.6321 ppl @ Llama-3-8B | 5.53 GB | |
|
| [Q3_K_M](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q3_K_M.gguf) | +0.6569 ppl @ Llama-3-8B | 6.08 GB | |
|
| [Q3_K_L](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q3_K_L.gguf) | +0.5562 ppl @ Llama-3-8B | 6.56 GB | |
|
| [Q4_K_S](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q4_K_S.gguf) | +0.2689 ppl @ Llama-3-8B | 7.12 GB | |
|
| [Q4_K_M](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q4_K_M.gguf) | +0.1754 ppl @ Llama-3-8B | 7.48 GB | |
|
| [Q5_K_S](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q5_K_S.gguf) | +0.1049 ppl @ Llama-3-8B | 8.52 GB | |
|
| [Q5_K_M](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q5_K_M.gguf) | +0.0569 ppl @ Llama-3-8B | 8.73 GB | |
|
| [Q6_K](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q6_K.gguf) | +0.0217 ppl @ Llama-3-8B | 10.1 GB | |
|
| [Q8_0](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q8_0.gguf) | +0.0026 ppl @ Llama-3-8B | 13.00 GB | |
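
To make the deltas concrete: perplexity is the exponential of the mean negative log-likelihood, and each row above reports `ppl(quant) - ppl(FP16)`. A tiny sketch (the baseline value is made up for illustration):

```python
import math

# Perplexity = exp(mean negative log-likelihood over the scored tokens).
def perplexity(token_nlls: list[float]) -> float:
    return math.exp(sum(token_nlls) / len(token_nlls))

# The table reports offsets from the FP16 baseline, not absolute values.
fp16_ppl = 6.2                   # hypothetical FP16 baseline, for illustration only
q4_k_m_ppl = fp16_ppl + 0.1754   # Q4_K_M row: +0.1754 ppl
print(round(q4_k_m_ppl, 4))
```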