Commit 610a041 (1 parent: a215768): Update README.md
Committed by starble-dev
README.md CHANGED
@@ -39,7 +39,7 @@ Comparisons are done as QX_X Llama-3-8B against FP16 Llama-3-8B, recommended as
| [Q3_K_S](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q3_K_S.gguf) | +1.6321 ppl @ Llama-3-8B | 5.53 GB |
| [Q3_K_M](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q3_K_M.gguf) | +0.6569 ppl @ Llama-3-8B | 6.08 GB |
| [Q3_K_L](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q3_K_L.gguf) | +0.5562 ppl @ Llama-3-8B | 6.56 GB |
-| [Q4_K_S](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q4_K_S.gguf) | +0.
+| [Q4_K_S](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q4_K_S.gguf) | +0.2689 ppl @ Llama-3-8B | 7.12 GB |
| [Q4_K_M](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q4_K_M.gguf) | +0.1754 ppl @ Llama-3-8B | 7.48 GB |
| [Q5_K_S](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q5_K_S.gguf) | +0.1049 ppl @ Llama-3-8B | 8.52 GB |
| [Q5_K_M](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q5_K_M.gguf) | +0.0569 ppl @ Llama-3-8B | 8.73 GB |
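The rows above list ready-to-download GGUF quantizations of Nemo-12B-Marlin-v5, each with its perplexity delta against FP16 Llama-3-8B and its file size. As a minimal sketch of how one of these files could be fetched and loaded, assuming the `huggingface_hub` and `llama-cpp-python` packages; the chosen quant and all parameters below are illustrative assumptions, not instructions from this model card:

```python
# Illustrative sketch (not part of the original README): download one of the
# quantized files listed in the table and load it with llama-cpp-python.
# The chosen file (Q4_K_M) and the n_ctx / n_gpu_layers values are assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quant (+0.1754 ppl @ Llama-3-8B, 7.48 GB) from the repo above.
model_path = hf_hub_download(
    repo_id="starble-dev/Nemo-12B-Marlin-v5-GGUF",
    filename="Nemo-12B-Marlin-v5-Q4_K_M.gguf",
)

# Load the GGUF file; offload all layers to GPU if available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Simple completion call to confirm the model loads and generates.
result = llm("Hello, world!", max_tokens=32)
print(result["choices"][0]["text"])
```

In general, the lower quants (Q3_K_*) trade a larger perplexity increase for smaller files, while Q5_K_M stays closest to the FP16 reference at the cost of size.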