Update README.md
README.md
CHANGED
@@ -32,7 +32,7 @@ Let us know what you think of the model! The 8B and 12B versions of RPMax had gr
 
 The model is available in quantized formats:
 
-We recommend using full weights or GPTQ
+We recommend using full weights or GPTQ. GGUF provided by https://huggingface.co/mradermacher/Llama-3.1-70B-ArliAI-RPMax-v1.1-GGUF
 
 * **FP16**: https://huggingface.co/ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.1
 * **GPTQ_Q4**: https://huggingface.co/ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.1-GPTQ_Q4