Update README.md
README.md CHANGED
@@ -14,6 +14,14 @@ This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com
 * [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP)
 * [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
 
+## ⚡ Quantized models
+
+Thanks to TheBloke for the quantized models:
+
+* GGUF: https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GGUF
+* AWQ: https://huggingface.co/TheBloke/Beyonder-4x7B-v2-AWQ
+* GPTQ: https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GPTQ
+
 ## 🏆 Evaluation
 
 Beyonder-4x7B-v2 is competitive with Mixtral-8x7B-Instruct-v0.1 on the Open LLM Leaderboard, while only having 4 experts instead of 8.
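For context, the GGUF repo added above can be used directly from Python. Below is a minimal sketch using `llama-cpp-python`; the quant filename (`Q4_K_M`), context size, and prompt are assumptions for illustration, not part of this commit, so adjust the filename to one that actually exists in the repo.

```python
# Minimal sketch: running the GGUF quant with llama-cpp-python.
# Assumption: the Q4_K_M filename below follows TheBloke's usual naming;
# check the repo's file list and substitute the real filename if it differs.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from the GGUF repo linked in the diff above.
model_path = hf_hub_download(
    repo_id="TheBloke/Beyonder-4x7B-v2-GGUF",
    filename="beyonder-4x7b-v2.Q4_K_M.gguf",  # assumed filename
)

# Load the model; n_ctx is an illustrative default, not a tuned setting.
llm = Llama(model_path=model_path, n_ctx=2048)

out = llm("Explain what a Mixture of Experts model is.", max_tokens=128)
print(out["choices"][0]["text"])
```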