Adding Evaluation Results #10
by dragonSwing - opened

README.md CHANGED
@@ -201,3 +201,17 @@ print(outputs[0]["generated_text"])
 <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
 </a>
 </p>
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__NeuralBeagle14-7B)
+
+| Metric                            | Value |
+|-----------------------------------|------:|
+| Avg.                              | 74.74 |
+| AI2 Reasoning Challenge (25-Shot) | 72.95 |
+| HellaSwag (10-Shot)               | 88.34 |
+| MMLU (5-Shot)                     | 64.55 |
+| TruthfulQA (0-shot)               | 69.93 |
+| Winogrande (5-shot)               | 82.40 |
+| GSM8k (5-shot)                    | 70.28 |
+
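A quick sanity check on the added table: the Avg. row is the plain arithmetic mean of the six benchmark scores. A minimal sketch in Python, using only the values from the table above:

```python
# Recompute the leaderboard "Avg." from the six per-task scores in the table.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 72.95,
    "HellaSwag (10-Shot)": 88.34,
    "MMLU (5-Shot)": 64.55,
    "TruthfulQA (0-shot)": 69.93,
    "Winogrande (5-shot)": 82.40,
    "GSM8k (5-shot)": 70.28,
}
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # Avg. = 74.74, matching the table
```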
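The linked details dataset can be pulled with the `datasets` library. A hedged sketch, assuming the leaderboard's usual one-config-per-task layout; the config name `harness_winogrande_5` is an illustrative guess, not something stated in this PR:

```python
from datasets import load_dataset

# Load one task's detailed results from the leaderboard details repo.
# The config name below is a hypothetical example; check the dataset
# page for the configs that actually exist.
data = load_dataset(
    "open-llm-leaderboard/details_mlabonne__NeuralBeagle14-7B",
    "harness_winogrande_5",  # assumed per-task config name
    split="train",
)
print(data[0])
```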