Adding Evaluation Results
#4
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -277,4 +277,17 @@ current naming convention (70M, 160M, etc.) is based on total parameter count.
 | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
 | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
 | 12B | 13B | 11,846,072,320 | 11,327,027,200 |
-</figure>
+</figure>
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_EleutherAI__pythia-160m)
+
+| Metric | Value |
+|-----------------------|---------------------------|
+| Avg. | 25.36 |
+| ARC (25-shot) | 22.78 |
+| HellaSwag (10-shot) | 30.34 |
+| MMLU (5-shot) | 24.95 |
+| TruthfulQA (0-shot) | 44.26 |
+| Winogrande (5-shot) | 51.54 |
+| GSM8K (5-shot) | 0.23 |
+| DROP (3-shot) | 3.45 |
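
For reference, the Avg. row in the proposed table is consistent with the plain unweighted mean of the seven benchmark scores listed in the diff. A minimal check, using only the values above:

```python
# Benchmark scores exactly as listed in the proposed README table.
scores = {
    "ARC (25-shot)": 22.78,
    "HellaSwag (10-shot)": 30.34,
    "MMLU (5-shot)": 24.95,
    "TruthfulQA (0-shot)": 44.26,
    "Winogrande (5-shot)": 51.54,
    "GSM8K (5-shot)": 0.23,
    "DROP (3-shot)": 3.45,
}

# Unweighted mean across the seven tasks.
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 25.36, matching the Avg. row
```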
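If the per-task outputs behind these numbers are needed, the linked details dataset can be pulled with the `datasets` library. This is a sketch, not part of the PR: the available configuration names and splits depend on what the leaderboard bot actually uploaded to that repository, so the snippet lists the configurations first rather than assuming one.

```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_EleutherAI__pythia-160m"

# List the per-task configurations published for this model
# (names are whatever the leaderboard uploaded; none are assumed here).
configs = get_dataset_config_names(repo)
print(configs)

# Load one configuration; without a split argument this returns a
# DatasetDict keyed by whichever splits that configuration defines.
details = load_dataset(repo, configs[0])
print(details)
```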