Adding Evaluation Results
#1
by leaderboard-pr-bot - opened
README.md CHANGED
@@ -75,4 +75,17 @@ The following hyperparameters were used during training:
 - Transformers 4.36.2
 - Pytorch 2.1.2+cu121
 - Datasets 2.14.6
-- Tokenizers 0.15.2
+- Tokenizers 0.15.2
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_amu__r-zephyr-7b-beta-qlora)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |62.70|
+|AI2 Reasoning Challenge (25-Shot)|63.05|
+|HellaSwag (10-Shot)              |85.38|
+|MMLU (5-Shot)                    |63.10|
+|TruthfulQA (0-shot)              |46.32|
+|Winogrande (5-shot)              |79.32|
+|GSM8k (5-shot)                   |39.04|
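The detailed per-task results linked in the added section are published as an ordinary Hugging Face dataset. Below is a minimal sketch (not part of this PR) of loading them with the `datasets` library; the config name `harness_winogrande_5` and the `latest` split are assumptions based on how Open LLM Leaderboard details datasets are typically organized and may differ for this model.

```python
# Minimal sketch: fetch one per-task results configuration from the details
# dataset referenced in the README addition. Config and split names are
# assumptions and should be checked against the dataset card.
from datasets import load_dataset

details = load_dataset(
    "open-llm-leaderboard/details_amu__r-zephyr-7b-beta-qlora",
    "harness_winogrande_5",  # assumed per-task config name
    split="latest",          # assumed split pointing at the most recent run
)
print(details)
```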