Adding Evaluation Results
#3
by leaderboard-pr-bot - opened
README.md CHANGED
@@ -161,3 +161,17 @@ response_str = tokenizer.batch_decode(response, skip_special_tokens=False, clean
 print(response_str)
 ```
 
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_wenge-research__yayi-7b-llama2)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 43.62                     |
+| ARC (25-shot)         | 54.78                     |
+| HellaSwag (10-shot)   | 77.94                     |
+| MMLU (5-shot)         | 41.35                     |
+| TruthfulQA (0-shot)   | 44.02                     |
+| Winogrande (5-shot)   | 74.51                     |
+| GSM8K (5-shot)        | 6.67                      |
+| DROP (3-shot)         | 6.04                      |
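For readers who want more than the summary table, below is a minimal sketch (not part of this PR) of inspecting the linked per-task details with the `datasets` library. The repo id is taken from the diff above; the exact config names and split layout depend on the leaderboard's export format and are assumptions here, which is why the sketch lists them before loading anything.

```python
# Sketch: browse the Open LLM Leaderboard details dataset referenced in the diff.
# Assumption: the repo exposes one config per benchmark run; verify with
# get_dataset_config_names() before relying on any particular config name.
from datasets import get_dataset_config_names, load_dataset

repo_id = "open-llm-leaderboard/details_wenge-research__yayi-7b-llama2"

# List the configs the leaderboard exported for this model.
configs = get_dataset_config_names(repo_id)
print(configs)

# Load one config and print its splits; rows hold per-example predictions
# and scores for that benchmark run.
details = load_dataset(repo_id, configs[0])
print(details)
```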