Adding Evaluation Results #16
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -227,3 +227,17 @@ This work would not have been possible without the support of [Stability AI](htt
   url = {https://doi.org/10.5281/zenodo.7790115}
 }
 ```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__stable-vicuna-13B-HF)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 45.36                     |
+| ARC (25-shot)         | 53.33                     |
+| HellaSwag (10-shot)   | 78.5                      |
+| MMLU (5-shot)         | 50.29                     |
+| TruthfulQA (0-shot)   | 48.38                     |
+| Winogrande (5-shot)   | 75.22                     |
+| GSM8K (5-shot)        | 4.09                      |
+| DROP (3-shot)         | 7.74                      |
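
The Avg. row in the added table appears to be the unweighted mean of the seven per-benchmark scores; the aggregation rule is not stated in this PR, so treat the following as a quick sanity check under that assumption:

```python
# Sanity check: does the Avg. row equal the plain mean of the seven
# benchmark scores listed in the PR's table? (Assumed aggregation rule.)
scores = {
    "ARC (25-shot)": 53.33,
    "HellaSwag (10-shot)": 78.5,
    "MMLU (5-shot)": 50.29,
    "TruthfulQA (0-shot)": 48.38,
    "Winogrande (5-shot)": 75.22,
    "GSM8K (5-shot)": 4.09,
    "DROP (3-shot)": 7.74,
}

avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 45.36, matching the Avg. row above
```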
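To inspect the per-task outputs behind these numbers, the linked details repo can be browsed programmatically with `huggingface_hub`. This is a minimal sketch, not part of the PR: the repo's file layout is not described here, so the snippet only lists what is published and downloads one file by name.

```python
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "open-llm-leaderboard/details_TheBloke__stable-vicuna-13B-HF"

# List every file published in the details dataset repo.
files = sorted(list_repo_files(repo_id, repo_type="dataset"))
print("\n".join(files))

# Fetch any individual file by name (here simply the first listed one).
path = hf_hub_download(repo_id, filename=files[0], repo_type="dataset")
print(path)
```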