Adding Evaluation Results #6
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -18,4 +18,17 @@ Due to the influence of Pygmalion, this model will very likely generate content
 The specific prompting is unknown, but try Pygmalion's prompt styles first,
 then a mix of the two to see what brings most interesting results.
 
-Treat this as a normal HF Transformers model.
+Treat this as a normal HF Transformers model.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TehVenom__Pygmalion-Vicuna-1.1-7b)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 43.35 |
+| ARC (25-shot)         | 52.82 |
+| HellaSwag (10-shot)   | 78.66 |
+| MMLU (5-shot)         | 43.61 |
+| TruthfulQA (0-shot)   | 42.21 |
+| Winogrande (5-shot)   | 71.98 |
+| GSM8K (5-shot)        | 6.22  |
+| DROP (3-shot)         | 7.97  |
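As a sanity check on the table being added, the "Avg." row matches the unweighted mean of the seven individual benchmark scores, rounded to two decimals:

```python
# Verify that the leaderboard "Avg." (43.35) is the unweighted mean of the
# seven benchmark scores from the table in this diff.
scores = {
    "ARC (25-shot)": 52.82,
    "HellaSwag (10-shot)": 78.66,
    "MMLU (5-shot)": 43.61,
    "TruthfulQA (0-shot)": 42.21,
    "Winogrande (5-shot)": 71.98,
    "GSM8K (5-shot)": 6.22,
    "DROP (3-shot)": 7.97,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 43.35
```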
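The README line "Treat this as a normal HF Transformers model" can be illustrated with a minimal loading sketch. This is an assumption-laden example, not part of the PR: the repo id `TehVenom/Pygmalion-Vicuna-1.1-7b` is inferred from the results dataset name above, and the `generate_reply` helper and its prompt are hypothetical.

```python
# Minimal sketch: load the checkpoint like any standard Hugging Face
# causal LM. Repo id inferred from the leaderboard dataset name;
# requires `transformers` and `torch` plus network access to the Hub.
MODEL_ID = "TehVenom/Pygmalion-Vicuna-1.1-7b"

def generate_reply(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a continuation with ordinary Transformers APIs."""
    # Imported lazily so the sketch can be read without the heavy deps.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Usage would look like `generate_reply("USER: Hello!\nASSISTANT:")`; per the README, try Pygmalion-style prompts first, since the exact prompting format is unknown.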