Adding Evaluation Results
#2 opened by leaderboard-pr-bot
README.md CHANGED
@@ -80,4 +80,17 @@ Manticore was fine-tuned from the base model LlaMa 13B, please refer to its mode
 
 ## Examples
 
-TBD
+TBD
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_openaccess-ai-collective__manticore-30b-chat-pyg-alpha)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 55.2                      |
+| ARC (25-shot)         | 64.16                     |
+| HellaSwag (10-shot)   | 84.38                     |
+| MMLU (5-shot)         | 57.49                     |
+| TruthfulQA (0-shot)   | 51.57                     |
+| Winogrande (5-shot)   | 79.48                     |
+| GSM8K (5-shot)        | 16.07                     |
+| DROP (3-shot)         | 33.22                     |
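The table added above is the leaderboard summary; the per-benchmark details live in the linked dataset repository. The snippet below is a minimal sketch, not part of this PR, of one way to pull those details with the `datasets` library. It assumes the details repo exposes one config per benchmark run (as leaderboard detail repos typically do), so it lists the available configs rather than hard-coding a name.

```python
# Minimal sketch (not part of this PR): fetch the detailed results dataset
# linked above. Assumes the `datasets` library is installed and that the
# details repo exposes one config per benchmark/run.
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_openaccess-ai-collective__manticore-30b-chat-pyg-alpha"

# List the available configs (one per benchmark/run) before picking one.
configs = get_dataset_config_names(repo)
print(configs)

# Load the first config and inspect its splits and row counts.
details = load_dataset(repo, configs[0])
print(details)
```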