Adding Evaluation Results

#18
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -200,3 +200,17 @@ Commodity cost was ~$400.
  primaryClass={cs.AI}
  }
  ```
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Open-Orca__Mistral-7B-OpenOrca)
+
+ | Metric              | Value |
+ |---------------------|-------|
+ | Avg.                | 54.51 |
+ | ARC (25-shot)       | 64.08 |
+ | HellaSwag (10-shot) | 83.99 |
+ | MMLU (5-shot)       | 62.24 |
+ | TruthfulQA (0-shot) | 53.05 |
+ | Winogrande (5-shot) | 77.74 |
+ | GSM8K (5-shot)      | 19.94 |
+ | DROP (3-shot)       | 20.53 |
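As a sanity check on the table above, the leaderboard's "Avg." row appears to be the unweighted mean of the seven benchmark scores. A minimal sketch (the metric names and values are taken from the diff; the rounding behavior is an assumption about how the leaderboard computes its average):

```python
# Per-benchmark scores copied from the table in this PR.
scores = {
    "ARC (25-shot)": 64.08,
    "HellaSwag (10-shot)": 83.99,
    "MMLU (5-shot)": 62.24,
    "TruthfulQA (0-shot)": 53.05,
    "Winogrande (5-shot)": 77.74,
    "GSM8K (5-shot)": 19.94,
    "DROP (3-shot)": 20.53,
}

# Assumed: "Avg." is the unweighted mean, rounded to two decimals.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # → 54.51, matching the Avg. row in the table
```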