Adding Evaluation Results

#4
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
```diff
@@ -5,4 +5,17 @@ After the initial experiment with chronoboros-33B it was evident that the merge
 This is the new release of the merge with 75% chronos 33B, and 25% airoboros-1.4 33B.
 
 Model has been tested with the Alpaca prompting format combined with KoboldAI Lite's instruct and chat modes, as well as regular story writing.
-It has also been tested on basic reasoning tasks, but has not seen much testing for factual information.
+It has also been tested on basic reasoning tasks, but has not seen much testing for factual information.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Henk717__airochronos-33B)
+
+| Metric | Value |
+|-----------------------|---------------------------|
+| Avg. | 51.43 |
+| ARC (25-shot) | 64.42 |
+| HellaSwag (10-shot) | 85.21 |
+| MMLU (5-shot) | 59.79 |
+| TruthfulQA (0-shot) | 50.59 |
+| Winogrande (5-shot) | 79.32 |
+| GSM8K (5-shot) | 13.72 |
+| DROP (3-shot) | 6.93 |
```
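For readers who want to try the prompting setup the README text above describes, here is a minimal sketch of an Alpaca-format prompt sent through `transformers`. The repo id `Henk717/airochronos-33B` is inferred from the leaderboard dataset name, and the generation settings are illustrative assumptions, not part of this PR.

```python
# Minimal sketch (not part of the PR): prompting the merge with the Alpaca
# format mentioned in the README. Repo id and generation settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Henk717/airochronos-33B"  # assumed repo id for this merge

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Standard Alpaca instruction template.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Write the opening paragraph of a story set in a rainy harbour town.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)

# Print only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```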
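The "Detailed results" link points at an Open LLM Leaderboard details dataset. Assuming it follows the leaderboard's usual layout, per-task records can be pulled with the `datasets` library; the config name and `latest` split below are assumptions and may differ for this repo.

```python
# Minimal sketch (not part of the PR): loading one task's detailed results from
# the leaderboard details dataset linked above. Config name and split are assumptions.
from datasets import load_dataset

details = load_dataset(
    "open-llm-leaderboard/details_Henk717__airochronos-33B",
    "harness_arc_challenge_25",  # assumed config name; other tasks use similar names
    split="latest",              # assumed split; some repos use timestamped splits
)
print(details[0])
```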