Commit be49dba
Parent: b09496d

Adding Evaluation Results (#8)


- Adding Evaluation Results (182710aa6b160d3288433822b8dae57ede2ed538)


Co-authored-by: Open LLM Leaderboard PR Bot <leaderboard-pr-bot@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +106 −0
README.md CHANGED
@@ -110,6 +110,98 @@ model-index:
     source:
       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/orca_mini_v3_13b
       name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 28.97
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=pankajmathur/orca_mini_v3_13b
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 25.55
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=pankajmathur/orca_mini_v3_13b
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 1.89
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=pankajmathur/orca_mini_v3_13b
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 2.01
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=pankajmathur/orca_mini_v3_13b
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 17.11
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=pankajmathur/orca_mini_v3_13b
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 14.5
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=pankajmathur/orca_mini_v3_13b
+      name: Open LLM Leaderboard
 ---
 
 # orca_mini_v3_13b
@@ -283,3 +375,17 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 |Winogrande (5-shot) |76.48|
 |GSM8k (5-shot)      |13.12|
 
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pankajmathur__orca_mini_v3_13b)
+
+| Metric             |Value|
+|--------------------|----:|
+|Avg.                |15.00|
+|IFEval (0-Shot)     |28.97|
+|BBH (3-Shot)        |25.55|
+|MATH Lvl 5 (4-Shot) | 1.89|
+|GPQA (0-shot)       | 2.01|
+|MuSR (0-shot)       |17.11|
+|MMLU-PRO (5-shot)   |14.50|
+
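The "Avg." row added by this diff can be sanity-checked against the six per-benchmark scores. A minimal sketch, assuming (as the leaderboard convention suggests) that "Avg." is the plain arithmetic mean of the six values, rounded to two decimals:

```python
from statistics import mean

# Scores copied from the table added in this commit.
scores = {
    "IFEval (0-Shot)": 28.97,
    "BBH (3-Shot)": 25.55,
    "MATH Lvl 5 (4-Shot)": 1.89,
    "GPQA (0-shot)": 2.01,
    "MuSR (0-shot)": 17.11,
    "MMLU-PRO (5-shot)": 14.50,
}

# Arithmetic mean of the six benchmark scores; agrees with the table's
# reported Avg. of 15.00 to within rounding.
avg = mean(scores.values())
print(f"average = {avg:.2f}")
```

This is only a consistency check on the numbers shown in the diff, not part of the leaderboard's own scoring code.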