leaderboard-pr-bot
committed on
Adding Evaluation Results

This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
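For model authors who want to open this kind of metadata PR against their own repos, here is a minimal sketch using the `huggingface_hub` client library. That the bot uses exactly this call is an assumption, and only the first of the six result entries added below is shown.

```python
# A minimal sketch of opening a model-card metadata PR with huggingface_hub.
# Assumption: the bot uses an equivalent mechanism; this is not its actual code.
from huggingface_hub import metadata_update

metadata_update(
    repo_id="jphme/em_german_leo_mistral",  # target model repo from this PR
    metadata={
        "model-index": [{
            "name": "em_german_leo_mistral",
            "results": [{
                "task": {"type": "text-generation", "name": "Text Generation"},
                "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc"},
                "metrics": [{"type": "acc_norm", "value": 52.82, "name": "normalized accuracy"}],
            }],
        }],
    },
    create_pr=True,  # open a pull request instead of committing to main
)
```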
README.md CHANGED
@@ -1,20 +1,123 @@
 ---
-inference: false
 language:
 - de
-library_name: transformers
 license: apache-2.0
-
-model_name: EM German
-model_type: mistral
-pipeline_tag: text-generation
-prompt_template: 'Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:'
+library_name: transformers
 tags:
 - pytorch
 - german
 - deutsch
 - mistral
 - leolm
+model_name: EM German
+inference: false
+model_creator: jphme
+model_type: mistral
+pipeline_tag: text-generation
+prompt_template: 'Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:'
+model-index:
+- name: em_german_leo_mistral
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 52.82
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jphme/em_german_leo_mistral
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 78.03
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jphme/em_german_leo_mistral
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 50.03
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jphme/em_german_leo_mistral
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 50.19
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jphme/em_german_leo_mistral
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 73.48
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jphme/em_german_leo_mistral
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 5.61
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jphme/em_german_leo_mistral
+      name: Open LLM Leaderboard
 ---
 ![EM Logo](em_model_logo_web.jpeg)
 
@@ -139,4 +242,17 @@ For detailed feedback & feature requests, please open an issue or get in contact
 
 # Disclaimer:
 
-I am not responsible for the actions of third parties who use this model or the outputs of the model. This model should only be used for research purposes. The original base model license applies and is distributed with the model files.
+I am not responsible for the actions of third parties who use this model or the outputs of the model. This model should only be used for research purposes. The original base model license applies and is distributed with the model files.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jphme__em_german_leo_mistral)
+
+|             Metric              |Value|
+|---------------------------------|----:|
+|Avg.                             |51.69|
+|AI2 Reasoning Challenge (25-Shot)|52.82|
+|HellaSwag (10-Shot)              |78.03|
+|MMLU (5-Shot)                    |50.03|
+|TruthfulQA (0-shot)              |50.19|
+|Winogrande (5-shot)              |73.48|
+|GSM8k (5-shot)                   | 5.61|
+
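Once a PR like this is merged, the `model-index` entries can be read back from the card's front matter. A minimal sketch, assuming the `huggingface_hub` library:

```python
# A minimal sketch: parse the model card and print the evaluation results
# that the model-index block above encodes.
from huggingface_hub import ModelCard

card = ModelCard.load("jphme/em_german_leo_mistral")
for result in card.data.eval_results or []:
    print(f"{result.dataset_name}: {result.metric_type} = {result.metric_value}")
```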
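The front matter also carries the card's `prompt_template`. A generation sketch assuming the `transformers` library; the generation settings are illustrative, not the model author's recommendation:

```python
# A minimal sketch of using the card's prompt_template for inference.
# max_new_tokens is an illustrative choice, not a recommended setting.
from transformers import pipeline

generator = pipeline("text-generation", model="jphme/em_german_leo_mistral")
prompt = "Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:"
print(generator(prompt, max_new_tokens=50)[0]["generated_text"])
```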