T145 committed on
Commit b8b665f · verified · 1 Parent(s): 14d9716

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/T145/open-llm-leaderboard-results-to-modelcard

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

Please report any issues here: https://huggingface.co/spaces/T145/open-llm-leaderboard-results-to-modelcard/discussions
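For reference, the summarized scores that this kind of automated PR writes into a card can also be pulled programmatically from the leaderboard's public `contents` dataset (the same dataset linked under "Summarized results" in the diff below). A minimal sketch using the `datasets` library; the `fullname` column name is an assumption based on the dataset viewer, not something this PR guarantees:

```python
# Minimal sketch: look up one model's Open LLM Leaderboard scores in the
# public "contents" dataset. The "fullname" column is an assumption taken
# from the dataset viewer; adjust if the schema differs.
from datasets import load_dataset

rows = load_dataset("open-llm-leaderboard/contents", split="train")
hits = rows.filter(lambda r: r["fullname"] == "PrimeIntellect/INTELLECT-1-Instruct")

for row in hits:
    print(row)  # one record with per-benchmark scores and the average
```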

Files changed (1)

README.md +110 -1
README.md CHANGED

@@ -25,6 +25,101 @@ language:
 base_model:
 - PrimeIntellect/INTELLECT-1
 pipeline_tag: text-generation
+model-index:
+- name: INTELLECT-1-Instruct
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 0.0
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PrimeIntellect/INTELLECT-1-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 1.75
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PrimeIntellect/INTELLECT-1-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 0.0
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PrimeIntellect/INTELLECT-1-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 0.0
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PrimeIntellect/INTELLECT-1-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 3.71
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PrimeIntellect/INTELLECT-1-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 0.71
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PrimeIntellect/INTELLECT-1-Instruct
+      name: Open LLM Leaderboard
 ---
 # INTELLECT-1
 
@@ -152,4 +247,18 @@ If you use this model in your research, please cite it as follows:
   journal={arXiv preprint},
   year={2024}
 }
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/PrimeIntellect__INTELLECT-1-Instruct-details)!
+Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=PrimeIntellect/INTELLECT-1-Instruct)!
+
+| Metric             |Value|
+|--------------------|----:|
+| Avg.               | 1.03|
+| IFEval (0-Shot)    | 0.00|
+| BBH (3-Shot)       | 1.75|
+| MATH Lvl 5 (4-Shot)| 0.00|
+| GPQA (0-shot)      | 0.00|
+| MuSR (0-shot)      | 3.71|
+| MMLU-PRO (5-shot)  | 0.71|
+
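The "Avg." row in the added table is consistent with an unweighted mean of the six benchmark scores; a quick check:

```python
# Recompute the "Avg." row from the six benchmark scores in the table above,
# assuming it is a plain unweighted mean (which these numbers agree with).
scores = {
    "IFEval (0-Shot)": 0.00,
    "BBH (3-Shot)": 1.75,
    "MATH Lvl 5 (4-Shot)": 0.00,
    "GPQA (0-shot)": 0.00,
    "MuSR (0-shot)": 3.71,
    "MMLU-PRO (5-shot)": 0.71,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # 1.03 -- matches the reported Avg.
```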