theprint committed on
Commit
76c856a
1 Parent(s): 5a41ce6

Adding Evaluation Results (#1)


- Adding Evaluation Results (b61798721d27530b1f704445fdc67fe812f61c23)

Files changed (1): README.md (+110 −2)
README.md CHANGED

@@ -1,5 +1,4 @@
 ---
-base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
 language:
 - en
 license: apache-2.0
@@ -13,8 +12,104 @@ tags:
 - logic
 - math
 - python
+base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
 datasets:
 - theprint/CleverBoi-Data-20k
+model-index:
+- name: CleverBoi-Nemo-12B-v2
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 20.46
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/CleverBoi-Nemo-12B-v2
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 31.65
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/CleverBoi-Nemo-12B-v2
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 8.61
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/CleverBoi-Nemo-12B-v2
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 8.5
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/CleverBoi-Nemo-12B-v2
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 11.43
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/CleverBoi-Nemo-12B-v2
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 24.76
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=theprint/CleverBoi-Nemo-12B-v2
+      name: Open LLM Leaderboard
 ---
 <img src="https://huggingface.co/theprint/CleverBoi-Gemma-2-9B/resolve/main/cleverboi.png"/>
 
@@ -33,4 +128,17 @@ This model has been fine tuned for 1 epoch on the CleverBoi-Data-20k data set.
 
 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_theprint__CleverBoi-Nemo-12B-v2)
+
+| Metric            |Value|
+|-------------------|----:|
+|Avg.               |17.57|
+|IFEval (0-Shot)    |20.46|
+|BBH (3-Shot)       |31.65|
+|MATH Lvl 5 (4-Shot)| 8.61|
+|GPQA (0-shot)      | 8.50|
+|MuSR (0-shot)      |11.43|
+|MMLU-PRO (5-shot)  |24.76|
+
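The "Avg." row added by this commit appears to be the plain arithmetic mean of the six benchmark scores. A minimal sanity check, with the values copied from the table above (the averaging method is an assumption about the leaderboard, not stated in the commit itself):

```python
# Scores from the Open LLM Leaderboard table added in this commit.
scores = {
    "IFEval (0-Shot)": 20.46,
    "BBH (3-Shot)": 31.65,
    "MATH Lvl 5 (4-Shot)": 8.61,
    "GPQA (0-shot)": 8.50,
    "MuSR (0-shot)": 11.43,
    "MMLU-PRO (5-shot)": 24.76,
}

# Unweighted mean, rounded to two decimals like the table.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 17.57 — matches the "Avg." row above
```

Running this reproduces 17.57, so the summary row is consistent with the per-task values in the model-index metadata.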