mkurman committed on
Commit
66ea241
1 Parent(s): 8f6ae57

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
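The bot extends the YAML front matter of the model card's README.md. Its actual implementation is not shown here, but the general approach can be sketched: split the card into its `---`-delimited YAML metadata block and the markdown body, edit the metadata, and reassemble. A minimal, assumed sketch (the function name `split_front_matter` is hypothetical):

```python
def split_front_matter(card_text: str):
    """Split a model card into (yaml_front_matter, markdown_body).

    Assumes the card starts with a '---'-delimited YAML block, as
    Hugging Face model cards do; returns ('', card_text) otherwise.
    """
    if card_text.startswith("---\n"):
        # Find the closing '---' delimiter after the opening one.
        end = card_text.index("\n---", 4)
        front_matter = card_text[4:end]
        body = card_text[end + 4:].lstrip("\n")
        return front_matter, body
    return "", card_text
```

A bot like this one would parse the returned front matter as YAML, append the `model-index` entries shown in the diff below, and write the card back.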

Files changed (1)
  1. README.md +109 -1
README.md CHANGED
@@ -17,6 +17,101 @@ datasets:
 - allenai/tulu-3-sft-mixture
 - allenai/llama-3.1-tulu-3-8b-preference-mixture
 pipeline_tag: text-generation
+model-index:
+- name: Llama-3.2-SUN-1B-Instruct
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 64.13
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-1B-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 9.18
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-1B-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 4.61
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-1B-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 0.0
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-1B-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 4.05
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-1B-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 8.68
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-1B-Instruct
+      name: Open LLM Leaderboard
 ---
 
 # MedIT SUN 1B Instruct
@@ -57,4 +152,17 @@ pipeline_tag: text-generation
 As the model is still in training, performance and capabilities may vary. Users should be aware that the model is not in its final form and may exhibit inconsistencies or limitations typical of in-progress AI models.
 
 **Disclaimer and Safety Considerations**
-The Model is designed to be used as a smart assistant but not as a knowledge source within your applications, systems, or environments. It is not intended to provide 100% accurate answers, especially in scenarios where high precision and accuracy are
+The Model is designed to be used as a smart assistant but not as a knowledge source within your applications, systems, or environments. It is not intended to provide 100% accurate answers, especially in scenarios where high precision and accuracy are
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_meditsolutions__Llama-3.2-SUN-1B-Instruct)
+
+| Metric             |Value|
+|--------------------|----:|
+| Avg.               |15.11|
+| IFEval (0-Shot)    |64.13|
+| BBH (3-Shot)       | 9.18|
+| MATH Lvl 5 (4-Shot)| 4.61|
+| GPQA (0-shot)      | 0.00|
+| MuSR (0-shot)      | 4.05|
+| MMLU-PRO (5-shot)  | 8.68|
+
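The Avg. row in the added table is the arithmetic mean of the six benchmark scores. A quick sanity check, with the scores copied from the diff:

```python
# Benchmark scores added to the model card by this PR.
scores = {
    "IFEval (0-Shot)": 64.13,
    "BBH (3-Shot)": 9.18,
    "MATH Lvl 5 (4-Shot)": 4.61,
    "GPQA (0-shot)": 0.00,
    "MuSR (0-shot)": 4.05,
    "MMLU-PRO (5-shot)": 8.68,
}

# The leaderboard average is the plain mean, rounded to two decimals.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 15.11 — matches the Avg. row above
```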