Commit a0b94e7
Parent: 64288dd

Adding Evaluation Results (#1)

- Adding Evaluation Results (2f88ff6dca6bf9aa875f353c1044f2172613ed0c)

Co-authored-by: Open LLM Leaderboard PR Bot <leaderboard-pr-bot@users.noreply.huggingface.co>

Files changed (1):
  1. README.md (+114 -6)
README.md CHANGED
@@ -1,5 +1,8 @@
 ---
-base_model:
+tags:
+- merge
+- mergekit
+- lazymergekit
 - nbeerbower/llama-3-stella-8B
 - defog/llama-3-sqlcoder-8b
 - nbeerbower/llama-3-gutenberg-8B
@@ -9,10 +12,7 @@ base_model:
 - mlabonne/ChimeraLlama-3-8B-v3
 - flammenai/Mahou-1.1-llama3-8B
 - KingNish/KingNish-Llama3-8b
-tags:
-- merge
-- mergekit
-- lazymergekit
+base_model:
 - nbeerbower/llama-3-stella-8B
 - defog/llama-3-sqlcoder-8b
 - nbeerbower/llama-3-gutenberg-8B
@@ -22,6 +22,101 @@ tags:
 - mlabonne/ChimeraLlama-3-8B-v3
 - flammenai/Mahou-1.1-llama3-8B
 - KingNish/KingNish-Llama3-8b
+model-index:
+- name: llama-3-luminous-merged
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 43.23
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3-luminous-merged
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 30.64
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3-luminous-merged
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 7.85
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3-luminous-merged
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 5.7
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3-luminous-merged
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 10.63
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3-luminous-merged
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 30.81
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3-luminous-merged
+      name: Open LLM Leaderboard
 ---
 
 # llama-3-luminous-merged
@@ -107,4 +202,17 @@ pipeline = transformers.pipeline(
 
 outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
 print(outputs[0]["generated_text"])
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BlackBeenie__llama-3-luminous-merged)
+
+| Metric              | Value |
+|---------------------|------:|
+| Avg.                | 21.48 |
+| IFEval (0-Shot)     | 43.23 |
+| BBH (3-Shot)        | 30.64 |
+| MATH Lvl 5 (4-Shot) |  7.85 |
+| GPQA (0-shot)       |  5.70 |
+| MuSR (0-shot)       | 10.63 |
+| MMLU-PRO (5-shot)   | 30.81 |
+
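As a quick sanity check on the numbers this commit adds, the leaderboard's "Avg." row is simply the arithmetic mean of the six benchmark scores listed in the model-index metadata. A minimal sketch (plain Python, scores copied from the table above; not part of the committed README):

```python
# Benchmark scores copied from the leaderboard table added in this commit.
scores = {
    "IFEval (0-Shot)": 43.23,
    "BBH (3-Shot)": 30.64,
    "MATH Lvl 5 (4-Shot)": 7.85,
    "GPQA (0-shot)": 5.70,
    "MuSR (0-shot)": 10.63,
    "MMLU-PRO (5-shot)": 30.81,
}

# The "Avg." row is the unweighted mean of the six benchmarks,
# rounded to two decimals as shown on the leaderboard.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 21.48, matching the Avg. row in the table
```

This reproduces the 21.48 figure exactly, confirming that the Open LLM Leaderboard average is an unweighted mean over the six tasks.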