piimaila committed
Commit 294160a
1 Parent(s): e74d0a0

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +189 -0
README.md ADDED
 
---
base_model: HuggingFaceH4/zephyr-7b-beta
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
license: mit
pipeline_tag: text-generation
tags:
- generated_from_trainer
- openvino
- openvino-export
widget:
- example_title: Pirate!
  messages:
  - role: system
    content: You are a pirate chatbot who always responds with Arr!
  - role: user
    content: There's a llama on my lawn, how can I get rid of him?
  output:
    text: Arr! 'Tis a puzzlin' matter, me hearty! A llama on yer lawn be a rare sight,
      but I've got a plan that might help ye get rid of 'im. Ye'll need to gather
      some carrots and hay, and then lure the llama away with the promise of a tasty
      treat. Once he's gone, ye can clean up yer lawn and enjoy the peace and quiet
      once again. But beware, me hearty, for there may be more llamas where that one
      came from! Arr!
model-index:
- name: zephyr-7b-beta
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 62.03071672354948
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 84.35570603465445
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Drop (3-Shot)
      type: drop
      split: validation
      args:
        num_few_shot: 3
    metrics:
    - type: f1
      value: 9.66243708053691
      name: f1 score
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 57.44916942762855
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 12.736921910538287
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 61.07
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.7426992896606
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AlpacaEval
      type: tatsu-lab/alpaca_eval
    metrics:
    - type: unknown
      value: 0.906
      name: win rate
    source:
      url: https://tatsu-lab.github.io/alpaca_eval/
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MT-Bench
      type: unknown
    metrics:
    - type: unknown
      value: 7.34
      name: score
    source:
      url: https://huggingface.co/spaces/lmsys/mt-bench
---

This model was converted to OpenVINO from [`HuggingFaceH4/zephyr-7b-beta`](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) using [optimum-intel](https://github.com/huggingface/optimum-intel) via the [export](https://huggingface.co/spaces/echarlaix/openvino-export) space.

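If you would rather reproduce the conversion locally instead of relying on the export space, optimum-intel can export the original checkpoint directly. The snippet below is a minimal sketch (not part of the original upload) and assumes optimum-intel is installed as described below; the output directory name is arbitrary:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

# export=True downloads the PyTorch checkpoint and converts it to OpenVINO IR on the fly.
model = OVModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-beta", export=True)
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

# Save the converted model and tokenizer so they can be reloaded or pushed to the Hub.
model.save_pretrained("zephyr-7b-beta-openvino")
tokenizer.save_pretrained("zephyr-7b-beta-openvino")
```
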
First make sure you have optimum-intel installed:

```bash
pip install optimum[openvino]
```

To load the model:

```python
from optimum.intel import OVModelForCausalLM

model_id = "piimaila/zephyr-7b-beta-openvino"
model = OVModelForCausalLM.from_pretrained(model_id)
```
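
Once loaded, the model works with the standard 🤗 Transformers generation API. The snippet below is a minimal inference sketch, not part of the original card: it reuses the pirate example from the widget metadata above and assumes the tokenizer and chat template were exported alongside the model.

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "piimaila/zephyr-7b-beta-openvino"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds with Arr!"},
    {"role": "user", "content": "There's a llama on my lawn, how can I get rid of him?"},
]

# Zephyr ships a chat template, so apply_chat_template builds the expected prompt format.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)

# Strip the prompt tokens and decode only the newly generated reply.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```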