DavidAU committed
Commit 58b22c0
1 Parent(s): 489242b

Upload README.md with huggingface_hub

Files changed (1): README.md (+194, -0)
README.md ADDED
@@ -0,0 +1,194 @@
---
language:
- en
license: other
tags:
- axolotl
- generated_from_trainer
- Mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
- llama-cpp
- gguf-my-repo
base_model: alpindale/Mistral-7B-v0.2-hf
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
- totally-not-an-llm/EverythingLM-data-V3
- HuggingFaceH4/no_robots
- OpenAssistant/oasst_top1_2023-08-25
- WizardLM/WizardLM_evol_instruct_70k
model-index:
- name: Einstein-v6-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 63.57
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 82.76
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 62.23
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 52.02
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.61
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.53
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v6-7B
      name: Open LLM Leaderboard
---

# DavidAU/Einstein-v6-7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Weyaxi/Einstein-v6-7B`](https://huggingface.co/Weyaxi/Einstein-v6-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Weyaxi/Einstein-v6-7B) for more details on the model.
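
If you prefer to fetch the quantized file yourself rather than letting the llama.cpp tools pull it via `--hf-repo`, a minimal sketch using the `huggingface-cli download` command from `huggingface_hub` is shown below; the exact flags assume a reasonably recent `huggingface_hub` release.

```bash
# Download einstein-v6-7b.Q4_K_M.gguf from this repo into the current directory.
# Requires: pip install -U huggingface_hub
huggingface-cli download DavidAU/Einstein-v6-7B-Q4_K_M-GGUF \
  einstein-v6-7b.Q4_K_M.gguf --local-dir .
```
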
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/Einstein-v6-7B-Q4_K_M-GGUF --model einstein-v6-7b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
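
The upstream Einstein-v6-7B model is tagged as a ChatML finetune, so for multi-turn use you may prefer the CLI's conversation mode with the ChatML template. A minimal sketch, assuming a llama.cpp build recent enough to provide the `-cnv` and `--chat-template` flags:

```bash
# Interactive chat using the built-in ChatML template
# (-cnv and --chat-template require a recent llama.cpp build)
llama-cli --hf-repo DavidAU/Einstein-v6-7B-Q4_K_M-GGUF \
  --model einstein-v6-7b.Q4_K_M.gguf \
  -cnv --chat-template chatml
```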

Server:

```bash
llama-server --hf-repo DavidAU/Einstein-v6-7B-Q4_K_M-GGUF --model einstein-v6-7b.Q4_K_M.gguf -c 2048
```

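Once the server is up, it exposes an OpenAI-compatible HTTP API. The sketch below sends a chat request with curl, assuming the server's default listen address of `http://localhost:8080`; adjust the host and port if you started `llama-server` with different flags.

```bash
# Query the running llama.cpp server via its OpenAI-compatible endpoint.
# Assumes the default listen address http://localhost:8080.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Explain the photoelectric effect in two sentences."}
        ],
        "temperature": 0.7,
        "max_tokens": 128
      }'
```
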
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
# Build llama.cpp from source and run the downloaded GGUF (./main is named llama-cli in newer builds)
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m einstein-v6-7b.Q4_K_M.gguf -n 128
```