Transformers · GGUF · TensorBlock · Eval Results · Inference Endpoints · conversational

morriszms committed
Commit bc71c54 · 1 Parent(s): cf2f529

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ConfigurableBeagle-11B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ConfigurableBeagle-11B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ConfigurableBeagle-11B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ConfigurableBeagle-11B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ConfigurableBeagle-11B-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ConfigurableBeagle-11B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ConfigurableBeagle-11B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ConfigurableBeagle-11B-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ConfigurableBeagle-11B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ConfigurableBeagle-11B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ConfigurableBeagle-11B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ConfigurableBeagle-11B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
ConfigurableBeagle-11B-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ebe71c9e806849cbeda7e4dd1086344f6249ad78e8ced267ea8ab15b0e4e8f4
+size 4003232704
ConfigurableBeagle-11B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4453b27db8d608f62631280e92ffa6f275df0ce3648cface5529d0aef42f594c
+size 5650750400
ConfigurableBeagle-11B-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:267d2662524a069ce1fac6f2744aee6164eeaa7dda500d13e2b4b6ea5ca8b172
+size 5195668416
ConfigurableBeagle-11B-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d8af7032dc757a524c1ded32d52345d236a7f808e1eef48b92c32c99c10d5262
+size 4664564672
ConfigurableBeagle-11B-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f8d32fb2e0eb49b2650cc42670673c5107359b2ca02c8b319c64085fa981a57d
+size 6072384448
ConfigurableBeagle-11B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:83101041efd53c5ec4a3622aeeea2c9ab1f1a65525fe630e4b53fc360953e9ad
+size 6461668288
ConfigurableBeagle-11B-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:24ee2bac3cedd7fd1696d1a22a147ee9b9b88e2d9fb0bcacf1effd115ed9f43d
+size 6118521792
ConfigurableBeagle-11B-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d42f1a8575a718f2a6d22954ddcd9e92997fb6bd122d730c017ef621b1bb4a9
+size 7397391296
ConfigurableBeagle-11B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:255a43b1ac87c6fb658eb27d89468d78a758cd7f92b34be158ba9bee5df04216
+size 7597931456
ConfigurableBeagle-11B-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7db402684731817e17975e9c9d6dbeb6d61be6dc88d47c251c3646b5a45aeb50
+size 7397391296
ConfigurableBeagle-11B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:194309e8d6eef9c63a7db1d7da4d46319816be3a34d13eda8d04ff169085ea9b
+size 8805211072
ConfigurableBeagle-11B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:06f89b0065c07f3b2220febfa345b03430bb6897b7c06f8a024043f24108d25f
+size 11404155840
README.md ADDED
@@ -0,0 +1,274 @@
---
license: apache-2.0
library_name: transformers
datasets:
- vicgalle/configurable-system-prompt-multitask
tags:
- TensorBlock
- GGUF
base_model: vicgalle/ConfigurableBeagle-11B
model-index:
- name: ConfigurableBeagle-11B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 72.53
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 88.85
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 66.71
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 77.13
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 83.27
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.91
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 58.34
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 32.39
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 3.7
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 6.94
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 7.38
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 26.38
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B
      name: Open LLM Leaderboard
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>

## vicgalle/ConfigurableBeagle-11B - GGUF

This repo contains GGUF format model files for [vicgalle/ConfigurableBeagle-11B](https://huggingface.co/vicgalle/ConfigurableBeagle-11B).

The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
223
+
224
+ ## Prompt template
225
+
226
+ ```
227
+ ### System:
228
+ {system_prompt}
229
+
230
+ ### User:
231
+ {prompt}
232
+
233
+ ### Assistant:
234
+ ```
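The template above can be filled in programmatically before handing the text to a GGUF runtime; a minimal sketch in plain Python (the helper name and example strings are illustrative, not part of this repo):

```python
def build_prompt(system_prompt: str, prompt: str) -> str:
    """Render the model's prompt template with a system and user message.

    The trailing '### Assistant:' header cues the model to begin its reply.
    """
    return (
        f"### System:\n{system_prompt}\n\n"
        f"### User:\n{prompt}\n\n"
        f"### Assistant:\n"
    )

text = build_prompt("You are a helpful assistant.", "Summarize GGUF in one sentence.")
```
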

## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [ConfigurableBeagle-11B-Q2_K.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q2_K.gguf) | Q2_K | 3.728 GB | smallest, significant quality loss - not recommended for most purposes |
| [ConfigurableBeagle-11B-Q3_K_S.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q3_K_S.gguf) | Q3_K_S | 4.344 GB | very small, high quality loss |
| [ConfigurableBeagle-11B-Q3_K_M.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q3_K_M.gguf) | Q3_K_M | 4.839 GB | very small, high quality loss |
| [ConfigurableBeagle-11B-Q3_K_L.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q3_K_L.gguf) | Q3_K_L | 5.263 GB | small, substantial quality loss |
| [ConfigurableBeagle-11B-Q4_0.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q4_0.gguf) | Q4_0 | 5.655 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ConfigurableBeagle-11B-Q4_K_S.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q4_K_S.gguf) | Q4_K_S | 5.698 GB | small, greater quality loss |
| [ConfigurableBeagle-11B-Q4_K_M.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q4_K_M.gguf) | Q4_K_M | 6.018 GB | medium, balanced quality - recommended |
| [ConfigurableBeagle-11B-Q5_0.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q5_0.gguf) | Q5_0 | 6.889 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ConfigurableBeagle-11B-Q5_K_S.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q5_K_S.gguf) | Q5_K_S | 6.889 GB | large, low quality loss - recommended |
| [ConfigurableBeagle-11B-Q5_K_M.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q5_K_M.gguf) | Q5_K_M | 7.076 GB | large, very low quality loss - recommended |
| [ConfigurableBeagle-11B-Q6_K.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q6_K.gguf) | Q6_K | 8.200 GB | very large, extremely low quality loss |
| [ConfigurableBeagle-11B-Q8_0.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q8_0.gguf) | Q8_0 | 10.621 GB | very large, extremely low quality loss - not recommended |
253
+
254
+ ## Downloading instruction
255
+
256
+ ### Command line
257
+
258
+ Firstly, install Huggingface Client
259
+
260
+ ```shell
261
+ pip install -U "huggingface_hub[cli]"
262
+ ```
263
+
264
+ Then, downoad the individual model file the a local directory
265
+
266
+ ```shell
267
+ huggingface-cli download tensorblock/ConfigurableBeagle-11B-GGUF --include "ConfigurableBeagle-11B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
268
+ ```
269
+
270
+ If you wanna download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
271
+
272
+ ```shell
273
+ huggingface-cli download tensorblock/ConfigurableBeagle-11B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
274
+ ```
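The `--include` argument takes a shell-style glob pattern. As a quick sketch of which repo files `*Q4_K*gguf` would select, here is the same match done with Python's standard `fnmatch` (used here as a stand-in; the CLI's exact matching semantics are an assumption):

```python
from fnmatch import fnmatch

# A few of the filenames from the specification table above.
files = [
    "ConfigurableBeagle-11B-Q4_0.gguf",
    "ConfigurableBeagle-11B-Q4_K_M.gguf",
    "ConfigurableBeagle-11B-Q4_K_S.gguf",
    "ConfigurableBeagle-11B-Q5_K_M.gguf",
]

# "*Q4_K*gguf" requires the literal substring "Q4_K", so Q4_0 and Q5_K_M are skipped.
selected = [f for f in files if fnmatch(f, "*Q4_K*gguf")]
print(selected)  # only the Q4_K_M and Q4_K_S files match
```
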