morriszms committed on
Commit 768a92f
1 Parent(s): bc71c54

Update README.md

Files changed (1)
  1. README.md +20 -12
README.md CHANGED
@@ -221,8 +221,16 @@ This repo contains GGUF format model files for [vicgalle/ConfigurableBeagle-11B]
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
+
+<div style="text-align: left; margin: 20px 0;">
+    <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+        Run them on the TensorBlock client using your local machine ↗
+    </a>
+</div>
+
 ## Prompt template
 
+
 ```
 ### System:
 {system_prompt}
@@ -237,18 +245,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [ConfigurableBeagle-11B-Q2_K.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q2_K.gguf) | Q2_K | 3.728 GB | smallest, significant quality loss - not recommended for most purposes |
-| [ConfigurableBeagle-11B-Q3_K_S.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q3_K_S.gguf) | Q3_K_S | 4.344 GB | very small, high quality loss |
-| [ConfigurableBeagle-11B-Q3_K_M.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q3_K_M.gguf) | Q3_K_M | 4.839 GB | very small, high quality loss |
-| [ConfigurableBeagle-11B-Q3_K_L.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q3_K_L.gguf) | Q3_K_L | 5.263 GB | small, substantial quality loss |
-| [ConfigurableBeagle-11B-Q4_0.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q4_0.gguf) | Q4_0 | 5.655 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [ConfigurableBeagle-11B-Q4_K_S.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q4_K_S.gguf) | Q4_K_S | 5.698 GB | small, greater quality loss |
-| [ConfigurableBeagle-11B-Q4_K_M.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q4_K_M.gguf) | Q4_K_M | 6.018 GB | medium, balanced quality - recommended |
-| [ConfigurableBeagle-11B-Q5_0.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q5_0.gguf) | Q5_0 | 6.889 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
-| [ConfigurableBeagle-11B-Q5_K_S.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q5_K_S.gguf) | Q5_K_S | 6.889 GB | large, low quality loss - recommended |
-| [ConfigurableBeagle-11B-Q5_K_M.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q5_K_M.gguf) | Q5_K_M | 7.076 GB | large, very low quality loss - recommended |
-| [ConfigurableBeagle-11B-Q6_K.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q6_K.gguf) | Q6_K | 8.200 GB | very large, extremely low quality loss |
-| [ConfigurableBeagle-11B-Q8_0.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/tree/main/ConfigurableBeagle-11B-Q8_0.gguf) | Q8_0 | 10.621 GB | very large, extremely low quality loss - not recommended |
+| [ConfigurableBeagle-11B-Q2_K.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/blob/main/ConfigurableBeagle-11B-Q2_K.gguf) | Q2_K | 3.728 GB | smallest, significant quality loss - not recommended for most purposes |
+| [ConfigurableBeagle-11B-Q3_K_S.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/blob/main/ConfigurableBeagle-11B-Q3_K_S.gguf) | Q3_K_S | 4.344 GB | very small, high quality loss |
+| [ConfigurableBeagle-11B-Q3_K_M.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/blob/main/ConfigurableBeagle-11B-Q3_K_M.gguf) | Q3_K_M | 4.839 GB | very small, high quality loss |
+| [ConfigurableBeagle-11B-Q3_K_L.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/blob/main/ConfigurableBeagle-11B-Q3_K_L.gguf) | Q3_K_L | 5.263 GB | small, substantial quality loss |
+| [ConfigurableBeagle-11B-Q4_0.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/blob/main/ConfigurableBeagle-11B-Q4_0.gguf) | Q4_0 | 5.655 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [ConfigurableBeagle-11B-Q4_K_S.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/blob/main/ConfigurableBeagle-11B-Q4_K_S.gguf) | Q4_K_S | 5.698 GB | small, greater quality loss |
+| [ConfigurableBeagle-11B-Q4_K_M.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/blob/main/ConfigurableBeagle-11B-Q4_K_M.gguf) | Q4_K_M | 6.018 GB | medium, balanced quality - recommended |
+| [ConfigurableBeagle-11B-Q5_0.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/blob/main/ConfigurableBeagle-11B-Q5_0.gguf) | Q5_0 | 6.889 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [ConfigurableBeagle-11B-Q5_K_S.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/blob/main/ConfigurableBeagle-11B-Q5_K_S.gguf) | Q5_K_S | 6.889 GB | large, low quality loss - recommended |
+| [ConfigurableBeagle-11B-Q5_K_M.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/blob/main/ConfigurableBeagle-11B-Q5_K_M.gguf) | Q5_K_M | 7.076 GB | large, very low quality loss - recommended |
+| [ConfigurableBeagle-11B-Q6_K.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/blob/main/ConfigurableBeagle-11B-Q6_K.gguf) | Q6_K | 8.200 GB | very large, extremely low quality loss |
+| [ConfigurableBeagle-11B-Q8_0.gguf](https://huggingface.co/tensorblock/ConfigurableBeagle-11B-GGUF/blob/main/ConfigurableBeagle-11B-Q8_0.gguf) | Q8_0 | 10.621 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
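The hunks above end before the repo's own downloading instructions, so as a minimal sketch only: one of the files from the table can be fetched and exercised locally with the `huggingface_hub` package and the third-party `llama-cpp-python` bindings for llama.cpp. The choice of the Q4_K_M file, the context size, the sampling settings, and the `### User:` / `### Assistant:` turns are illustrative assumptions, since the diff shows the prompt template only up to `{system_prompt}`.

```python
# Sketch (not part of this commit): download one GGUF file from the table above and
# run a single prompt through it. Assumes `pip install huggingface_hub llama-cpp-python`.
# The Q4_K_M file, n_ctx, max_tokens, and the "### User:" / "### Assistant:" turns are
# illustrative assumptions, not values taken from this model card.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the quantized model file into the current directory.
model_path = hf_hub_download(
    repo_id="tensorblock/ConfigurableBeagle-11B-GGUF",
    filename="ConfigurableBeagle-11B-Q4_K_M.gguf",
    local_dir=".",
)

# Build a prompt in the "### System:" style shown in the prompt template.
prompt = (
    "### System:\n"
    "You are a helpful assistant.\n\n"
    "### User:\n"
    "Explain in one sentence what a GGUF file is.\n\n"
    "### Assistant:\n"
)

llm = Llama(model_path=model_path, n_ctx=4096)
result = llm(prompt, max_tokens=128, stop=["###"])
print(result["choices"][0]["text"].strip())
```

Any other file from the table can be substituted by changing `filename`; larger quants trade disk and memory for the lower quality loss indicated in the Description column.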