---
tags:
- generated_from_trainer
- TensorBlock
- GGUF
license: mit
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
base_model: HuggingFaceH4/zephyr-7b-beta
widget:
- example_title: Pirate!
  messages:
  - role: system
    content: You are a pirate chatbot who always responds with Arr!
  - role: user
    content: There's a llama on my lawn, how can I get rid of him?
  output:
    text: Arr! 'Tis a puzzlin' matter, me hearty! A llama on yer lawn be a rare sight,
      but I've got a plan that might help ye get rid of 'im. Ye'll need to gather
      some carrots and hay, and then lure the llama away with the promise of a tasty
      treat. Once he's gone, ye can clean up yer lawn and enjoy the peace and quiet
      once again. But beware, me hearty, for there may be more llamas where that one
      came from! Arr!
pipeline_tag: text-generation
model-index:
- name: zephyr-7b-beta
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 62.03071672354948
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 84.35570603465445
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Drop (3-Shot)
      type: drop
      split: validation
      args:
        num_few_shot: 3
    metrics:
    - type: f1
      value: 9.66243708053691
      name: f1 score
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 57.44916942762855
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 12.736921910538287
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 61.07
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.7426992896606
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HuggingFaceH4/zephyr-7b-beta
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AlpacaEval
      type: tatsu-lab/alpaca_eval
    metrics:
    - type: unknown
      value: 0.906
      name: win rate
    source:
      url: https://tatsu-lab.github.io/alpaca_eval/
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MT-Bench
      type: unknown
    metrics:
    - type: unknown
      value: 7.34
      name: score
    source:
      url: https://huggingface.co/spaces/lmsys/mt-bench
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## HuggingFaceH4/zephyr-7b-beta - GGUF
This repo contains GGUF format model files for [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
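If you build llama.cpp from source, checking out that commit (or a later one) keeps the runtime compatible with these files. Below is a minimal build sketch, assuming a standard CMake toolchain; the commit hash is the one referenced above.
```shell
# Clone llama.cpp and pin it to the referenced commit (b4011)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout a6744e43e80f4be6398fc7733a01642c846dce1d

# Standard CMake release build; binaries such as llama-cli and llama-server land in build/bin
cmake -B build
cmake --build build --config Release
```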
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
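For a quick sanity check from the command line, the template above can be passed directly to llama.cpp's `llama-cli`. This is a minimal sketch, assuming the build from the previous step and that the Q4_K_M file has been downloaded to the current directory; the system prompt and user question are placeholders.
```shell
# -e expands the \n escapes in the prompt so it matches the template exactly
./build/bin/llama-cli -m ./zephyr-7b-beta-Q4_K_M.gguf \
  -e -p "<|system|>\nYou are a friendly chatbot.\n</s>\n<|user|>\nThere's a llama on my lawn, how can I get rid of him?</s>\n<|assistant|>\n" \
  -n 256 --temp 0.7
```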
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [zephyr-7b-beta-Q2_K.gguf](https://huggingface.co/tensorblock/zephyr-7b-beta-GGUF/blob/main/zephyr-7b-beta-Q2_K.gguf) | Q2_K | 2.532 GB | smallest, significant quality loss - not recommended for most purposes |
| [zephyr-7b-beta-Q3_K_S.gguf](https://huggingface.co/tensorblock/zephyr-7b-beta-GGUF/blob/main/zephyr-7b-beta-Q3_K_S.gguf) | Q3_K_S | 2.947 GB | very small, high quality loss |
| [zephyr-7b-beta-Q3_K_M.gguf](https://huggingface.co/tensorblock/zephyr-7b-beta-GGUF/blob/main/zephyr-7b-beta-Q3_K_M.gguf) | Q3_K_M | 3.277 GB | very small, high quality loss |
| [zephyr-7b-beta-Q3_K_L.gguf](https://huggingface.co/tensorblock/zephyr-7b-beta-GGUF/blob/main/zephyr-7b-beta-Q3_K_L.gguf) | Q3_K_L | 3.560 GB | small, substantial quality loss |
| [zephyr-7b-beta-Q4_0.gguf](https://huggingface.co/tensorblock/zephyr-7b-beta-GGUF/blob/main/zephyr-7b-beta-Q4_0.gguf) | Q4_0 | 3.827 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [zephyr-7b-beta-Q4_K_S.gguf](https://huggingface.co/tensorblock/zephyr-7b-beta-GGUF/blob/main/zephyr-7b-beta-Q4_K_S.gguf) | Q4_K_S | 3.856 GB | small, greater quality loss |
| [zephyr-7b-beta-Q4_K_M.gguf](https://huggingface.co/tensorblock/zephyr-7b-beta-GGUF/blob/main/zephyr-7b-beta-Q4_K_M.gguf) | Q4_K_M | 4.068 GB | medium, balanced quality - recommended |
| [zephyr-7b-beta-Q5_0.gguf](https://huggingface.co/tensorblock/zephyr-7b-beta-GGUF/blob/main/zephyr-7b-beta-Q5_0.gguf) | Q5_0 | 4.654 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [zephyr-7b-beta-Q5_K_S.gguf](https://huggingface.co/tensorblock/zephyr-7b-beta-GGUF/blob/main/zephyr-7b-beta-Q5_K_S.gguf) | Q5_K_S | 4.654 GB | large, low quality loss - recommended |
| [zephyr-7b-beta-Q5_K_M.gguf](https://huggingface.co/tensorblock/zephyr-7b-beta-GGUF/blob/main/zephyr-7b-beta-Q5_K_M.gguf) | Q5_K_M | 4.779 GB | large, very low quality loss - recommended |
| [zephyr-7b-beta-Q6_K.gguf](https://huggingface.co/tensorblock/zephyr-7b-beta-GGUF/blob/main/zephyr-7b-beta-Q6_K.gguf) | Q6_K | 5.534 GB | very large, extremely low quality loss |
| [zephyr-7b-beta-Q8_0.gguf](https://huggingface.co/tensorblock/zephyr-7b-beta-GGUF/blob/main/zephyr-7b-beta-Q8_0.gguf) | Q8_0 | 7.167 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/zephyr-7b-beta-GGUF --include "zephyr-7b-beta-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can run:
```shell
huggingface-cli download tensorblock/zephyr-7b-beta-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
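Once a file is downloaded, one way to use it locally is llama.cpp's `llama-server`, which exposes an OpenAI-compatible HTTP endpoint. The sketch below is illustrative, assuming the build from earlier and the Q4_K_M file sitting in `MY_LOCAL_DIR`; port and context size are arbitrary choices.
```shell
# Start an OpenAI-compatible server on port 8080 with a 4096-token context
./build/bin/llama-server -m MY_LOCAL_DIR/zephyr-7b-beta-Q4_K_M.gguf \
  -c 4096 --port 8080

# Query the chat completions endpoint; the server applies the model's chat template
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}], "max_tokens": 128}'
```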