stabilityai/stablelm-2-zephyr-1_6b-GGUF
Quantized GGUF model files for stablelm-2-zephyr-1_6b from stabilityai
Name | Quant method | Size |
---|---|---|
stablelm-2-zephyr-1_6b.fp16.gguf | fp16 | 3.29 GB |
stablelm-2-zephyr-1_6b.q2_k.gguf | q2_k | 694.16 MB |
stablelm-2-zephyr-1_6b.q3_k_xs.gguf | q3_k_xs | 757.97 MB |
stablelm-2-zephyr-1_6b.q3_k_m.gguf | q3_k_m | 857.71 MB |
stablelm-2-zephyr-1_6b.q4_k_m.gguf | q4_k_m | 1.03 GB |
stablelm-2-zephyr-1_6b.q5_k_m.gguf | q5_k_m | 1.19 GB |
stablelm-2-zephyr-1_6b.q6_k.gguf | q6_k | 1.35 GB |
stablelm-2-zephyr-1_6b.q8_0.gguf | q8_0 | 1.75 GB |
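These files are meant for llama.cpp-compatible runtimes. As one possible route, a quantized file can be run with the llama-cpp-python bindings (not covered by the original card; a minimal sketch, assuming the q4_k_m file from the table has been downloaded locally and using the instruction format described further below):

# Minimal sketch using llama-cpp-python (an assumption; any GGUF-capable
# runtime works). The path points at the q4_k_m quant from the table above.
from llama_cpp import Llama

llm = Llama(model_path="stablelm-2-zephyr-1_6b.q4_k_m.gguf", n_ctx=4096)

prompt = "<|user|>\nWhich famous math number begins with 1.6 ...?<|endoftext|>\n<|assistant|>\n"
out = llm(prompt, max_tokens=256, temperature=0.5, stop=["<|endoftext|>"])
print(out["choices"][0]["text"])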
Original Model Card:
StableLM 2 Zephyr 1.6B
Model Description
Stable LM 2 Zephyr 1.6B is a 1.6 billion parameter instruction-tuned language model inspired by HuggingFaceH4's Zephyr 7B training pipeline. The model is trained on a mix of publicly available and synthetic datasets, using Direct Preference Optimization (DPO).
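For context, DPO (Rafailov et al., 2023) fine-tunes the policy directly on preference pairs, without a separate reward model, by optimizing:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log\sigma\!\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} - \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]
$$

where y_w and y_l are the preferred and rejected responses to prompt x, π_ref is the frozen SFT model, and β controls the strength of the implicit KL constraint.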
Usage
StableLM 2 Zephyr 1.6B uses the following instruction format:
<|user|>
Which famous math number begins with 1.6 ...?<|endoftext|>
<|assistant|>
The number you are referring to is 1.618033988749895. This is the famous value known as the golden ratio<|endoftext|>
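Multi-turn conversations repeat the same pattern, with a trailing <|assistant|> tag asking the model to respond. A minimal sketch of assembling such a prompt by hand (the turn contents are illustrative placeholders, and the multi-turn layout is an assumption matching the tokenizer's chat template):
# Each turn is wrapped in its role tag and terminated with <|endoftext|>;
# the trailing <|assistant|> tag cues the model to generate a reply.
turns = [
    ("user", "Which famous math number begins with 1.6 ...?"),
    ("assistant", "The number you are referring to is 1.618033988749895."),
    ("user", "Where does it show up in nature?"),
]
prompt = "".join(f"<|{role}|>\n{text}<|endoftext|>\n" for role, text in turns)
prompt += "<|assistant|>\n"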
This format is also available through the tokenizer's apply_chat_template method:
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model; device_map="auto" places the weights on the
# available GPU(s) or CPU.
tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-zephyr-1_6b', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-zephyr-1_6b',
    trust_remote_code=True,
    device_map="auto"
)

# Render the chat turns into the <|user|>/<|assistant|> format shown above.
prompt = [{'role': 'user', 'content': 'Which famous math number begins with 1.6 ...?'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

# Sample a response and print it, keeping the special tokens visible.
tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)
print(tokenizer.decode(tokens[0], skip_special_tokens=False))
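To print just the model's reply without echoing the prompt, the prompt tokens can be sliced off before decoding (a small follow-up sketch reusing the variables above):

# inputs is the tensor returned by apply_chat_template; drop that prefix
# from the generated sequence before decoding.
reply = tokenizer.decode(tokens[0][inputs.shape[-1]:], skip_special_tokens=True)
print(reply)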
Model Details
- Developed by: Stability AI
- Model type: StableLM 2 Zephyr 1.6B is an auto-regressive language model based on the transformer decoder architecture.
- Language(s): English
- Library: Alignment Handbook
- Finetuned from model: stabilityai/stablelm-2-1_6b
- License: StabilityAI Non-Commercial Research Community License. If you want to use this model for your commercial products or purposes, please contact us here to learn more.
- Contact: For questions and comments about the model, please email lm@stability.ai
Training Dataset
The dataset comprises a mixture of open large-scale datasets available on the HuggingFace Hub (a loading sketch follows the list below):
- SFT Datasets:
  - HuggingFaceH4/ultrachat_200k
  - meta-math/MetaMathQA
  - WizardLM/WizardLM_evol_instruct_V2_196k
  - Open-Orca/SlimOrca
  - openchat/openchat_sharegpt4_dataset
  - LDJnr/Capybara
  - hkust-nlp/deita-10k-v0
- Preference Datasets:
  - allenai/ultrafeedback_binarized_cleaned
  - Intel/orca_dpo_pairs
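Any of these datasets can be pulled locally for inspection with the datasets library. A minimal sketch using the first SFT dataset (the split and column names are assumptions for HuggingFaceH4/ultrachat_200k and may differ for the other datasets):

# Minimal sketch; "train_sft" and the "messages" column are assumptions
# for this particular dataset.
from datasets import load_dataset

ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
print(ds[0]["messages"][:2])  # first two chat turns of the first example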
Performance
MT-Bench
Model | Size | MT-Bench |
---|---|---|
Mistral-7B-Instruct-v0.2 | 7B | 7.61 |
Llama2-Chat | 70B | 6.86 |
stablelm-zephyr-3b | 3B | 6.64 |
MPT-30B-Chat | 30B | 6.39 |
stablelm-2-zephyr-1.6b | 1.6B | 5.42 |
Falcon-40B-Instruct | 40B | 5.17 |
Qwen-1.8B-Chat | 1.8B | 4.95 |
dolphin-2.6-phi-2 | 2.7B | 4.93 |
phi-2 | 2.7B | 4.29 |
TinyLlama-1.1B-Chat-v1.0 | 1.1B | 3.46 |
OpenLLM Leaderboard
Model | Size | Average | ARC Challenge (acc_norm) | HellaSwag (acc_norm) | MMLU (acc_norm) | TruthfulQA (mc2) | Winogrande (acc) | Gsm8k (acc) |
---|---|---|---|---|---|---|---|---|
microsoft/phi-2 | 2.7B | 61.32% | 61.09% | 75.11% | 58.11% | 44.47% | 74.35% | 54.81% |
stabilityai/stablelm-2-zephyr-1_6b | 1.6B | 49.89% | 43.69% | 69.34% | 41.85% | 45.21% | 64.09% | 35.18% |
microsoft/phi-1_5 | 1.3B | 47.69% | 52.90% | 63.79% | 43.89% | 40.89% | 72.22% | 12.43% |
stabilityai/stablelm-2-1_6b | 1.6B | 45.54% | 43.43% | 70.49% | 38.93% | 36.65% | 65.90% | 17.82% |
mosaicml/mpt-7b | 7B | 44.28% | 47.70% | 77.57% | 30.80% | 33.40% | 72.14% | 4.02% |
KnutJaegersberg/Qwen-1_8B-Llamaified* | 1.8B | 44.75% | 37.71% | 58.87% | 46.37% | 39.41% | 61.72% | 24.41% |
openlm-research/open_llama_3b_v2 | 3B | 40.28% | 40.27% | 71.60% | 27.12% | 34.78% | 67.01% | 0.91% |
tiiuae/falcon-rw-1b | 1B | 37.07% | 35.07% | 63.56% | 25.28% | 35.96% | 62.04% | 0.53% |
TinyLlama/TinyLlama-1.1B-3T | 1.1B | 36.40% | 33.79% | 60.31% | 26.04% | 37.32% | 59.51% | 1.44% |
Training Infrastructure
- Hardware: StableLM 2 Zephyr 1.6B was trained on the Stability AI cluster across 8 nodes, each with 8 A100 80GB GPUs.
- Code Base: We used our internal script for the SFT steps and the HuggingFace Alignment Handbook script for DPO training.
Use and Limitations
Intended Use
The model is intended to be used in chat-like applications. Developers must evaluate the model for safety performance in their specific use case. Read more about safety and limitations below.
Limitations and Bias
This model is not trained against adversarial inputs. We strongly recommend pairing this model with an input and output classifier to prevent harmful responses.
Through our internal red teaming, we discovered that while the model will not output harmful information if not prompted to do so, it will hallucinate many facts. It is also willing to output potentially harmful outputs or misinformation when the user requests it. Using this model will require guardrails around your inputs and outputs to ensure that any outputs returned are not misinformation or harmful. Additionally, as each use case is unique, we recommend running your own suite of tests to ensure proper performance of this model. Finally, do not use the models if they are unsuitable for your application, or for any applications that may cause deliberate or unintentional harm to others.
How to Cite
@misc{StableLM-2-1.6B,
url={https://huggingface.co/stabilityai/stablelm-2-1.6b},
title={Stable LM 2 1.6B},
author={Stability AI Language Team}
}