---
language:
- en
tags:
- causal-lm
- llama
license: cc-by-nc-sa-4.0
datasets:
- OpenAssistant/oasst1
- nomic-ai/gpt4all_prompt_generations
- tatsu-lab/alpaca
inference: false
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# StableVicuna-13B-GGML
These are GGML format 4-bit, 5-bit and 8-bit quantised models of [CarperAI's StableVicuna 13B](https://huggingface.co/CarperAI/stable-vicuna-13b-delta).
They are the result of merging the deltas from the above repository with the original Llama 13B weights, and then quantising to GGML for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
## Repositories available
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ).
* [4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GGML).
* [Unquantised float16 model in HF format](https://huggingface.co/TheBloke/stable-vicuna-13B-HF).
## PROMPT TEMPLATE
This model works best with the following prompt template:
```
### Human: your prompt here
### Assistant:
```
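If you are building prompts programmatically, a minimal plain-Python helper like the following can assemble conversations in this template. The function name and structure here are illustrative, not part of the model or its tooling:
```python
# Illustrative helper for the StableVicuna prompt template shown above.
# Names and structure are examples only, not part of this repo.

def build_prompt(turns, final_user_message):
    """Format prior (human, assistant) exchanges plus a new user message,
    leaving the final assistant turn open for the model to complete."""
    parts = []
    for human, assistant in turns:
        parts.append(f"### Human: {human}")
        parts.append(f"### Assistant: {assistant}")
    parts.append(f"### Human: {final_user_message}")
    parts.append("### Assistant:")  # the model continues from here
    return "\n".join(parts)

print(build_prompt([], "Write a story about llamas"))
# ### Human: Write a story about llamas
# ### Assistant:
```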
## THE FILES IN MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508
I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th 2023 or later (commit `2d5db48` or later) to use them.
For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `stable-vicuna-13B.ggmlv3.q4_0.bin` | q4_0 | 4-bit | 8.14GB | 10.5GB | 4-bit. Smallest file, with the lowest accuracy of the provided quantisations. |
| `stable-vicuna-13B.ggmlv3.q4_1.bin` | q4_1 | 4-bit | 8.95GB | 11.0GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0, with quicker inference than the q5 models. |
| `stable-vicuna-13B.ggmlv3.q5_0.bin` | q5_0 | 5-bit | 8.95GB | 11.0GB | 5-bit. Higher accuracy than q4_1, at the cost of higher resource usage and slower inference. |
| `stable-vicuna-13B.ggmlv3.q5_1.bin` | q5_1 | 5-bit | 9.76GB | 12.25GB | 5-bit. Higher accuracy than q5_0, but again higher resource usage and slower inference. |
| `stable-vicuna-13B.ggmlv3.q8_0.bin` | q8_0 | 8-bit | 14.6GB | 17GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow; not recommended for normal use. |
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 18 -m stable-vicuna-13B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -r "### Human:" -p "### Human: write a story about llamas ### Assistant:"
```
Change `-t 18` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
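If you prefer Python to the raw binary, the same GGML file can be loaded via the separate [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings. This is a sketch under the assumption that your installed bindings match the ggmlv3 format of these files; parameter values mirror the command line above:
```python
# Sketch using llama-cpp-python (a separate project, not part of this repo).
# Your installed version must support the ggmlv3 format used by these files.
from llama_cpp import Llama

llm = Llama(
    model_path="stable-vicuna-13B.ggmlv3.q4_0.bin",
    n_ctx=2048,    # matches -c 2048
    n_threads=8,   # set to your number of physical cores, as with -t
)

output = llm(
    "### Human: Write a story about llamas ### Assistant:",
    max_tokens=256,
    temperature=0.7,        # matches --temp 0.7
    repeat_penalty=1.1,     # matches --repeat_penalty 1.1
    stop=["### Human:"],    # mirrors the -r reverse-prompt flag
)
print(output["choices"][0]["text"])
```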
## How to run in `text-generation-webui`
GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original StableVicuna-13B model card
## Model Description
StableVicuna-13B is a [Vicuna-13B v0](https://huggingface.co/lmsys/vicuna-13b-delta-v0) model fine-tuned using reinforcement learning from human feedback (RLHF) via Proximal Policy Optimization (PPO) on various conversational and instructional datasets.
## Model Details
* **Trained by:** [Duy Phung](https://github.com/PhungVanDuy) of [CarperAI](https://carper.ai)
* **Model type:** **StableVicuna-13B** is an auto-regressive language model based on the LLaMA transformer architecture.
* **Language(s):** English
* **Library:** [trlX](https://github.com/CarperAI/trlx)
* **License for delta weights:** [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
* *Note*: License for the base LLaMA model's weights is Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
* **Contact:** For questions and comments about the model, visit the [CarperAI](https://discord.com/invite/KgfkCVYHdu) and [StableFoundation](https://discord.gg/stablediffusion) Discord servers.
| Hyperparameter | Value |
|---------------------------|-------|
| \\(n_\text{parameters}\\) | 13B |
| \\(d_\text{model}\\) | 5120 |
| \\(n_\text{layers}\\) | 40 |
| \\(n_\text{heads}\\) | 40 |
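As a sanity check, the 13B figure can be recovered from the other table entries with a standard back-of-envelope estimate for LLaMA-style transformers (roughly \\(12 \cdot d_\text{model}^2\\) parameters per layer: \\(4d^2\\) for attention plus about \\(8d^2\\) for the SwiGLU MLP). The vocabulary size below is LLaMA's, not stated in this card:
```python
# Rough parameter count from the hyperparameter table above.
# Assumes ~12 * d_model^2 per layer; embeddings estimated separately.
d_model, n_layers = 5120, 40
vocab = 32_000  # LLaMA's vocabulary size (an assumption, not from this card)

per_layer = 12 * d_model**2
total = n_layers * per_layer + 2 * vocab * d_model  # + input/output embeddings
print(f"{total / 1e9:.1f}B parameters")  # ~12.9B, consistent with the 13B figure
```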
## Training
### Training Dataset
StableVicuna-13B is fine-tuned on a mix of three datasets. [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1), a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages distributed across 66,497 conversation trees, in 35 different languages;
[GPT4All Prompt Generations](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations), a dataset of 400k prompts and responses generated by GPT-3.5-Turbo; and [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003 engine.
The reward model used during RLHF was also trained on [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1) along with two other datasets: [Anthropic HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), a dataset of preferences about AI assistant helpfulness and harmlessness; and the [Stanford Human Preferences Dataset](https://huggingface.co/datasets/stanfordnlp/SHP), a dataset of 385K collective human preferences over responses to questions/instructions in 18 different subject areas, from cooking to legal advice.
### Training Procedure
`CarperAI/stable-vicuna-13b-delta` was trained using PPO as implemented in [`trlX`](https://github.com/CarperAI/trlx/blob/main/trlx/trainer/accelerate_ppo_trainer.py) with the following configuration:
| Hyperparameter | Value |
|-------------------|---------|
| num_rollouts | 128 |
| chunk_size | 16 |
| ppo_epochs | 4 |
| init_kl_coef | 0.1 |
| target | 6 |
| horizon | 10000 |
| gamma | 1 |
| lam | 0.95 |
| cliprange | 0.2 |
| cliprange_value | 0.2 |
| vf_coef | 1.0 |
| scale_reward | None |
| cliprange_reward | 10 |
| generation_kwargs | |
| max_length | 512 |
| min_length | 48 |
| top_k | 0.0 |
| top_p | 1.0 |
| do_sample | True |
| temperature | 1.0 |
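For orientation, the table maps onto a trlX-style PPO method configuration roughly as follows. This is a sketch only: the exact schema and key names depend on the trlX version (v0.6.0 per the citation below) and are not guaranteed here:
```python
# The PPO hyperparameters above, expressed as a trlX-style config dict.
# Key names follow the table; the grouping under gen_kwargs mirrors the
# "generation_kwargs" rows. Schema details are an assumption, not verified.
ppo_method_config = {
    "num_rollouts": 128,
    "chunk_size": 16,
    "ppo_epochs": 4,
    "init_kl_coef": 0.1,
    "target": 6,
    "horizon": 10000,
    "gamma": 1,
    "lam": 0.95,
    "cliprange": 0.2,
    "cliprange_value": 0.2,
    "vf_coef": 1.0,
    "scale_reward": None,
    "cliprange_reward": 10,
    "gen_kwargs": {
        "max_length": 512,
        "min_length": 48,
        "top_k": 0.0,
        "top_p": 1.0,
        "do_sample": True,
        "temperature": 1.0,
    },
}
```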
## Use and Limitations
### Intended Use
This model is intended to be used for text generation with a focus on conversational tasks. Users may further fine-tune the model on their own data to improve the model's performance on their specific tasks in accordance with the non-commercial [license](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Limitations and bias
The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA [paper](https://arxiv.org/abs/2302.13971). We have not performed any studies to determine how fine-tuning on the aforementioned datasets affects the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly.
## Acknowledgements
This work would not have been possible without the support of [Stability AI](https://stability.ai/).
## Citations
```bibtex
@article{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
```bibtex
@misc{vicuna2023,
title = {Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality},
url = {https://vicuna.lmsys.org},
author = {Chiang, Wei-Lin and Li, Zhuohan and Lin, Zi and Sheng, Ying and Wu, Zhanghao and Zhang, Hao and Zheng, Lianmin and Zhuang, Siyuan and Zhuang, Yonghao and Gonzalez, Joseph E. and Stoica, Ion and Xing, Eric P.},
month = {March},
year = {2023}
}
```
```bibtex
@misc{gpt4all,
author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
```
```bibtex
@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```bibtex
@software{leandro_von_werra_2023_7790115,
author = {Leandro von Werra and
Alex Havrilla and
Max reciprocated and
Jonathan Tow and
Aman cat-state and
Duy V. Phung and
Louis Castricato and
Shahbuland Matiana and
Alan and
Ayush Thakur and
Alexey Bukhtiyarov and
aaronrmm and
Fabrizio Milo and
Daniel and
Daniel King and
Dong Shin and
Ethan Kim and
Justin Wei and
Manuel Romero and
Nicky Pochinkov and
Omar Sanseviero and
Reshinth Adithyan and
Sherman Siu and
Thomas Simonini and
Vladimir Blagojevic and
Xu Song and
Zack Witten and
alexandremuzio and
crumb},
title = {{CarperAI/trlx: v0.6.0: LLaMa (Alpaca), Benchmark
Util, T5 ILQL, Tests}},
month = mar,
year = 2023,
publisher = {Zenodo},
version = {v0.6.0},
doi = {10.5281/zenodo.7790115},
url = {https://doi.org/10.5281/zenodo.7790115}
}
```