---
license: other
inference: false
---

# StableVicuna-13B-GGML

These files are GGML format model files of [CarperAI's StableVicuna 13B](https://huggingface.co/CarperAI/stable-vicuna-13b-delta).

They are the result of merging the deltas from the above repository with the original LLaMA 13B weights, then quantising to 4bit and 5bit GGML for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).

## Repositories available

* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ)
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GGML)
* [Unquantised 16bit model in HF format](https://huggingface.co/TheBloke/stable-vicuna-13B-HF)

## Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `stable-vicuna-13B.ggml.q4_0.bin` | q4_0 | 4bit | 4.2GB | 6GB | Maximum compatibility |
| `stable-vicuna-13B.ggml.q4_2.bin` | q4_2 | 4bit | 4.2GB | 6GB | Best compromise between resources, speed and quality |
| `stable-vicuna-13B.ggml.q4_3.bin` | q4_3 | 4bit | 5.0GB | 7GB | Maximum quality 4bit, higher RAM requirements and slower inference |
| `stable-vicuna-13B.ggml.q5_0.bin` | q5_0 | 5bit | 4.63GB | 7GB | Brand-new 5bit method. Potentially higher quality than 4bit, at the cost of slightly higher resources. |
| `stable-vicuna-13B.ggml.q5_1.bin` | q5_1 | 5bit | 5.0GB | 7GB | Brand-new 5bit method. Slightly higher resource usage than q5_0. |

* The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp.
* The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
* The q4_3 file offers the highest quality, at the cost of increased RAM usage and slower inference speed. This format is still subject to change and there may be compatibility issues; see below.
* The q5_0 file uses the brand-new 5bit method released on 26th April. It is the 5bit equivalent of q4_0.
* The q5_1 file uses the brand-new 5bit method released on 26th April. It is the 5bit equivalent of q4_1.
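
You only need one of the files above. As a minimal sketch of fetching a single file directly from this repository (using Hugging Face's standard `resolve` download URL; the file name comes from the table above, so substitute whichever quantisation you want):

```
# Download just the q4_2 file from this repo
wget https://huggingface.co/TheBloke/stable-vicuna-13B-GGML/resolve/main/stable-vicuna-13B.ggml.q4_2.bin
```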

## q4_2 and q4_3 compatibility

q4_2 and q4_3 are new 4bit quantisation methods offering improved quality. However, they are still under development and their formats are subject to change.

To use these files you will need recent llama.cpp code. It is also possible that future updates to llama.cpp will require these files to be re-generated.

If and when the q4_2 and q4_3 files no longer work with recent versions of llama.cpp, I will endeavour to update them.

If you want guaranteed compatibility with a wide range of llama.cpp versions, use the q4_0 file.

## q5_0 and q5_1 compatibility

These new methods were added to llama.cpp on 26th April. You will need to pull the latest llama.cpp code and rebuild it to be able to use them.

Don't expect any third-party UIs or tools to support them yet.
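
As a rough sketch of that update and rebuild (assuming the standard `make` build of llama.cpp):

```
# Get the latest llama.cpp code and rebuild for q5_0/q5_1 support
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git pull   # if you already had a clone, update it instead
make
```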

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 18 -m stable-vicuna-13B.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Human: Write a story about llamas
### Assistant:"
```

Change `-t 18` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
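
For example, an interactive chat invocation with the same model and sampling settings might look like this (the thread count here is illustrative, per the note above):

```
./main -t 8 -m stable-vicuna-13B.ggml.q4_2.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```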

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

Note: at this time text-generation-webui does not support the new q5 quantisation methods.

**Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) so that these files can be used in the UI.

# Original StableVicuna-13B model card

## Model Description

StableVicuna-13B is a [Vicuna-13B v0](https://huggingface.co/lmsys/vicuna-13b-delta-v0) model fine-tuned using reinforcement learning from human feedback (RLHF) via Proximal Policy Optimization (PPO) on various conversational and instructional datasets.

## Model Details

* **Trained by**: [Duy Phung](https://github.com/PhungVanDuy) of [CarperAI](https://carper.ai)
* **Model type**: **StableVicuna-13B** is an auto-regressive language model based on the LLaMA transformer architecture.
* **Language(s)**: English
* **Library**: [trlX](https://github.com/CarperAI/trlx)
* **License for delta weights**: [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
  * *Note*: License for the base LLaMA model's weights is Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
* **Contact**: For questions and comments about the model, visit the [CarperAI](https://discord.com/invite/KgfkCVYHdu) and [StableFoundation](https://discord.gg/stablediffusion) Discord servers.

| Hyperparameter | Value |
|---------------------------|-------|
| \\(n_\text{parameters}\\) | 13B |
| \\(d_\text{model}\\) | 5120 |
| \\(n_\text{layers}\\) | 40 |
| \\(n_\text{heads}\\) | 40 |
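
For reference, these values imply a per-head dimension of \\(d_\text{model} / n_\text{heads} = 5120 / 40 = 128\\), matching the base LLaMA-13B architecture.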

## Training

### Training Dataset

StableVicuna-13B is fine-tuned on a mix of three datasets: [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1), a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages distributed across 66,497 conversation trees, in 35 different languages; [GPT4All Prompt Generations](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations), a dataset of 400k prompts and responses generated by GPT-3.5-Turbo; and [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003 engine.

The reward model used during RLHF was also trained on [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1), along with two other datasets: [Anthropic HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), a dataset of preferences about AI assistant helpfulness and harmlessness; and the [Stanford Human Preferences Dataset](https://huggingface.co/datasets/stanfordnlp/SHP), a dataset of 385K collective human preferences over responses to questions/instructions in 18 different subject areas, from cooking to legal advice.

### Training Procedure

`CarperAI/stable-vicuna-13b-delta` was trained using PPO as implemented in [`trlX`](https://github.com/CarperAI/trlx/blob/main/trlx/trainer/accelerate_ppo_trainer.py) with the following configuration:

| Hyperparameter | Value |
|-------------------|---------|
| num_rollouts | 128 |
| chunk_size | 16 |
| ppo_epochs | 4 |
| init_kl_coef | 0.1 |
| target | 6 |
| horizon | 10000 |
| gamma | 1 |
| lam | 0.95 |
| cliprange | 0.2 |
| cliprange_value | 0.2 |
| vf_coef | 1.0 |
| scale_reward | None |
| cliprange_reward | 10 |
| generation_kwargs | |
| max_length | 512 |
| min_length | 48 |
| top_k | 0.0 |
| top_p | 1.0 |
| do_sample | True |
| temperature | 1.0 |

## Use and Limitations

### Intended Use

This model is intended to be used for text generation with a focus on conversational tasks. Users may further fine-tune the model on their own data to improve the model's performance on their specific tasks in accordance with the non-commercial [license](https://creativecommons.org/licenses/by-nc/4.0/).

### Limitations and bias

The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA [paper](https://arxiv.org/abs/2302.13971). We have not performed any studies to determine how fine-tuning on the aforementioned datasets affects the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly.

## Acknowledgements

This work would not have been possible without the support of [Stability AI](https://stability.ai/).

## Citations

```bibtex
@article{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```

```bibtex
@misc{vicuna2023,
  title = {Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality},
  url = {https://vicuna.lmsys.org},
  author = {Chiang, Wei-Lin and Li, Zhuohan and Lin, Zi and Sheng, Ying and Wu, Zhanghao and Zhang, Hao and Zheng, Lianmin and Zhuang, Siyuan and Zhuang, Yonghao and Gonzalez, Joseph E. and Stoica, Ion and Xing, Eric P.},
  month = {March},
  year = {2023}
}
```

```bibtex
@misc{gpt4all,
  author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
  title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
```

```bibtex
@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
  title = {Stanford Alpaca: An Instruction-following LLaMA model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```

```bibtex
@software{leandro_von_werra_2023_7790115,
  author = {Leandro von Werra and Alex Havrilla and Max reciprocated and Jonathan Tow and Aman cat-state and Duy V. Phung and Louis Castricato and Shahbuland Matiana and Alan and Ayush Thakur and Alexey Bukhtiyarov and aaronrmm and Fabrizio Milo and Daniel and Daniel King and Dong Shin and Ethan Kim and Justin Wei and Manuel Romero and Nicky Pochinkov and Omar Sanseviero and Reshinth Adithyan and Sherman Siu and Thomas Simonini and Vladimir Blagojevic and Xu Song and Zack Witten and alexandremuzio and crumb},
  title = {{CarperAI/trlx: v0.6.0: LLaMa (Alpaca), Benchmark Util, T5 ILQL, Tests}},
  month = mar,
  year = 2023,
  publisher = {Zenodo},
  version = {v0.6.0},
  doi = {10.5281/zenodo.7790115},
  url = {https://doi.org/10.5281/zenodo.7790115}
}
```