---
license: cc-by-nc-sa-4.0
datasets:
- camel-ai/code
- ehartford/wizard_vicuna_70k_unfiltered
- anon8231489123/ShareGPT_Vicuna_unfiltered
- teknium1/GPTeacher/roleplay-instruct-v2-final
- teknium1/GPTeacher/codegen-isntruct
- timdettmers/openassistant-guanaco
- camel-ai/math
- project-baize/baize-chatbot/medical_chat_data
- project-baize/baize-chatbot/quora_chat_data
- project-baize/baize-chatbot/stackoverflow_chat_data
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/ai_society
- jondurbin/airoboros-gpt4-1.2
- LongConversations
- camel-ai/physics
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
[![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()

I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.

# mpt-7b-8k-chat - GGUF
- Model creator: [mosaicml](https://huggingface.co/mosaicml)
- Original model: [mpt-7b-8k-chat](https://huggingface.co/mosaicml/mpt-7b-8k-chat)

# Important Update for Falcon Models in llama.cpp Versions After October 18, 2023

As noted on the [Llama.cpp GitHub repository](https://github.com/ggerganov/llama.cpp#hot-topics), all new Llama.cpp releases after October 18, 2023, will require a re-quantization due to the new BPE tokenizer.

**Good news!** My re-quantization process for Falcon models is nearly complete. Download the latest quantized models to ensure compatibility with recent llama.cpp software.

**Key Points:**

- **Stay Informed:** Keep an eye on the release schedules of software applications that use the llama.cpp libraries.
- **Monitor Upload Times:** Re-quantization is *almost* done. Watch for updates on my Hugging Face model pages.

**Important Compatibility Note:** Old software will continue to work with old Falcon models, but expect updated software to support only the new models.

This change primarily affects **Falcon** and **Starcoder** models; other models remain unaffected.

# About GGUF format

`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software supports this format and can therefore use this model.
The core project building on the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.

# Quantization variants

A number of quantized files are available. Here is how to choose the one that best fits your needs:

# Legacy quants

Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that prevent certain models from being compatible with the modern K-quants.
For example, Falcon 7B models cannot be quantized to K-quants.

# K-quants

K-quants are based on the idea that quantizing different parts of the model affects quality to different degrees. By quantizing some parts more aggressively and others less, you get a more capable model at the same file size, or the same capability at a smaller file size and lower memory load.
So, if possible, use K-quants.
With a Q6_K you should find it very hard to detect a quality difference from the original model - ask your model the same question twice and you may well see bigger differences between the two answers.

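As an illustration of how these GGUF files can be used from Python, here is a minimal sketch using the `llama-cpp-python` bindings for llama.cpp. The file name below is a placeholder, not an actual file listing from this repository; substitute the quantization variant you downloaded.

```python
# Minimal sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
# The GGUF file name is a placeholder - point it at the quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./mpt-7b-8k-chat.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=8192,      # context window; this model supports up to 8k tokens
    n_gpu_layers=0,  # increase to offload layers when using a GPU build of llama.cpp
)

output = llm(
    "Here is a recipe for vegan banana bread:\n",
    max_tokens=200,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```
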
---

# Original Model Card:
# MPT-7B-Chat-8k

MPT-7B-Chat-8k is a chatbot-like model for dialogue generation.
It was built by finetuning [MPT-7B-8k](https://huggingface.co/mosaicml/mpt-7b-8k) on the [ShareGPT-Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), [Camel-AI](https://huggingface.co/camel-ai),
[GPTeacher](https://github.com/teknium1/GPTeacher), [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), [Baize](https://github.com/project-baize/baize-chatbot) and some generated datasets.
This is the same dataset that [MPT-30B-Chat](https://huggingface.co/mosaicml/mpt-30b-chat) was trained on.
* License: _CC-By-NC-SA-4.0_ (non-commercial use only)

This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.

## Model Date

July 18, 2023

## Model License

_CC-By-NC-SA-4.0_ (non-commercial use only)

## Documentation

* [Blog post: MPT-7B-8k](https://www.mosaicml.com/blog/long-context-mpt-7b-8k)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!

## How to Use

This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.

```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b-chat-8k',
  trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.

To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers

name = 'mosaicml/mpt-7b-chat-8k'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'  # change this to use triton-based FlashAttention
config.init_device = 'cuda:0'  # For fast initialization directly on GPU!

model = transformers.AutoModelForCausalLM.from_pretrained(
  name,
  config=config,
  torch_dtype=torch.bfloat16,  # Load model weights in bfloat16
  trust_remote_code=True
)
```

The model was trained initially with a sequence length of 2048, with an additional pretraining stage for sequence length adaptation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example:

```python
import transformers

name = 'mosaicml/mpt-7b-chat-8k'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384  # (input + output) tokens can now be up to 16384

model = transformers.AutoModelForCausalLM.from_pretrained(
  name,
  config=config,
  trust_remote_code=True
)
```

This model was trained with the MPT-7B-chat tokenizer, which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional ChatML tokens.

```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-7b-8k')
```

The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).

```python
import torch
from transformers import pipeline

with torch.autocast('cuda', dtype=torch.bfloat16):
    inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.batch_decode(outputs, skip_special_tokens=True))

# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe('Here is a recipe for vegan banana bread:\n',
             max_new_tokens=100,
             do_sample=True,
             use_cache=True))
```
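
Since the tokenizer includes ChatML tokens, dialogue prompts for this chat model are typically wrapped in ChatML markup. The following is a sketch under that assumption; the system message and user turn are illustrative and not part of the original card.

```python
# Sketch of ChatML-style prompt formatting (assumption: the chat model follows
# the ChatML convention with <|im_start|>/<|im_end|> role tags).
# Reuses `torch`, `model`, and `tokenizer` from the examples above.
def format_chatml(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = format_chatml(
    "You are a helpful assistant.",             # illustrative system message
    "Write a short haiku about banana bread."   # illustrative user turn
)

with torch.autocast('cuda', dtype=torch.bfloat16):
    inputs = tokenizer(prompt, return_tensors="pt").to('cuda')
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```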

## Model Description

The architecture is a modification of a standard decoder-only transformer.

The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases


| Hyperparameter  | Value |
|-----------------|-------|
| n_parameters    | 6.7B  |
| n_layers        | 32    |
| n_heads         | 32    |
| d_model         | 4096  |
| vocab size      | 50432 |
| sequence length | 2048  |
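
These values can also be read back from the model config. The snippet below is a sketch; the attribute names are an assumption based on the custom MPT config used elsewhere in this card, and `max_seq_len` in the released config should reflect the 8k adaptation rather than the 2048 pretraining length listed above.

```python
# Sketch: read the hyperparameters above back from the model config.
# Attribute names (n_layers, n_heads, d_model, vocab_size, max_seq_len) are
# assumed to match the custom MPT config exposed via trust_remote_code.
import transformers

config = transformers.AutoConfig.from_pretrained(
    'mosaicml/mpt-7b-chat-8k', trust_remote_code=True
)
print(config.n_layers, config.n_heads, config.d_model)  # expected: 32 32 4096
print(config.vocab_size)                                # expected: 50432
print(config.max_seq_len)  # expected to reflect the 8k sequence-length adaptation
```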

## Data Mix

The model was trained on the following data mix:

| Data Source | Number of Tokens in Source | Proportion |
|-------------|----------------------------|------------|
| Airoboros/GPT4-1.2 | 26.4M | 1.71% |
| Baize | 55.0M | 3.57% |
| Camel | 301M | 19.54% |
| GPTeacher | 7.56M | 0.49% |
| Guanaco | 15.6M | 1.02% |
| LongConversations | 18.4M | 1.19% |
| ShareGPT | 821M | 53.24% |
| WizardLM | 297M | 19.23% |

"LongConversations" is a GPT3.5/4-generated dataset, details of which will be released at a later date.

### Training Configuration

This model was trained on 192 H100s for about 48 minutes using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.

## Limitations and Biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

MPT-7B-Chat-8k can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Chat-8k was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

## Acknowledgements

This model was finetuned by the MosaicML NLP team.

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## MosaicML Platform

If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://www.mosaicml.com/get-started?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b-8k).


## Citation

Please cite this model using the following format:

```
@online{MosaicML2023Introducing,
    author  = {MosaicML NLP Team},
    title   = {Introducing MPT-30B: Raising the bar
               for open-source foundation models},
    year    = {2023},
    url     = {www.mosaicml.com/blog/mpt-30b},
    note    = {Accessed: 2023-06-22},
    urldate = {2023-06-22}
}
```

***End of original Model File***
---

## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I'm hoping for some support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Thank you for your patience and ongoing support as I work to make this page an even more valuable resource for the community.

<center>

[![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io)
[![Stack Exchange](https://stackexchange.com/users/flair/26485911.png)](https://stackexchange.com/users/26485911)
[![GitHub](https://maddes8cht.github.io/assets/buttons/github-button.png)](https://github.com/maddes8cht)
[![HuggingFace](https://maddes8cht.github.io/assets/buttons/huggingface-button.png)](https://huggingface.co/maddes8cht)
[![Twitter](https://maddes8cht.github.io/assets/buttons/twitter-button.png)](https://twitter.com/maddes1966)

</center>