Commit 4c5a53a by TheBloke
1 Parent(s): 65e845f

Update for Transformers GPTQ support

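In brief: the GPTQ parameters are embedded as a `quantization_config` block in config.json so that Transformers can load the model directly (a loading sketch follows the config.json diff below), the weights file is renamed to model.safetensors, `model_file_base_name` is added to quantize_config.json, and the README header and footer are refreshed.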
README.md CHANGED
@@ -11,17 +11,20 @@ pipeline_tag: text-generation
 ---
 
 <!-- header start -->
-<div style="width: 100%;">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # Open-Orca's OpenOrca-Preview1-13B GPTQ
@@ -172,6 +175,7 @@ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLa
 ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
 
 <!-- footer start -->
+<!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
@@ -191,12 +195,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
+**Special thanks to**: Aemon Algiz.
 
-**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
+
 
 Thank you to all my generous patrons and donaters!
 
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 
 # Original model card: Open-Orca's OpenOrca Preview1 200k GPT4 LLaMA 13B
@@ -276,7 +283,7 @@ Please await our full releases for further training details.
 ```
 ```bibtex
 @misc{mukherjee2023orca,
-      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
+      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
       author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
       year={2023},
       eprint={2306.02707},
@@ -286,7 +293,7 @@ Please await our full releases for further training details.
 ```
 ```bibtex
 @misc{longpre2023flan,
-      title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
+      title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
       author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
       year={2023},
       eprint={2301.13688},
config.json CHANGED
@@ -1,24 +1,34 @@
 {
-  "_name_or_path": "/home/sgugger/tmp/llama/llama-13b/",
-  "architectures": [
-    "LlamaForCausalLM"
-  ],
-  "bos_token_id": 1,
-  "eos_token_id": 2,
-  "hidden_act": "silu",
-  "hidden_size": 5120,
-  "initializer_range": 0.02,
-  "intermediate_size": 13824,
-  "max_position_embeddings": 2048,
-  "max_sequence_length": 2048,
-  "model_type": "llama",
-  "num_attention_heads": 40,
-  "num_hidden_layers": 40,
-  "pad_token_id": 0,
-  "rms_norm_eps": 1e-06,
-  "tie_word_embeddings": false,
-  "torch_dtype": "float16",
-  "transformers_version": "4.28.0.dev0",
-  "use_cache": true,
-  "vocab_size": 32000
+  "_name_or_path": "/home/sgugger/tmp/llama/llama-13b/",
+  "architectures": [
+    "LlamaForCausalLM"
+  ],
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "hidden_act": "silu",
+  "hidden_size": 5120,
+  "initializer_range": 0.02,
+  "intermediate_size": 13824,
+  "max_position_embeddings": 2048,
+  "max_sequence_length": 2048,
+  "model_type": "llama",
+  "num_attention_heads": 40,
+  "num_hidden_layers": 40,
+  "pad_token_id": 0,
+  "rms_norm_eps": 1e-06,
+  "tie_word_embeddings": false,
+  "torch_dtype": "float16",
+  "transformers_version": "4.28.0.dev0",
+  "use_cache": true,
+  "vocab_size": 32000,
+  "quantization_config": {
+    "bits": 4,
+    "group_size": 128,
+    "damp_percent": 0.01,
+    "desc_act": false,
+    "sym": true,
+    "true_sequential": true,
+    "model_file_base_name": "model",
+    "quant_method": "gptq"
+  }
 }
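The embedded `quantization_config` block is what the commit title refers to: recent Transformers versions read it and handle GPTQ weights through the standard API. Below is a minimal loading sketch, assuming Transformers >= 4.32 with the optimum and auto-gptq packages installed; the repo id is an assumption inferred from the model name, not stated in this commit.

```python
# A minimal sketch (not part of the commit). Assumes Transformers >= 4.32 with
# optimum and auto-gptq installed, plus accelerate for device_map="auto".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/OpenOrca-Preview1-13B-GPTQ"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Transformers reads the embedded "quantization_config" (quant_method: "gptq")
# from config.json; no AutoGPTQ-specific loading code is needed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Tell me about AI"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```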
openorca-preview1-200k-llama-13b-GPTQ-4bit-128g.no-act.order.safetensors → model.safetensors RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:460b079c9e8765c7005c4b141959182fb13b6838dc9f2ac6ab63427165b2e338
-size 7454797160
+oid sha256:bca6d13c3f62493bad777b3b3dfd7d2669dead4fcf2d4c5ba7a35cf503ec593b
+size 7454797216
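This file is a Git LFS pointer, and its oid is the SHA-256 of the actual weights file, so a downloaded copy can be checked against it. A small verification sketch:

```python
# Hedged sketch: verify a downloaded model.safetensors against the LFS
# pointer's oid (the file's SHA-256), using only the standard library.
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while data := f.read(chunk):  # stream in 1 MiB chunks
            h.update(data)
    return h.hexdigest()

expected = "bca6d13c3f62493bad777b3b3dfd7d2669dead4fcf2d4c5ba7a35cf503ec593b"
assert sha256_of("model.safetensors") == expected
```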
quantize_config.json CHANGED
@@ -1,8 +1,9 @@
 {
-  "bits": 4,
-  "group_size": 128,
-  "damp_percent": 0.01,
-  "desc_act": false,
-  "sym": true,
-  "true_sequential": true
+  "bits": 4,
+  "group_size": 128,
+  "damp_percent": 0.01,
+  "desc_act": false,
+  "sym": true,
+  "true_sequential": true,
+  "model_file_base_name": "model"
 }
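The new `model_file_base_name` matches the rename to model.safetensors above, letting AutoGPTQ locate the weights file without an explicit basename argument. A hedged sketch of direct AutoGPTQ loading, with the same assumed repo id as before:

```python
# Hedged sketch (not from the commit). With "model_file_base_name": "model" in
# quantize_config.json, AutoGPTQ resolves model.safetensors on its own; before
# this commit, callers had to pass the long original basename via model_basename.
from auto_gptq import AutoGPTQForCausalLM

model = AutoGPTQForCausalLM.from_quantized(
    "TheBloke/OpenOrca-Preview1-13B-GPTQ",  # assumed repo id
    use_safetensors=True,
    device="cuda:0",
)
```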