TheBloke committed
Commit ab4aab6
Parent: 2281226

Update for Transformers GPTQ support

README.md CHANGED
@@ -9,17 +9,20 @@ quantized_by: TheBloke
---

<!-- header start -->
- <div style="width: 100%;">
- <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
- <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
+ <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+ <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Kimiko 13B - GPTQ

@@ -56,13 +59,13 @@ Each separate quant is in a different branch. See below for instructions on fet

| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
- | [main](https://huggingface.co/TheBloke/Kimiko-13B-GPTQ/tree/main) | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
- | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Kimiko-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
- | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Kimiko-13B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
- | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Kimiko-13B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
- | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Kimiko-13B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
- | [gptq-8bit-128g-actorder_False](https://huggingface.co/TheBloke/Kimiko-13B-GPTQ/tree/gptq-8bit-128g-actorder_False) | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
- | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Kimiko-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | True | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
+ | [main](https://huggingface.co/TheBloke/Kimiko-13B-GPTQ/tree/main) | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+ | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Kimiko-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+ | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Kimiko-13B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Kimiko-13B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Kimiko-13B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
+ | [gptq-8bit-128g-actorder_False](https://huggingface.co/TheBloke/Kimiko-13B-GPTQ/tree/gptq-8bit-128g-actorder_False) | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
+ | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Kimiko-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | True | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-8bit-64g-actorder_True](https://huggingface.co/TheBloke/Kimiko-13B-GPTQ/tree/gptq-8bit-64g-actorder_True) | 8 | 64 | True | 13.95 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |

## How to download from branches

@@ -106,7 +109,7 @@ from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/Kimiko-13B-GPTQ"
- model_basename = "gptq_model-4bit-128g"
+ model_basename = "model"

use_triton = False

@@ -171,6 +174,7 @@ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLa
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.

<!-- footer start -->
+ <!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

@@ -190,13 +194,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
+ **Special thanks to**: Aemon Algiz.

- **Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse
+ **Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter


Thank you to all my generous patrons and donaters!

+ And thank you again to a16z for their generous grant.
+
<!-- footer end -->

# Original model card: nRuaif's Kimiko 13B

@@ -218,7 +224,7 @@ This is my new Kimiko models, trained with LLaMA2-13B for...purpose

- **Developed by:** nRuaif
- **Model type:** Decoder only
- - **License:** CC BY-NC-SA
+ - **License:** CC BY-NC-SA
- **Finetuned from model [optional]:** LLaMA 2

### Model Sources [optional]

@@ -271,11 +277,11 @@ All bias of this model come from LlaMA2 with an exception of NSFW bias.....

3000 examples from LIMAERP, LIMA and I sample 1000 good instruction from Airboro

- ### Training Procedure
+ ### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

- Model is trained with 1 L4 from GCP costing a whooping 2.5USD
+ Model is trained with 1 L4 from GCP costing a whooping 2.5USD



@@ -313,6 +319,6 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
- **Hours used:** 5
- **Cloud Provider:** GCP
- **Compute Region:** US
- - **Carbon Emitted:** 0.5KG
+ - **Carbon Emitted:** 0.5KG

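The `model_basename` change in the README goes hand in hand with the weights rename recorded further down (`gptq_model-4bit-128g.safetensors` → `model.safetensors`). As a rough sketch of the updated AutoGPTQ loading path — assuming `auto-gptq` is installed, and with `revision` (covered in the README's branch-download section, not shown in these hunks) used here purely as an example of selecting a non-main quant from the table:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/Kimiko-13B-GPTQ"
model_basename = "model"  # was "gptq_model-4bit-128g" before this commit

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

# revision picks one of the quant branches from the table; omit it for main.
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    model_basename=model_basename,
    revision="gptq-4bit-32g-actorder_True",
    use_safetensors=True,
    device="cuda:0",
    use_triton=False,
)

prompt = "Tell me about AI"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
output = model.generate(input_ids=input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0]))
```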
 
config.json CHANGED
@@ -1,26 +1,37 @@
{
- "_name_or_path": "/workspace/process/lora_base/meta-llama_Llama-2-13b-hf",
- "architectures": [
-   "LlamaForCausalLM"
- ],
- "bos_token_id": 1,
- "eos_token_id": 2,
- "hidden_act": "silu",
- "hidden_size": 5120,
- "initializer_range": 0.02,
- "intermediate_size": 13824,
- "max_position_embeddings": 4096,
- "model_type": "llama",
- "num_attention_heads": 40,
- "num_hidden_layers": 40,
- "num_key_value_heads": 40,
- "pad_token_id": 0,
- "pretraining_tp": 1,
- "rms_norm_eps": 1e-05,
- "rope_scaling": null,
- "tie_word_embeddings": false,
- "torch_dtype": "float16",
- "transformers_version": "4.32.0.dev0",
- "use_cache": true,
- "vocab_size": 32000
+ "_name_or_path": "/workspace/process/lora_base/meta-llama_Llama-2-13b-hf",
+ "architectures": [
+   "LlamaForCausalLM"
+ ],
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "hidden_act": "silu",
+ "hidden_size": 5120,
+ "initializer_range": 0.02,
+ "intermediate_size": 13824,
+ "max_position_embeddings": 4096,
+ "model_type": "llama",
+ "num_attention_heads": 40,
+ "num_hidden_layers": 40,
+ "num_key_value_heads": 40,
+ "pad_token_id": 0,
+ "pretraining_tp": 1,
+ "rms_norm_eps": 1e-05,
+ "rope_scaling": null,
+ "tie_word_embeddings": false,
+ "torch_dtype": "float16",
+ "transformers_version": "4.32.0.dev0",
+ "use_cache": true,
+ "vocab_size": 32000,
+ "quantization_config": {
+   "bits": 4,
+   "group_size": 128,
+   "damp_percent": 0.1,
+   "desc_act": false,
+   "sym": true,
+   "true_sequential": true,
+   "model_name_or_path": null,
+   "model_file_base_name": "model",
+   "quant_method": "gptq"
+ }
}
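The embedded `quantization_config` block is what the commit title refers to: with `transformers` >= 4.32 plus `optimum` and `auto-gptq` installed, the model should load through the standard Transformers API with no GPTQ-specific loader code. A minimal sketch, assuming those packages are present:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "TheBloke/Kimiko-13B-GPTQ"

# Transformers reads quantization_config from config.json and dispatches to
# its GPTQ backend (optimum + auto-gptq); no AutoGPTQ-specific loader needed.
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
```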
gptq_model-4bit-128g.safetensors → model.safetensors RENAMED
File without changes
quantize_config.json CHANGED
@@ -6,5 +6,5 @@
  "sym": true,
  "true_sequential": true,
  "model_name_or_path": null,
- "model_file_base_name": null
+ "model_file_base_name": "model"
}
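Setting `model_file_base_name` to `"model"` mirrors the file rename above, letting loaders that consult `quantize_config.json` resolve the weights file without an explicit `model_basename` argument. A purely illustrative sketch of that resolution (the logic here is hypothetical, not AutoGPTQ's actual code):

```python
import json

# Illustrative only: how a loader could derive the weights filename
# from quantize_config.json after this commit.
with open("quantize_config.json") as f:
    cfg = json.load(f)

base = cfg["model_file_base_name"]    # "model" after this commit
weights_file = f"{base}.safetensors"  # -> "model.safetensors"
print(weights_file)
```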