TheBloke committed
Commit 17f9ea5
1 Parent(s): 3721281

Initial GPTQ model commit

Files changed (1):
  1. README.md +148 -0
README.md ADDED
@@ -0,0 +1,148 @@
+ ---
+ inference: false
+ license: other
+ ---
+
+ <!-- header start -->
+ <div style="width: 100%;">
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
+ </div>
+ <div style="display: flex; flex-direction: column; align-items: flex-end;">
+ <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+ </div>
+ </div>
+ <!-- header end -->
+
+ # Austism's Chronos WizardLM UC Scot ST 13B GPTQ
+
+ These files are GPTQ 4-bit model files for [Austism's Chronos WizardLM UC Scot ST 13B](https://huggingface.co/Austism/chronos-wizardlm-uc-scot-st-13b).
+
+ It is the result of quantising the model to 4-bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
+
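+ For readers who want to reproduce a similar quantisation from Python, the sketch below uses AutoGPTQ's quantisation API rather than the GPTQ-for-LLaMa CLI that produced this repo, so treat it as an illustration, not the exact command used. The single calibration example is a stand-in for the much larger calibration set a real run needs:
+
+ ```python
+ from transformers import AutoTokenizer
+ from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
+
+ pretrained_model_dir = "Austism/chronos-wizardlm-uc-scot-st-13b"
+ quantized_model_dir = "chronos-wizardlm-uc-scot-st-13B-GPTQ"  # hypothetical output dir
+
+ tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True)
+
+ # Tiny stand-in calibration set; a real quantisation uses many samples (e.g. from c4)
+ examples = [tokenizer("auto-gptq is an easy-to-use model quantisation library.")]
+
+ quantize_config = BaseQuantizeConfig(
+     bits=4,          # 4-bit quantisation, as in this repo
+     group_size=128,  # groupsize 128, as in this repo
+     desc_act=False,  # no act-order, matching the provided file
+ )
+
+ # Load the unquantised fp16 model, quantise, and save as safetensors
+ model = AutoGPTQForCausalLM.from_pretrained(pretrained_model_dir, quantize_config)
+ model.quantize(examples)
+ model.save_quantized(quantized_model_dir, use_safetensors=True)
+ ```
+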
+ ## Repositories available
+
+ * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GGML)
+ * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Austism/chronos-wizardlm-uc-scot-st-13b)
+
+ ## How to easily download and use this model in text-generation-webui
+
+ Please make sure you're using the latest version of text-generation-webui.
+
+ 1. Click the **Model tab**.
+ 2. Under **Download custom model or LoRA**, enter `TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ`.
+ 3. Click **Download**.
+ 4. The model will start downloading, and once finished it will be automatically loaded.
+ 5. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
+    * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
+ 6. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
+
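+ As an alternative to the UI download, here is a minimal sketch using the huggingface_hub library. It assumes a recent huggingface_hub that supports `snapshot_download`'s `local_dir` parameter, and the destination path is a placeholder:
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Fetch every file in the repo to a local directory of your choice
+ snapshot_download(
+     repo_id="TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ",
+     local_dir="models/chronos-wizardlm-uc-scot-st-13B-GPTQ",
+ )
+ ```
+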
+ ## How to use this GPTQ model from Python code
+
+ First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
+
+ `pip install auto-gptq`
+
+ Then try the following example code:
+
+ ```python
+ from transformers import AutoTokenizer, pipeline, logging
+ from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
+
+ model_name_or_path = "TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ"
+ model_basename = "chronos-wizardlm-uc-scot-st-13B-GPTQ-4bit-128g.no-act.order"
+
+ use_triton = False
+
+ tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
+
+ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
+         model_basename=model_basename,
+         use_safetensors=True,
+         trust_remote_code=True,
+         device="cuda:0",
+         use_triton=use_triton,
+         quantize_config=None)
+
+ # Define the prompt before it is used by generate() and the pipeline below
+ prompt = "Tell me about AI"
+ prompt_template=f'''### Human: {prompt}
+ ### Assistant:'''
+
+ print("\n\n*** Generate:")
+
+ input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
+ output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
+ print(tokenizer.decode(output[0]))
+
+ # Inference can also be done using transformers' pipeline
+
+ # Prevent printing spurious transformers error when using pipeline with AutoGPTQ
+ logging.set_verbosity(logging.CRITICAL)
+
+ print("*** Pipeline:")
+ pipe = pipeline(
+     "text-generation",
+     model=model,
+     tokenizer=tokenizer,
+     max_new_tokens=512,
+     temperature=0.7,
+     top_p=0.95,
+     repetition_penalty=1.15
+ )
+
+ print(pipe(prompt_template)[0]['generated_text'])
+ ```
+
+ ## Provided files
+
+ **chronos-wizardlm-uc-scot-st-13B-GPTQ-4bit-128g.no-act.order.safetensors**
+
+ This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.
+
+ It was created with group_size 128 to increase inference accuracy, but without --act-order (desc_act), to increase compatibility and improve inference speed.
+
+ * `chronos-wizardlm-uc-scot-st-13B-GPTQ-4bit-128g.no-act.order.safetensors`
+   * Works with AutoGPTQ in CUDA or Triton modes.
+   * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
+   * Works with text-generation-webui, including one-click-installers.
+   * Parameters: Groupsize = 128. Act Order / desc_act = False.
+
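+ If you prefer not to rely on the repo's `quantize_config.json`, here is a minimal sketch of loading with these parameters spelled out explicitly (the values mirror the list above):
+
+ ```python
+ from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
+
+ # Mirrors the parameters above; normally quantize_config.json supplies these
+ # automatically and quantize_config=None is sufficient.
+ quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)
+
+ model = AutoGPTQForCausalLM.from_quantized(
+     "TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ",
+     model_basename="chronos-wizardlm-uc-scot-st-13B-GPTQ-4bit-128g.no-act.order",
+     use_safetensors=True,
+     device="cuda:0",
+     quantize_config=quantize_config,
+ )
+ ```
+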
+ <!-- footer start -->
+ ## Discord
+
+ For further support, and discussions on these models and AI in general, join us at:
+
+ [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
+
+ ## Thanks, and how to contribute.
+
+ Thanks to the [chirper.ai](https://chirper.ai) team!
+
+ I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
+
+ If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
+
+ Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
+
+ * Patreon: https://patreon.com/TheBlokeAI
+ * Ko-Fi: https://ko-fi.com/TheBlokeAI
+
+ **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
+
+ **Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
+
+ Thank you to all my generous patrons and donors!
+
+ <!-- footer end -->
+
+ # Original model card: Austism's Chronos WizardLM UC Scot ST 13B
+
+ (chronos-13b + (WizardLM Uncensored + CoT + Storytelling))
+ 80/20 merge
+
+ Intended to be much like Chronos, with different writing and instruction-following capabilities.