---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# CalderaAI's 30B Lazarus GGML

These files are GGML format model files for [CalderaAI's 30B Lazarus](https://huggingface.co/CalderaAI/30B-Lazarus).

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
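
For example, here is a minimal Python sketch using ctransformers (the last library listed above) to load one of these files directly from this repo and generate text. The filename is just one example from the Provided Files table below; an 'original method' quant is chosen for the broadest compatibility:

```python
# A minimal ctransformers sketch (illustrative; see "Provided files" below for
# other filenames). Uses an 'original method' quant for broad compatibility.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/30B-Lazarus-GGML",
    model_file="30b-Lazarus.ggmlv3.q4_0.bin",
    model_type="llama",
)
print(llm("### Instruction: Write a story about llamas\n### Response:"))
```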

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/CalderaAI/30B-Lazarus-GPTQ4bit)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/30B-Lazarus-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/CalderaAI/30B-Lazarus)
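
If you only need a single quant file rather than the whole repo, one way to fetch it from Python is with `huggingface_hub` (a sketch; the filename is an example, and any name from the Provided Files table below will work):

```python
# Sketch: download one quant file from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/30B-Lazarus-GGML",
    filename="30b-Lazarus.ggmlv3.q5_0.bin",  # example; see "Provided files" below
)
print(model_path)  # local path, e.g. to pass to llama.cpp's -m argument
```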

<!-- compatibility_ggml start -->
## Compatibility

### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`

I quantised the files using these 'original' methods with an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.

They should be compatible with all current UIs and libraries that use llama.cpp, such as those listed at the top of this README.

### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

These new quantisation methods are only compatible with llama.cpp as of June 6th, commit `2d43387`.

They are NOT yet compatible with koboldcpp, text-generation-webui, or other UIs and libraries. Support is expected to arrive over the next few days.

## Explanation of the new k-quant methods

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
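
As a quick check, the bpw figures above follow from the stated block structures. Here is a small arithmetic sketch, assuming 256 weights per super-block, a single fp16 super-block scale for the "type-0" methods, and an fp16 scale plus an fp16 min for the "type-1" Q4_K/Q5_K layout (Q2_K packs its metadata differently and is omitted):

```python
# Reproduce the bits-per-weight figures from the block structures described above.
def bpw(bits: int, n_blocks: int, meta_bits_per_block: int, n_fp16_super: int) -> float:
    weights = 256  # weights per super-block
    total_bits = weights * bits + n_blocks * meta_bits_per_block + 16 * n_fp16_super
    return total_bits / weights

print(bpw(3, 16, 6, 1))     # Q3_K: 6-bit scales          -> 3.4375
print(bpw(4, 8, 2 * 6, 2))  # Q4_K: 6-bit scales and mins -> 4.5
print(bpw(5, 8, 2 * 6, 2))  # Q5_K: same layout as Q4_K   -> 5.5
print(bpw(6, 16, 8, 1))     # Q6_K: 8-bit scales          -> 6.5625
```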

Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| 30b-Lazarus.ggmlv3.q2_K.bin | q2_K | 2 | 13.60 GB | 16.10 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| 30b-Lazarus.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.20 GB | 19.70 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| 30b-Lazarus.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.64 GB | 18.14 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| 30b-Lazarus.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 13.98 GB | 16.48 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| 30b-Lazarus.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB | 20.80 GB | Original llama.cpp quant method, 4-bit. |
| 30b-Lazarus.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB | 22.83 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0, with quicker inference than the q5 models. |
| 30b-Lazarus.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.57 GB | 22.07 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| 30b-Lazarus.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.30 GB | 20.80 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| 30b-Lazarus.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB | 24.87 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| 30b-Lazarus.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB | 26.90 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| 30b-Lazarus.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.02 GB | 25.52 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| 30b-Lazarus.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.37 GB | 24.87 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| 30b-Lazarus.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q6_K - 6-bit quantization - for all tensors |
| 30b-Lazarus.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB | 37.06 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
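
Each "Max RAM required" figure in the table is simply the file size plus roughly 2.5 GB of overhead, so you can estimate the requirement for any file the same way (a rough sketch; actual usage varies with context size and settings):

```python
# Rough RAM estimate matching the table above: file size + ~2.5 GB overhead.
def max_ram_gb(file_size_gb: float, overhead_gb: float = 2.5) -> float:
    return file_size_gb + overhead_gb

print(max_ram_gb(19.57))  # q4_K_M -> ~22.07 GB, as in the table
```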

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 32 -m 30b-Lazarus.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```

Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
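
If you would rather drive the same model from Python, a rough llama-cpp-python equivalent of the command line above could look like the following sketch (parameter names follow llama-cpp-python's `Llama` API; `max_tokens` stands in for the open-ended `-n -1`):

```python
# Sketch: approximate llama-cpp-python equivalent of the ./main command above.
from llama_cpp import Llama

llm = Llama(
    model_path="30b-Lazarus.ggmlv3.q5_0.bin",
    n_ctx=2048,       # -c 2048
    n_threads=10,     # -t 10: set to your physical core count
    n_gpu_layers=32,  # -ngl 32: drop to 0 if you have no GPU acceleration
)
output = llm(
    "### Instruction: Write a story about llamas\n### Response:",
    max_tokens=512,      # -n -1 runs until done; use a finite cap here
    temperature=0.7,     # --temp 0.7
    repeat_penalty=1.1,  # --repeat_penalty 1.1
)
print(output["choices"][0]["text"])
```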

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.

Thank you to all my generous patrons and donators!

<!-- footer end -->

# Original model card: CalderaAI's 30B Lazarus

## 30B-Lazarus

## Composition:
[] = applied as LoRA to a composite model | () = combined as composite models

[SuperCOT([gpt4xalpaca(manticorechatpygalpha+vicunaunlocked)]+[StoryV2(kaiokendev-SuperHOT-LoRA-prototype30b-8192)])]

This model is the result of an experimental use of LoRAs on language models and model merges that are not the base HuggingFace-format LLaMA models they were intended for.
The desired outcome is to additively apply the desired features without paradoxically watering down a model's effective behaviour.

Potential limitations: LoRAs applied on top of each other may compete with one another.

Subjective results: very promising. Further experimental and objective tests are required.

Instruct and Setup Suggestions:

Alpaca instruct is primary; the Vicuna instruct format may also work.
If using KoboldAI or Text-Generation-WebUI, I recommend switching between the Godlike and Storywriter presets and adjusting output length plus the instructions in memory.
Other presets, as well as custom settings, can yield highly different results, especially Temperature.
If poking it with a stick doesn't work, try poking harder.

## Language Models and LoRAs Used Credits:

manticore-30b-chat-pyg-alpha [Epoch0.4] by openaccess-ai-collective

https://huggingface.co/openaccess-ai-collective/manticore-30b-chat-pyg-alpha

SuperCOT-LoRA [30B] by kaiokendev

https://huggingface.co/kaiokendev/SuperCOT-LoRA

Storytelling-LLaMa-LoRA [30B, Version 2] by GamerUntouch

https://huggingface.co/GamerUntouch/Storytelling-LLaMa-LoRAs

SuperHOT Prototype [30b 8k ctx] by kaiokendev

https://huggingface.co/kaiokendev/SuperHOT-LoRA-prototype

ChanSung's GPT4-Alpaca-LoRA
https://huggingface.co/chansung/gpt4-alpaca-lora-30b

Neko-Institute-of-Science's Vicuna Unlocked LoRA (Checkpoint 46080)
https://huggingface.co/Neko-Institute-of-Science/VicUnLocked-30b-LoRA

Also thanks to Meta for LLaMA.

Each model and LoRA was hand-picked and considered for what it could contribute to this ensemble.
Thanks to each and every one of you for your incredible work developing some of the best things to come out of this community.