Update README.md
README.md CHANGED
@@ -26,41 +26,28 @@ This model requires the following prompt template:
<|assistant|>:
```

- ## Provided files
- | Name | Quant method | Bits | Size | RAM required | Use case |
- | ---- | ---- | ---- | ---- | ---- | ----- |
- `OpenAssistant-30B-epoch7.ggml.q4_0.bin` | q4_0 | 4bit | 20.3GB | 23GB | Maximum compatibility |
- `OpenAssistant-30B-epoch7.ggml.q4_2.bin` | q4_2 | 4bit | 20.3GB | 23GB | Best compromise between resources, speed and quality |
- `OpenAssistant-30B-epoch7.ggml.q5_0.bin` | q5_0 | 5bit | 22.4GB | 25GB | Brand-new 5-bit method. Potentially higher quality than 4-bit, at the cost of slightly higher resource usage. |
- `OpenAssistant-30B-epoch7.ggml.q5_1.bin` | q5_1 | 5bit | 24.4GB | 27GB | Brand-new 5-bit method. Slightly higher resource usage than q5_0. |
-
- * The q4_0 file provides lower quality but maximal compatibility. It will work with past and future versions of llama.cpp.
- * The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
- * The q5_0 file uses the brand-new 5-bit method released 26th April. It is the 5-bit equivalent of q4_0.
- * The q5_1 file uses the brand-new 5-bit method released 26th April. It is the 5-bit equivalent of q4_1.
-
- ## q4_2 compatibility
-
- Don't expect any third-party UIs/tools to support them yet.

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
- ./main -t 18 -m OpenAssistant-30B-epoch7.ggml.
```

Change `-t 18` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
@@ -71,9 +58,9 @@ GGML models can be loaded into text-generation-webui by installing the llama.cpp
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

- Note: at this time text-generation-webui will not support the

- **Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) so that

# Original model card
<|assistant|>:
```

+ ## REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!
+
+ llama.cpp recently made a breaking change to its quantisation methods.
+
+ I have re-quantised the GGML files in this repo. Therefore you will require llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
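If you need to build a compatible version yourself, here is a minimal sketch (assuming `git` and `make` are available; any commit from May 12th 2023 onwards should work):

```
# Fetch llama.cpp and build it at a commit that includes the new quantisation formats
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b9fd7ee   # the May 12th 2023 commit, or omit this line to build the latest code
make
```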
+
+ The previous files, which will still work in older versions of llama.cpp, can be found in branch `previous_llama`.
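To grab those older files, cloning just that branch should work. Note the repository URL below is an illustrative assumption; substitute this repo's actual URL:

```
# Clone only the `previous_llama` branch (repo URL is an assumed placeholder)
# git-lfs is required to actually download the model files
git clone --single-branch --branch previous_llama \
  https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GGML
```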
+
+ ## Provided files
+ | Name | Quant method | Bits | Size | RAM required | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ `OpenAssistant-30B-epoch7.ggml.q4_0.bin` | q4_0 | 4bit | 20.3GB | 23GB | 4-bit. Smallest file and lowest RAM requirement of the three. |
+ `OpenAssistant-30B-epoch7.ggml.q5_0.bin` | q5_0 | 5bit | 22.4GB | 25GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
+ `OpenAssistant-30B-epoch7.ggml.q5_1.bin` | q5_1 | 5bit | 24.4GB | 27GB | 5-bit. Even higher accuracy, at the cost of higher resource usage and slower inference. |
## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
+ ./main -t 18 -m OpenAssistant-30B-epoch7.ggml.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|prompter|>Write a story about llamas <|assistant|>:"
```

Change `-t 18` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
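If you're not sure how many physical cores you have, these standard commands report it (`nproc` on Linux counts logical threads, not physical cores, so don't use that):

```
# Linux: physical cores = "Core(s) per socket" x "Socket(s)"
lscpu | grep -E '^(Core\(s\) per socket|Socket\(s\))'

# macOS: report the physical core count directly
sysctl -n hw.physicalcpu
```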
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

+ Note: at this time text-generation-webui will likely not support the newly updated llama.cpp quantisation methods.

+ **Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) so that you can get support for the new quantisation methods sooner.
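As a rough, hypothetical sketch of what that involves (text-generation-webui loads GGML files via the `llama-cpp-python` bindings; follow the guide above for the exact, current steps):

```
# From inside text-generation-webui's Python environment:
# upgrade the llama.cpp bindings so the newer quantisation formats can load
pip install --upgrade llama-cpp-python
```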
# Original model card