Update README.md
README.md CHANGED
@@ -14,8 +14,6 @@ llama.cpp recently made a breaking change to its quantisation methods.
 
 I have quantised the GGML files in this repo with the latest version.
 
-Therefore you will require llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
-
 ## How to run in `text-generation-webui`
 
 GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.
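
The context line about `text-generation-webui` amounts to pointing a llama.cpp-compatible loader at the GGML file. A minimal sketch, assuming the llama-cpp-python bindings (the library that backs text-generation-webui's llama.cpp loader) and an illustrative model filename, not an actual file from this repo:

```python
# Minimal sketch: load a GGML file directly with llama-cpp-python
# instead of going through the webui. The path below is illustrative.
from llama_cpp import Llama

llm = Llama(model_path="models/model.ggml.q4_0.bin")  # hypothetical filename
output = llm("Q: What does GGML quantisation do? A:", max_tokens=48)
print(output["choices"][0]["text"])
```

In the webui itself no code is needed: install the llama.cpp backend, drop the `.bin` file into the models folder, and select it from the model list as usual.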