Update README.md
README.md (CHANGED)
@@ -28,6 +28,13 @@ It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com
 * [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/guanaco-33B-GGML)
 * [Original unquantised fp16 model in HF format](https://huggingface.co/timdettmers/guanaco-33b-merged)
 
+## Prompt template
+
+```
+### Human: prompt
+### Assistant:
+```
+
 ## How to easily download and use this model in text-generation-webui
 
 Open the text-generation-webui UI as normal.
@@ -88,6 +95,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 
 Thank you to all my generous patrons and donaters!
 <!-- footer end -->
+
 # Original model card
 
 Not provided by original model creator.
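For reference, the prompt template added in this change is straightforward to apply in code. The snippet below is a minimal sketch, not part of the README change itself; the `build_prompt` helper name is hypothetical and used only to illustrate wrapping a user message in the `### Human:` / `### Assistant:` format.

```python
# Minimal sketch (illustrative assumption, not from the original model card)
# of wrapping a user message in the "### Human: / ### Assistant:" template
# documented by this README change.

def build_prompt(user_message: str) -> str:
    """Format a single-turn prompt using the Guanaco template above."""
    return f"### Human: {user_message}\n### Assistant:"


if __name__ == "__main__":
    # The formatted string is what would be passed to the model as input.
    print(build_prompt("Write a haiku about llamas."))
```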