TheBloke committed on
Commit 0429c09
1 Parent(s): cc997dc

Upload README.md

Files changed (1):
  1. README.md +5 -12
README.md CHANGED
@@ -91,15 +91,8 @@ Below is an instruction that describes a task. Write a response that appropriate
 ```
 
 <!-- prompt-template end -->
-<!-- licensing start -->
-## Licensing
 
-The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
 
-As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
-
-In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Austism's Chronos Wizardlm Uc Scot St 13B](https://huggingface.co/Austism/chronos-wizardlm-uc-scot-st-13b).
-<!-- licensing end -->
 <!-- compatibility_gguf start -->
 ## Compatibility
 
@@ -158,7 +151,7 @@ The following clients/libraries will automatically download models for you, prov
 
 ### In `text-generation-webui`
 
-Under Download Model, you can enter the model repo: TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF and below it, a specific filename to download, such as: chronos-wizardlm-uc-scot-st-13B.q4_K_M.gguf.
+Under Download Model, you can enter the model repo: TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF and below it, a specific filename to download, such as: chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf.
 
 Then click Download.
 
@@ -173,7 +166,7 @@ pip3 install huggingface-hub>=0.17.1
 Then you can download any individual model file to the current directory, at high speed, with a command like this:
 
 ```shell
-huggingface-cli download TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF chronos-wizardlm-uc-scot-st-13B.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+huggingface-cli download TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 <details>
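Note: for anyone scripting the download, `huggingface-cli` is a thin wrapper over the `huggingface_hub` Python library, so the same file can be fetched with `hf_hub_download`. A minimal sketch, assuming `huggingface-hub>=0.17.1` as installed above; repo ID and filename are taken verbatim from the hunk:

```python
# Minimal Python equivalent of the huggingface-cli command above.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF",
    filename="chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,  # mirrors --local-dir-use-symlinks False
)
print(model_path)  # path to the downloaded .gguf file
```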
@@ -196,7 +189,7 @@ pip3 install hf_transfer
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
 ```shell
-HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF chronos-wizardlm-uc-scot-st-13B.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
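Note: the variable also needs to be visible when downloading from Python directly; setting it before `huggingface_hub` is imported is the safe ordering. A sketch, assuming the same repo and file as above:

```python
# Sketch: enabling hf_transfer from Python instead of the shell.
# Set the variable before importing huggingface_hub so it is picked up.
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF",
    filename="chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf",
    local_dir=".",
)
```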
@@ -209,7 +202,7 @@ Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running
 Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
-./main -ngl 32 -m chronos-wizardlm-uc-scot-st-13B.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
+./main -ngl 32 -m chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
 ```
 
 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
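Note: the same invocation can be reproduced from Python with the `llama-cpp-python` bindings. The flag-to-parameter mapping below is an assumption, not part of the original README, and the instruction text is a hypothetical placeholder:

```python
# Hedged sketch of a llama-cpp-python equivalent of the ./main command above.
from llama_cpp import Llama

llm = Llama(
    model_path="chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf",
    n_ctx=4096,       # -c 4096
    n_gpu_layers=32,  # -ngl 32; use 0 without GPU acceleration
)

# Alpaca-style prompt template documented in this README;
# the instruction itself is a placeholder.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what GGUF is in one sentence.\n\n"
    "### Response:"
)

out = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(out["choices"][0]["text"])
```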
@@ -249,7 +242,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
 from ctransformers import AutoModelForCausalLM
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF", model_file="chronos-wizardlm-uc-scot-st-13B.q4_K_M.gguf", model_type="llama", gpu_layers=50)
+llm = AutoModelForCausalLM.from_pretrained("TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF", model_file="chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
 
 print(llm("AI is going to"))
 ```
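Note: the README's snippet sends a bare completion prompt; for instruction-following output, the documented prompt template should be applied first. A sketch combining the two, using the exact call from the hunk above and a placeholder instruction:

```python
# Sketch: the ctransformers call from this diff plus the Alpaca-style
# template the README documents. Instruction text is a hypothetical placeholder.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/chronos-wizardlm-uc-scot-st-13B-GGUF",
    model_file="chronos-wizardlm-uc-scot-st-13B.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,  # 0 if no GPU acceleration is available
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a two-line poem about rivers.\n\n"
    "### Response:"
)
print(llm(prompt))
```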
 