TheBloke committed
Commit 8d4ab53
1 Parent(s): 5593427

Upload README.md

Files changed (1)
  1. README.md +5 -12
README.md CHANGED
@@ -93,15 +93,8 @@ Below is an instruction that describes a task. Write a response that appropriate
  ```

  <!-- prompt-template end -->
- <!-- licensing start -->
- ## Licensing

- The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.

- As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
-
- In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Elinas' Chronos 33B](https://huggingface.co/elinas/chronos-33b).
- <!-- licensing end -->
  <!-- compatibility_gguf start -->
  ## Compatibility

@@ -160,7 +153,7 @@ The following clients/libraries will automatically download models for you, prov

  ### In `text-generation-webui`

- Under Download Model, you can enter the model repo: TheBloke/chronos-33b-GGUF and below it, a specific filename to download, such as: chronos-33b.q4_K_M.gguf.
+ Under Download Model, you can enter the model repo: TheBloke/chronos-33b-GGUF and below it, a specific filename to download, such as: chronos-33b.Q4_K_M.gguf.

  Then click Download.

@@ -175,7 +168,7 @@ pip3 install huggingface-hub>=0.17.1
  Then you can download any individual model file to the current directory, at high speed, with a command like this:

  ```shell
- huggingface-cli download TheBloke/chronos-33b-GGUF chronos-33b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ huggingface-cli download TheBloke/chronos-33b-GGUF chronos-33b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
  ```
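If you would rather script this step, the same file can be fetched with the `huggingface_hub` Python API; a minimal sketch, assuming the same repo and filename as the command above and the `pip3 install huggingface-hub>=0.17.1` line earlier:

```python
# Script equivalent of the huggingface-cli command above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/chronos-33b-GGUF",
    filename="chronos-33b.Q4_K_M.gguf",
    local_dir=".",                 # like --local-dir .
    local_dir_use_symlinks=False,  # like --local-dir-use-symlinks False
)
print(path)  # path to the downloaded file
```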

  <details>
@@ -198,7 +191,7 @@ pip3 install hf_transfer
  And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

  ```shell
- HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/chronos-33b-GGUF chronos-33b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/chronos-33b-GGUF chronos-33b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
  ```

  Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
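The accelerated path can also be driven from Python; a minimal sketch, the one subtlety being that `HF_HUB_ENABLE_HF_TRANSFER` must be set before `huggingface_hub` is imported:

```python
import os

# Must be set before huggingface_hub is imported, and hf_transfer must be
# installed (pip3 install hf_transfer, as above).
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/chronos-33b-GGUF",
    filename="chronos-33b.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
```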
@@ -211,7 +204,7 @@ Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running
  Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

  ```shell
- ./main -ngl 32 -m chronos-33b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
+ ./main -ngl 32 -m chronos-33b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
  ```

  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
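If you want the same invocation from Python rather than the `./main` binary, `llama-cpp-python` (a separate package, not otherwise covered by this README) exposes equivalent parameters; a minimal sketch with an example instruction filled into the prompt template:

```python
# Rough llama-cpp-python equivalent of the ./main command above
# (assumes: pip install llama-cpp-python, built with GPU support if offloading).
from llama_cpp import Llama

llm = Llama(
    model_path="chronos-33b.Q4_K_M.gguf",
    n_gpu_layers=32,  # like -ngl 32; use 0 if you have no GPU acceleration
    n_ctx=4096,       # like -c 4096
)

# Example instruction slotted into the Alpaca-style template from -p above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a story about llamas\n\n### Response:"
)

# max_tokens caps the output length, unlike -n -1 which generates until EOS.
output = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```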
@@ -251,7 +244,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
  from ctransformers import AutoModelForCausalLM

  # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
- llm = AutoModelForCausalLM.from_pretrained("TheBloke/chronos-33b-GGUF", model_file="chronos-33b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
+ llm = AutoModelForCausalLM.from_pretrained("TheBloke/chronos-33b-GGUF", model_file="chronos-33b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

  print(llm("AI is going to"))
  ```
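As a usage note, the `llm` object above can also stream tokens rather than return one string; a minimal sketch reusing `llm` from the previous block, with an example instruction in the model's prompt template:

```python
# Streaming variant of the ctransformers call above: stream=True makes the
# model yield text chunks as they are generated instead of a single string.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a story about llamas\n\n### Response:"
)

for chunk in llm(prompt, stream=True, max_new_tokens=256, temperature=0.7):
    print(chunk, end="", flush=True)
print()
```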
 