TheBloke committed on
Commit
2eb277e
1 Parent(s): 54b9a6d

Upload README.md

Files changed (1)
  1. README.md +69 -58
README.md CHANGED
@@ -3,8 +3,12 @@ inference: false
  language:
  - en
  license: other
  model_type: llama
  pipeline_tag: text-generation
  tags:
  - facebook
  - meta
@@ -14,49 +18,52 @@ tags:
  ---
 
  <!-- header start -->
- <div style="width: 100%;">
- <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
- <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
  </div>
  </div>
  <!-- header end -->
 
  # Llama 2 70B Chat - GGML
- - Model creator: [Meta](https://huggingface.co/meta-llama)
- - Original model: [Llama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat)
 
  ## Description
 
- This repo contains GGML format model files for [Meta's Llama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat).
 
- ## Only compatible with the latest llama.cpp
 
- To use these files you need:
 
- 1. llama.cpp as of [commit `e76d630`](https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb) or later.
-    - For users who don't want to compile from source, you can use the binaries from [release master-e76d630](https://github.com/ggerganov/llama.cpp/releases/tag/master-e76d630)
- 2. To add the new command line parameter `-gqa 8`
 
- Example command:
- ```
- /workspace/git/llama.cpp/main -m llama-2-70b-chat/ggml/llama-2-70b-chat.ggmlv3.q4_0.bin -gqa 8 -t 13 -p "[INST] <<SYS>>You are a helpful assistant<</SYS>>Write a story about llamas[/INST]"
- ```
 
- There is no CUDA support at this time, but it should be coming soon.
-
- There is no support in third-party UIs or Python libraries (llama-cpp-python, ctransformers) yet. That will come in due course.
 
  ## Repositories available
 
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ)
- * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU-only inference](https://huggingface.co/TheBloke/Llama-2-70B-chat-GGML)
- * [Meta's original unquantised fp16 model in PyTorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-70b-chat)
 
  ## Prompt template: Llama-2-Chat
 
@@ -65,16 +72,23 @@ There is no support in third-party UIs or Python libraries (llama-cpp-python, ct
  You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
  <</SYS>>
  {prompt}[/INST]
  ```
 
  <!-- compatibility_ggml start -->
  ## Compatibility
 
- ### Only compatible with llama.cpp as of commit `e76d630`
 
- Compatible with llama.cpp as of [commit `e76d630`](https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb) or later.
 
- For a pre-compiled release, use [release master-e76d630](https://github.com/ggerganov/llama.cpp/releases/tag/master-e76d630) or later.
 
  ## Explanation of the new k-quant methods
  <details>
@@ -93,56 +107,52 @@ Refer to the Provided Files table below to see what files use which methods, and
  <!-- compatibility_ggml end -->
 
  ## Provided files
  | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
  | llama-2-70b-chat.ggmlv3.q2_K.bin | q2_K | 2 | 28.59 GB | 31.09 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
- | llama-2-70b-chat.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 36.15 GB | 38.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
- | llama-2-70b-chat.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 33.04 GB | 35.54 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
  | llama-2-70b-chat.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 29.75 GB | 32.25 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
  | llama-2-70b-chat.ggmlv3.q4_0.bin | q4_0 | 4 | 38.87 GB | 41.37 GB | Original quant method, 4-bit. |
- | llama-2-70b-chat.ggmlv3.q4_1.bin | q4_1 | 4 | 43.17 GB | 45.67 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
- | llama-2-70b-chat.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 41.38 GB | 43.88 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
  | llama-2-70b-chat.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 38.87 GB | 41.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
  | llama-2-70b-chat.ggmlv3.q5_0.bin | q5_0 | 5 | 47.46 GB | 49.96 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
- | llama-2-70b-chat.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 48.75 GB | 51.25 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
  | llama-2-70b-chat.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 47.46 GB | 49.96 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
- | llama-2-70b-chat.ggmlv3.q6_K.bin | q6_K | 6 | 56.59 GB | 59.09 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
- | llama-2-70b-chat.ggmlv3.q8_0.bin | q8_0 | 8 | 73.23 GB | 75.73 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
 
- ### q6_K and q8_0 files require expansion from archive
 
- **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the q6_K and q8_0 files as multi-part ZIP files. They are not compressed; they are just for storing a .bin file in two parts.
 
- ### q6_K
- Please download:
- * `llama-2-70b-chat.ggmlv3.q6_K.zip`
- * `llama-2-70b-chat.ggmlv3.q6_K.z01`
 
- ### q8_0
- Please download:
- * `llama-2-70b-chat.ggmlv3.q8_0.zip`
- * `llama-2-70b-chat.ggmlv3.q8_0.z01`
 
- Then extract the .zip archive. This will expand both parts automatically. On Linux I found I had to use `7zip` - the basic `unzip` tool did not work. Example:
  ```
- sudo apt update -y && sudo apt install 7zip
- 7zz x llama-2-70b-chat.ggmlv3.q6_K.zip
  ```
 
- Once the `.bin` is extracted you can delete the `.zip` and `.z01` files.
 
- ## How to run in `llama.cpp`
 
- I use the following command line; adjust for your tastes and needs:
 
- ```
- ./main -m llama-2-70b-chat/ggml/llama-2-70b-chat.ggmlv3.q4_0.bin -gqa 8 -t 13 -p "[INST] <<SYS>>You are a helpful assistant<</SYS>>Write a story about llamas[/INST]"
- ```
- Change `-t 13` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
 
- No GPU support is possible yet, but it is coming soon.
 
  <!-- footer start -->
  ## Discord
 
  For further support, and discussions on these models and AI in general, join us at:
@@ -162,18 +172,21 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
 
- **Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse
 
  Thank you to all my generous patrons and donaters!
 
  <!-- footer end -->
 
- # Original model card: Meta's Llama 2 70B Chat
 
  # **Llama 2**
- Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B fine-tuned model, optimized for dialogue use cases. Links to other models can be found in the index at the bottom.
 
  ## Model Details
  *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
@@ -210,8 +223,6 @@ Meta developed and publicly released the Llama 2 family of large language models
  ## Intended Use
  **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
 
- To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
-
  **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
 
  ## Hardware and Software
 
@@ -3,8 +3,12 @@ inference: false
  language:
  - en
  license: other
+ model_creator: Meta Llama 2
+ model_link: https://huggingface.co/meta-llama/Llama-2-70b-chat-hf
+ model_name: Llama 2 70B Chat
  model_type: llama
  pipeline_tag: text-generation
+ quantized_by: TheBloke
  tags:
  - facebook
  - meta
@@ -14,49 +18,52 @@ tags:
  ---
 
  <!-- header start -->
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
  </div>
  </div>
+ <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+ <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
  <!-- header end -->
 
  # Llama 2 70B Chat - GGML
+ - Model creator: [Meta Llama 2](https://huggingface.co/meta-llama)
+ - Original model: [Llama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)
 
  ## Description
 
+ This repo contains GGML format model files for [Meta Llama 2's Llama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf).
 
+ ### Important note regarding GGML files
 
+ The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third-party clients and libraries are expected to still support it for a time, but many may also drop support.
 
+ Please use the GGUF models instead.
 
+ ### About GGML
 
+ GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NVidia) and Metal (macOS). The following clients/libraries are known to work with these files, including with GPU acceleration:
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp), commit `e76d630` and later.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), version 1.37 and later. A powerful GGML web UI, especially good for storytelling.
+ * [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration for both Windows and macOS. Use 0.1.11 or later for macOS GPU acceleration with 70B models.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), version 0.1.77 and later. A Python library with LangChain support and an OpenAI-compatible API server; see the sketch below.
+ * [ctransformers](https://github.com/marella/ctransformers), version 0.2.15 and later. A Python library with LangChain support and an OpenAI-compatible API server.
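
For illustration, here is a minimal Python sketch of loading one of these files with llama-cpp-python. Treat it as a sketch under stated assumptions, not a tested recipe: the parameter names are assumed from the 0.1.77-era llama-cpp-python API, and `n_gqa=8` mirrors the `-gqa 8` flag that these 70B files require.

```
# Hedged sketch: assumes llama-cpp-python >= 0.1.77, where n_gqa mirrors
# llama.cpp's -gqa flag (required for Llama 2 70B GGML files).
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-70b-chat.ggmlv3.q4_K_M.bin",  # any .bin from this repo
    n_gqa=8,      # grouped-query attention factor; required for the 70B model
    n_ctx=4096,   # this model's native context length
    n_threads=8,  # set to your number of physical CPU cores
)

prompt = "[INST] <<SYS>>\nYou are a helpful assistant.\n<</SYS>>\nWrite a story about llamas[/INST]"
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```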
 
  ## Repositories available
 
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-70B-chat-GGUF)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Llama-2-70B-chat-GGML)
+ * [Meta Llama 2's original unquantised fp16 model in PyTorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)
 
  ## Prompt template: Llama-2-Chat
 
@@ -65,16 +72,23 @@ There is no support in third-party UIs or Python libraries (llama-cpp-python, ct
  You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
  <</SYS>>
  {prompt}[/INST]
+
  ```
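
To apply this template programmatically, a small helper suffices. This is an illustrative sketch (the function name and the shortened default system message are our own); the layout follows the template above:

```
# Hypothetical helper that fills in the Llama-2-Chat layout shown above.
DEFAULT_SYSTEM = "You are a helpful, respectful and honest assistant."

def build_llama2_prompt(user_message: str, system: str = DEFAULT_SYSTEM) -> str:
    """Wrap a user message in the [INST] <<SYS>> ... <</SYS>> ... [/INST] layout."""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n{user_message}[/INST]"

print(build_llama2_prompt("Write a story about llamas"))
```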
 
  <!-- compatibility_ggml start -->
  ## Compatibility
 
+ ### Works with llama.cpp [commit `e76d630`](https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb) until August 21st, 2023
 
+ Will not work with `llama.cpp` after commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa).
 
+ For compatibility with the latest llama.cpp, please use GGUF files instead, or one of the other tools and libraries listed above.
+
+ To use these files in llama.cpp, you must add the `-gqa 8` argument.
+
+ For other UIs and libraries, please check their docs.
 
  ## Explanation of the new k-quant methods
  <details>
 
@@ -93,56 +107,52 @@ Refer to the Provided Files table below to see what files use which methods, and
  <!-- compatibility_ggml end -->
 
  ## Provided files
+
  | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
  | llama-2-70b-chat.ggmlv3.q2_K.bin | q2_K | 2 | 28.59 GB | 31.09 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
  | llama-2-70b-chat.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 29.75 GB | 32.25 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
+ | llama-2-70b-chat.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 33.04 GB | 35.54 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+ | llama-2-70b-chat.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 36.15 GB | 38.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
  | llama-2-70b-chat.ggmlv3.q4_0.bin | q4_0 | 4 | 38.87 GB | 41.37 GB | Original quant method, 4-bit. |
  | llama-2-70b-chat.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 38.87 GB | 41.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
+ | llama-2-70b-chat.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 41.38 GB | 43.88 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+ | llama-2-70b-chat.ggmlv3.q4_1.bin | q4_1 | 4 | 43.17 GB | 45.67 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
  | llama-2-70b-chat.ggmlv3.q5_0.bin | q5_0 | 5 | 47.46 GB | 49.96 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
  | llama-2-70b-chat.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 47.46 GB | 49.96 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
+ | llama-2-70b-chat.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 48.75 GB | 51.25 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
+
+ **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
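
One pattern worth noting: every "Max RAM required" figure in the table is the file size plus roughly 2.5 GB of overhead, so you can estimate requirements for any file directly:

```
# Rule of thumb read off the table above: max RAM ~= file size + 2.5 GB.
def max_ram_gb(file_size_gb: float) -> float:
    return file_size_gb + 2.5

print(max_ram_gb(38.87))  # q4_0 / q4_K_S row -> 41.37
```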
 
+ ## How to run in `llama.cpp`
 
+ Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier. For compatibility with the latest llama.cpp, please use GGUF files instead.
 
+ I use the following command line; adjust for your tastes and needs:
 
  ```
+ ./main -t 10 -ngl 40 -gqa 8 -m llama-2-70b-chat.ggmlv3.q4_K_M.bin --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\nWrite a story about llamas[/INST]"
  ```
 
+ Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`. If you are fully offloading the model to GPU, use `-t 1`.
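
If you are not sure how many physical cores your machine has, you can check from Python; this convenience sketch uses the third-party `psutil` package, which nothing in this repo otherwise requires:

```
import psutil  # third-party: pip install psutil

# llama.cpp's -t flag wants physical cores, not logical (hyper-threaded) CPUs.
physical_cores = psutil.cpu_count(logical=False)
print(f"Suggested flag: -t {physical_cores}")
```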
 
+ Change `-ngl 40` to the number of GPU layers you have VRAM for. Use `-ngl 100` to offload all layers to VRAM - if you have a 48GB card, or 2 x 24GB, or similar. Otherwise you can partially offload as many layers as you have VRAM for, on one or more GPUs.
 
+ If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
 
+ Remember the `-gqa 8` argument, which is required for Llama 70B models.
 
+ Change `-c 4096` to the desired sequence length for this model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
 
+ For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
 
+ ## How to run in `text-generation-webui`
 
+ Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
 
  <!-- footer start -->
+ <!-- 200823 -->
  ## Discord
 
  For further support, and discussions on these models and AI in general, join us at:
 
@@ -162,18 +172,21 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
+ **Special thanks to**: Aemon Algiz.
+
+ **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
 
  Thank you to all my generous patrons and donaters!
 
+ And thank you again to a16z for their generous grant.
+
  <!-- footer end -->
 
+ # Original model card: Meta Llama 2's Llama 2 70B Chat
 
  # **Llama 2**
+ Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
 
  ## Model Details
  *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
 
@@ -210,8 +223,6 @@ Meta developed and publicly released the Llama 2 family of large language models
  ## Intended Use
  **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
 
  **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
 
  ## Hardware and Software