---
inference: false
license: llama2
model_creator: Enrico Shippole
model_link: https://huggingface.co/conceptofmind/LLongMA-2-7b
model_name: LLongMA 2 7B
model_type: llama
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# LLongMA 2 7B - GGML
- Model creator: [Enrico Shippole](https://huggingface.co/conceptofmind)
- Original model: [LLongMA 2 7B](https://huggingface.co/conceptofmind/LLongMA-2-7b)

## Description

This repo contains GGML format model files for [ConceptofMind's LLongMA 2 7B](https://huggingface.co/conceptofmind/LLongMA-2-7b).

### Important note regarding GGML files

The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third-party clients and libraries are expected to still support it for a time, but many may also drop support.

Please use the GGUF models instead.

### About GGML

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD) and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/LLongMA-2-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LLongMA-2-7B-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/LLongMA-2-7B-GGML)
* [Enrico Shippole's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/conceptofmind/LLongMA-2-7b)

## Prompt template: None

```
{prompt}
```
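
Because there is no template, whatever you pass is sent to the model verbatim. A minimal sketch of how a client might apply this template (the helper name is illustrative, not part of any library):

```python
# "Prompt template: None" means the prompt is passed through unchanged.
template = "{prompt}"

def build_prompt(user_text):
    # No system message or role tags are added for this model.
    return template.format(prompt=user_text)

print(build_prompt("Write a story about llamas"))
# -> Write a story about llamas
```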

<!-- compatibility_ggml start -->
## Compatibility

These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.

For support with the latest llama.cpp, please use GGUF files instead.

The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)

As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.

## Explanation of the new k-quant methods

<details>

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
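
As a rough sanity check on the sizes in the table below: file size is approximately parameters × bits per weight ÷ 8, plus some overhead for block scales and metadata. A back-of-envelope sketch (the 6.74B parameter count for Llama 2 7B is an assumed figure, not stated in this card):

```python
# Back-of-envelope size estimate: pure weight storage only,
# ignoring block scales, mins, and file metadata.
def approx_size_gb(n_params, bits_per_weight):
    return n_params * bits_per_weight / 8 / 1e9

# Llama 2 7B has ~6.74B parameters (assumed figure).
# At 4 bits per weight this gives ~3.37 GB of raw weights; the
# actual q4_0 file is 3.79 GB because of quantisation overhead.
print(round(approx_size_gb(6.74e9, 4), 2))
```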

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llongma-2-7b.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/LLongMA-2-7B-GGML/blob/main/llongma-2-7b.ggmlv3.q2_K.bin) | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [llongma-2-7b.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/LLongMA-2-7B-GGML/blob/main/llongma-2-7b.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [llongma-2-7b.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/LLongMA-2-7B-GGML/blob/main/llongma-2-7b.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llongma-2-7b.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/LLongMA-2-7B-GGML/blob/main/llongma-2-7b.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [llongma-2-7b.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/LLongMA-2-7B-GGML/blob/main/llongma-2-7b.ggmlv3.q4_0.bin) | q4_0 | 4 | 3.79 GB | 6.29 GB | Original quant method, 4-bit. |
| [llongma-2-7b.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/LLongMA-2-7B-GGML/blob/main/llongma-2-7b.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [llongma-2-7b.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/LLongMA-2-7B-GGML/blob/main/llongma-2-7b.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [llongma-2-7b.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/LLongMA-2-7B-GGML/blob/main/llongma-2-7b.ggmlv3.q4_1.bin) | q4_1 | 4 | 4.21 GB | 6.71 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| [llongma-2-7b.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/LLongMA-2-7B-GGML/blob/main/llongma-2-7b.ggmlv3.q5_0.bin) | q5_0 | 5 | 4.63 GB | 7.13 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [llongma-2-7b.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/LLongMA-2-7B-GGML/blob/main/llongma-2-7b.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| [llongma-2-7b.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/LLongMA-2-7B-GGML/blob/main/llongma-2-7b.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [llongma-2-7b.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/LLongMA-2-7B-GGML/blob/main/llongma-2-7b.ggmlv3.q5_1.bin) | q5_1 | 5 | 5.06 GB | 7.56 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| [llongma-2-7b.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/LLongMA-2-7B-GGML/blob/main/llongma-2-7b.ggmlv3.q6_K.bin) | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| [llongma-2-7b.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/LLongMA-2-7B-GGML/blob/main/llongma-2-7b.ggmlv3.q8_0.bin) | q8_0 | 8 | 7.16 GB | 9.66 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
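
The RAM figures above can be turned into a simple file picker. A hypothetical helper (not part of this repo) that returns the highest-quality quant whose max-RAM figure fits, using the no-offload numbers from the table:

```python
# "Max RAM required" figures from the Provided files table (no GPU offload).
MAX_RAM_GB = {
    "q2_K": 5.37, "q3_K_S": 5.45, "q3_K_M": 5.78, "q3_K_L": 6.10,
    "q4_0": 6.29, "q4_K_S": 6.33, "q4_K_M": 6.58, "q4_1": 6.71,
    "q5_0": 7.13, "q5_K_S": 7.15, "q5_K_M": 7.28, "q5_1": 7.56,
    "q6_K": 8.03, "q8_0": 9.66,
}

def best_fit(available_ram_gb):
    """Largest quant whose max-RAM figure fits, or None if none do."""
    fitting = [(ram, name) for name, ram in MAX_RAM_GB.items()
               if ram <= available_ram_gb]
    return max(fitting)[1] if fitting else None

print(best_fit(8.0))  # q5_1 (q6_K needs 8.03 GB, just over)
print(best_fit(6.0))  # q3_K_M
```

Offloading layers to the GPU reduces these RAM numbers, so treat the table figures as a worst case.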

## How to run in `llama.cpp`

Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.

For compatibility with the latest llama.cpp, please use GGUF files instead.

```
./main -t 10 -ngl 32 -m llongma-2-7b.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Write a story about llamas"
```

Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 2048` to the desired sequence length for this model. For example, use `-c 4096` for a Llama 2 model. For models that use RoPE scaling, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
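
The `--rope-freq-scale` values above follow linear positional interpolation: the scale is the model's native context divided by the target context. A sketch of that arithmetic (the function name is illustrative; it is not a llama.cpp API):

```python
# Linear RoPE interpolation: positions are compressed by
# scale = native_ctx / target_ctx, so a scale of 0.5 doubles context.
def rope_freq_scale(native_ctx, target_ctx):
    if target_ctx <= native_ctx:
        return 1.0  # no scaling needed inside the native window
    return native_ctx / target_ctx

# Llama 2's native context is 4096 tokens:
print(rope_freq_scale(4096, 8192))   # 0.5  -> doubled context
print(rope_freq_scale(4096, 16384))  # 0.25 -> 4x context
```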

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: ConceptofMind's LLongMA 2 7B

LLongMA-2, a suite of Llama-2 models, trained at 8k context length using linear positional interpolation scaling. The model was trained in collaboration with Emozilla of NousResearch and Kaiokendev.

We worked directly with Kaiokendev to extend the context length of the Llama-2 7b model through fine-tuning. The models pass all our evaluations and maintain the same perplexity at 8k extrapolation, surpassing the performance of other recent methodologies.

All of the models can be found on Huggingface: https://huggingface.co/conceptofmind

You can find the Llama-2 usage policy here: https://ai.meta.com/llama/use-policy/
Llama 2 Community License Agreement

Llama 2 Version Release Date: July 18, 2023

“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.

“Documentation” means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.

“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.

“Llama 2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.

“Llama Materials” means, collectively, Meta’s proprietary Llama 2 and Documentation (and any portion thereof) made available under this Agreement.

“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).

By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.

1. License Rights and Redistribution.

a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.

b. Redistribution and Use.

i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party.

ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.

iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”

iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement.

v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof).

2. Additional Commercial Terms. If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.

3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.

4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.

5. Intellectual Property.

a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials.

b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.

c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.

6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.

7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.