raffr committed
Commit • fdbe513
Duplicate from localmodels/LLM
- .gitattributes +34 -0
- README.md +36 -0
- llama-13b.ggmlv3.q2_K.bin +3 -0
- llama-13b.ggmlv3.q3_K_L.bin +3 -0
- llama-13b.ggmlv3.q3_K_M.bin +3 -0
- llama-13b.ggmlv3.q3_K_S.bin +3 -0
- llama-13b.ggmlv3.q4_0.bin +3 -0
- llama-13b.ggmlv3.q4_1.bin +3 -0
- llama-13b.ggmlv3.q4_K_M.bin +3 -0
- llama-13b.ggmlv3.q4_K_S.bin +3 -0
- llama-13b.ggmlv3.q5_0.bin +3 -0
- llama-13b.ggmlv3.q5_1.bin +3 -0
- llama-13b.ggmlv3.q5_K_M.bin +3 -0
- llama-13b.ggmlv3.q5_K_S.bin +3 -0
- llama-13b.ggmlv3.q6_K.bin +3 -0
- llama-13b.ggmlv3.q8_0.bin +3 -0
.gitattributes
ADDED
@@ -0,0 +1,34 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
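These filter rules route every matching file, including the `*.bin` weights added below, through Git LFS, so the repository itself only stores small pointer stubs. As a minimal sketch (not part of this commit), the actual payload can be pulled from the Hub with `huggingface_hub`; the `repo_id` below is a hypothetical placeholder for wherever this duplicate is hosted:

```python
# Sketch: fetch one of the LFS-backed GGML weight files from the Hub.
# Assumes `pip install huggingface_hub`; repo_id is a hypothetical placeholder.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="your-namespace/llama-13b-ggml",  # placeholder, not the real repo id
    filename="llama-13b.ggmlv3.q4_K_M.bin",   # one of the files added in this commit
)
print(local_path)  # local cache path to the ~7.8 GB file
```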
README.md
ADDED
@@ -0,0 +1,36 @@
+---
+duplicated_from: localmodels/LLM
+---
+# LLaMA 13B ggml
+
+From Meta: https://ai.meta.com/blog/large-language-model-llama-meta-ai
+
+---
+
+### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
+
+Quantized using an older version of llama.cpp; compatible with llama.cpp from May 19, commit 2d5db48.
+
+### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q5_K_M, q6_K`
+
+Quantization methods compatible with llama.cpp as of June 6, commit 2d43387.
+
+---
+
+## Provided files
+| Name | Quant method | Bits | Size | Max RAM required | Use case |
+| ---- | ---- | ---- | ---- | ---- | ----- |
+| llama-13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.43 GB | 7.93 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
+| llama-13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.87 GB | 9.37 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
+| llama-13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.25 GB | 8.75 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
+| llama-13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.59 GB | 8.09 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
+| llama-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original quant method, 4-bit. |
+| llama-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0; however, it has quicker inference than the q5 models. |
+| llama-13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.82 GB | 10.32 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
+| llama-13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.32 GB | 9.82 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
+| llama-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage, and slower inference. |
+| llama-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
+| llama-13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.21 GB | 11.71 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
+| llama-13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.95 GB | 11.45 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
+| llama-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K (6-bit quantization) for all tensors. |
+| llama-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
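As context for the compatibility notes and the table above, here is a minimal inference sketch, assuming the `llama-cpp-python` bindings built against a mid-2023 llama.cpp that still loads GGMLv3 files (current releases only accept GGUF). It is illustrative only and not part of this commit:

```python
# Sketch: run one of the provided GGMLv3 quantizations via llama-cpp-python.
# Assumes an older llama-cpp-python release with GGMLv3 support (pre-GGUF).
from llama_cpp import Llama

llm = Llama(
    model_path="llama-13b.ggmlv3.q4_K_M.bin",  # ~7.82 GB file, ~10.32 GB RAM per the table
    n_ctx=2048,                                # LLaMA context window
)

out = llm("Building a website can be done in 10 simple steps:", max_tokens=64)
print(out["choices"][0]["text"])
```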
llama-13b.ggmlv3.q2_K.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:197d1ae91304925d3751e0e14ab5ed2e2aaa3e9ff41a229e6c057e26f67df94a
+size 5427881088
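Each `.bin` entry in this diff is only the three-line LFS pointer shown above, not the weights themselves. A quick sketch for checking whether a local copy is still a pointer stub (it tests for the `version https://git-lfs.github.com/spec/v1` header that every pointer begins with):

```python
# Sketch: distinguish a Git LFS pointer stub from the actual multi-GB weight file.
LFS_HEADER = b"version https://git-lfs.github.com/spec/v1"

def is_lfs_pointer(path: str) -> bool:
    """Return True if the file is an un-fetched LFS pointer rather than real data."""
    with open(path, "rb") as f:
        return f.read(len(LFS_HEADER)) == LFS_HEADER

print(is_lfs_pointer("llama-13b.ggmlv3.q2_K.bin"))
```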
llama-13b.ggmlv3.q3_K_L.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0508b5948968c06ecd623758232cd4223d28ed5be60f3a90b985c88fabaa5eb5
+size 6865269888
llama-13b.ggmlv3.q3_K_M.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:591285d3f70f581464960b6d8e5be2f775fd32726e204f07c8eeb353c11c81e2
+size 6249231488
llama-13b.ggmlv3.q3_K_S.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9834f27b41ba9dfc8cb3018359fa779330a2f168ac1085d6704fe6b04ce84e1b
+size 5594690688
llama-13b.ggmlv3.q4_0.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fad169e6f0f575402cf75945961cb4a8ecd824ba4da6be2af831f320c4348fa5
+size 7323305088
llama-13b.ggmlv3.q4_1.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:56a3a18060251210796362a5c4e3e199cb46b8c3b481ae9389b0ca717c498cb4
+size 8136770688
llama-13b.ggmlv3.q4_K_M.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:436d0bad831791ef7e2d4289afe2205a609004c18499879dd914ef6c02edf756
+size 7823426688
llama-13b.ggmlv3.q4_K_S.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1fef214a15d9e5bcf976b5885b35b8beeb5e58fe347767792a9ffb512f8521d
+size 7323305088
llama-13b.ggmlv3.q5_0.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11c14c64ec3476bda003ef60f2ba8bf223082f9a0cf66e94cd8cecd76dc96da8
+size 8950236288
llama-13b.ggmlv3.q5_1.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11ee79f6009ecdb157ac7f9801aa8c77f857fc9819ec4c00c0164507a79a7117
+size 9763701888
llama-13b.ggmlv3.q5_K_M.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:875ea41b2943d5f84d69a0a3df1c126d941028ad82ca3fc80a8b8ab2e4191d6c
+size 9207874688
llama-13b.ggmlv3.q5_K_S.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c1e537ce12b4edf29380f2ed271476d4d7f99adbc3802b76ee6c4b4455e6165e
+size 8950236288
llama-13b.ggmlv3.q6_K.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b95a4f91fa07c77e166186ed74d829df64c4a963f4a94ab50145201456fff28
+size 10678850688
llama-13b.ggmlv3.q8_0.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15d424ebd8e9f3355bc5dcef5da36856ac9672caefdb64e1b0fe3da424c3f08c
+size 13831029888