legraphista committed on
Commit 8066183 • Parent(s): f917518

Upload README.md with huggingface_hub

Files changed (1): README.md (+118, -0)

README.md ADDED

---
base_model: CohereForAI/aya-23-8B
inference: false
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
library_name: gguf
license: cc-by-nc-4.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
---

# aya-23-8B-IMat-GGUF
_Llama.cpp imatrix quantization of CohereForAI/aya-23-8B_

Original Model: [CohereForAI/aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B)
Original dtype: `FP16` (`float16`)
Quantized by: llama.cpp [b2998](https://github.com/ggerganov/llama.cpp/releases/tag/b2998)
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)

## Files

### IMatrix
Status: ⏳ Processing
Link: [here](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/imatrix.dat)

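For context, an importance matrix like the one above is generated with llama.cpp's `imatrix` tool and then fed to its `quantize` tool for the imatrix-aware quants below. A rough sketch of the two steps (file names are illustrative; this is not necessarily the exact invocation used for this repo):
```
# compute the importance matrix from the calibration data
./imatrix -m aya-23-8B.FP16.gguf -f imatrix.calibration.medium.raw -o imatrix.dat
# quantize with the importance matrix applied (example: Q4_K)
./quantize --imatrix imatrix.dat aya-23-8B.FP16.gguf aya-23-8B.Q4_K.gguf Q4_K
```
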
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| aya-23-8B.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ No | - |
| aya-23-8B.Q6_K | Q6_K | - | ⏳ Processing | ⚪ No | - |
| aya-23-8B.Q4_K | Q4_K | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.Q3_K | Q3_K | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.Q2_K | Q2_K | - | ⏳ Processing | 🟢 Yes | - |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| aya-23-8B.FP16 | F16 | - | ⏳ Processing | ⚪ No | - |
| aya-23-8B.Q5_K | Q5_K | - | ⏳ Processing | ⚪ No | - |
| aya-23-8B.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ No | - |
| aya-23-8B.Q4_K_S | Q4_K_S | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-8B.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 Yes | - |

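Once a quant has finished processing and you've downloaded it (see the next section), it can be sanity-checked directly with llama.cpp. A minimal sketch, assuming the `main` binary from a build of the era noted above (newer llama.cpp releases rename it to `llama-cli`) and an illustrative model path:
```
# generate 64 tokens from a short prompt as a smoke test
./main -m aya-23-8B.Q8_0.gguf -p "Hello" -n 64
```
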
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download legraphista/aya-23-8B-IMat-GGUF --include "aya-23-8B.Q8_0.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/aya-23-8B-IMat-GGUF --include "aya-23-8B.Q8_0/*" --local-dir aya-23-8B.Q8_0
# see FAQ for merging GGUFs
```

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), only the lower-bit quantizations appear to benefit from the imatrix input (as measured by HellaSwag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
    - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
    - Download the appropriate zip for your system from the latest release
    - Unzip the archive; you should find `gguf-split` inside
2. Locate your GGUF chunks folder (ex: `aya-23-8B.Q8_0`)
3. Run `gguf-split --merge aya-23-8B.Q8_0/aya-23-8B.Q8_0-00001-of-XXXXX.gguf aya-23-8B.Q8_0.gguf`
    - Make sure to point `gguf-split` to the first chunk of the split (see the combined example below).

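Putting steps 1-3 together, a combined download-and-merge run looks roughly like this (the `XXXXX` chunk suffix is a placeholder for the actual chunk count):
```
# download all chunks of the split quant
huggingface-cli download legraphista/aya-23-8B-IMat-GGUF --include "aya-23-8B.Q8_0/*" --local-dir aya-23-8B.Q8_0
# merge, pointing gguf-split at the first chunk
gguf-split --merge aya-23-8B.Q8_0/aya-23-8B.Q8_0-00001-of-XXXXX.gguf aya-23-8B.Q8_0.gguf
```
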
---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!