legraphista committed
Commit b92292b
1 Parent(s): 317fdef

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +153 -0
---
base_model: Qwen/Qwen2-57B-A14B-Instruct
inference: false
language:
- en
library_name: gguf
license: apache-2.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- chat
- quantized
- GGUF
- quantization
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
---

# Qwen2-57B-A14B-Instruct-GGUF
_Llama.cpp static quantization of Qwen/Qwen2-57B-A14B-Instruct_

Original Model: [Qwen/Qwen2-57B-A14B-Instruct](https://huggingface.co/Qwen/Qwen2-57B-A14B-Instruct)
Original dtype: `BF16` (`bfloat16`)
Quantized by: [llama.cpp](https://github.com/ggerganov/llama.cpp/tree/master)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)

- [Files](#files)
  - [Common Quants](#common-quants)
  - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
  - [Simple chat template](#simple-chat-template)
  - [Chat template with system prompt](#chat-template-with-system-prompt)
  - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
  - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
  - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---
+
47
+ ## Files
48
+
49
+
50
+
51
+ ### Common Quants
52
+ | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
53
+ | -------- | ---------- | --------- | ------ | ------------ | -------- |
54
+ | Qwen2-57B-A14B-Instruct.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | -
55
+ | Qwen2-57B-A14B-Instruct.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | -
56
+ | Qwen2-57B-A14B-Instruct.Q4_K | Q4_K | - | ⏳ Processing | ⚪ Static | -
57
+ | Qwen2-57B-A14B-Instruct.Q3_K | Q3_K | - | ⏳ Processing | ⚪ Static | -
58
+ | Qwen2-57B-A14B-Instruct.Q2_K | Q2_K | - | ⏳ Processing | ⚪ Static | -
59
+
60
+
61
+ ### All Quants
62
+ | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
63
+ | -------- | ---------- | --------- | ------ | ------------ | -------- |
64
+ | Qwen2-57B-A14B-Instruct.BF16 | BF16 | - | ⏳ Processing | ⚪ Static | -
65
+ | Qwen2-57B-A14B-Instruct.FP16 | F16 | - | ⏳ Processing | ⚪ Static | -
66
+ | Qwen2-57B-A14B-Instruct.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | -
67
+ | Qwen2-57B-A14B-Instruct.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | -
68
+ | Qwen2-57B-A14B-Instruct.Q5_K | Q5_K | - | ⏳ Processing | ⚪ Static | -
69
+ | Qwen2-57B-A14B-Instruct.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ Static | -
70
+ | Qwen2-57B-A14B-Instruct.Q4_K | Q4_K | - | ⏳ Processing | ⚪ Static | -
71
+ | Qwen2-57B-A14B-Instruct.Q4_K_S | Q4_K_S | - | ⏳ Processing | ⚪ Static | -
72
+ | Qwen2-57B-A14B-Instruct.IQ4_NL | IQ4_NL | - | ⏳ Processing | ⚪ Static | -
73
+ | Qwen2-57B-A14B-Instruct.IQ4_XS | IQ4_XS | - | ⏳ Processing | ⚪ Static | -
74
+ | Qwen2-57B-A14B-Instruct.Q3_K | Q3_K | - | ⏳ Processing | ⚪ Static | -
75
+ | Qwen2-57B-A14B-Instruct.Q3_K_L | Q3_K_L | - | ⏳ Processing | ⚪ Static | -
76
+ | Qwen2-57B-A14B-Instruct.Q3_K_S | Q3_K_S | - | ⏳ Processing | ⚪ Static | -
77
+ | Qwen2-57B-A14B-Instruct.IQ3_M | IQ3_M | - | ⏳ Processing | ⚪ Static | -
78
+ | Qwen2-57B-A14B-Instruct.IQ3_S | IQ3_S | - | ⏳ Processing | ⚪ Static | -
79
+ | Qwen2-57B-A14B-Instruct.IQ3_XS | IQ3_XS | - | ⏳ Processing | ⚪ Static | -
80
+ | Qwen2-57B-A14B-Instruct.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | ⚪ Static | -
81
+ | Qwen2-57B-A14B-Instruct.Q2_K | Q2_K | - | ⏳ Processing | ⚪ Static | -
82
+ | Qwen2-57B-A14B-Instruct.IQ2_M | IQ2_M | - | ⏳ Processing | ⚪ Static | -
83
+
84
+
## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
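To confirm the CLI is available (a quick sanity check, not part of the original steps):
```
huggingface-cli env  # prints the installed huggingface_hub version and environment details
```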
Download the specific file you want:
```
huggingface-cli download legraphista/Qwen2-57B-A14B-Instruct-GGUF --include "Qwen2-57B-A14B-Instruct.Q8_0.gguf" --local-dir ./
```
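The same pattern works for any filename in the tables above; for example, grabbing a smaller quant instead (Q4_K is just an illustrative pick):
```
huggingface-cli download legraphista/Qwen2-57B-A14B-Instruct-GGUF --include "Qwen2-57B-A14B-Instruct.Q4_K.gguf" --local-dir ./
```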
If the model file is big, it has been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/Qwen2-57B-A14B-Instruct-GGUF --include "Qwen2-57B-A14B-Instruct.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```

---

## Inference

### Simple chat template
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>

```
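
For instance, with concrete values substituted for the placeholders (the question here is hypothetical), the prompt for a first turn ends right after the assistant header so the model generates the reply:
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Why is the sky blue?<|im_end|>
<|im_start|>assistant
```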

### Chat template with system prompt
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>

```

### Llama.cpp
```
llama.cpp/main -m Qwen2-57B-A14B-Instruct.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
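
A fuller invocation might also set the context size and, on a GPU-enabled build, offload layers; the flag values below are illustrative, and `-e` tells `main` to interpret the `\n` escapes in the prompt:
```
llama.cpp/main -m Qwen2-57B-A14B-Instruct.Q8_0.gguf --color -i \
  -c 4096 -ngl 28 -e \
  -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nWhy is the sky blue?<|im_end|>\n<|im_start|>assistant\n"
```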

---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per HellaSwag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
    - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
    - Download the appropriate zip for your system from the latest release
    - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Qwen2-57B-A14B-Instruct.Q8_0`)
3. Run `gguf-split --merge Qwen2-57B-A14B-Instruct.Q8_0/Qwen2-57B-A14B-Instruct.Q8_0-00001-of-XXXXX.gguf Qwen2-57B-A14B-Instruct.Q8_0.gguf`
    - Make sure to point `gguf-split` to the first chunk of the split (see the worked example below)
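
Concretely, merging a split Q8_0 might look like this (a sketch; the shell glob resolves the first chunk so you don't need to know the total chunk count `XXXXX`):
```
# merge the downloaded chunks back into a single GGUF
gguf-split --merge \
  Qwen2-57B-A14B-Instruct.Q8_0/Qwen2-57B-A14B-Instruct.Q8_0-00001-of-*.gguf \
  Qwen2-57B-A14B-Instruct.Q8_0.gguf
```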

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!