v2ray committed
Commit 8719866
1 Parent(s): 62d3ed0

Fixed README.md and uploaded the conversion script.

Files changed (2)
  1. README.md +2 -7
  2. convert.py +278 -0
README.md CHANGED
@@ -10,7 +10,7 @@ tags:
 - moe
 ---
 # Model Card for Mixtral-8x22B
-Converted to HuggingFace Transformers format using the script [here]().
+Converted to HuggingFace Transformers format using the script [here](https://huggingface.co/v2ray/Mixtral-8x22B-v0.1/blob/main/convert.py).
 
 The Mixtral-8x22B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.
 ## Run the model
@@ -28,13 +28,9 @@ inputs = tokenizer(text, return_tensors="pt")
 outputs = model.generate(**inputs, max_new_tokens=20)
 print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ```
-
 By default, transformers will load the model in full precision. Therefore you might be interested to further reduce down the memory requirements to run the model through the optimizations we offer in HF ecosystem:
-
 ### In half-precision
-
 Note `float16` precision only works on GPU devices
-
 <details>
 <summary> Click to expand </summary>
 
@@ -56,7 +52,6 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 </details>
 
 ### Lower precision using (8-bit & 4-bit) using `bitsandbytes`
-
 <details>
 <summary> Click to expand </summary>
 
@@ -78,7 +73,6 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 </details>
 
 ### Load the model with Flash Attention 2
-
 <details>
 <summary> Click to expand </summary>
 
@@ -98,6 +92,7 @@ outputs = model.generate(**inputs, max_new_tokens=20)
 print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ```
 </details>
+
 ## Notice
 Mixtral-8x22B-v0.1 is a pretrained base model and therefore does not have any moderation mechanisms.
 # The Mistral AI Team
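The collapsed "Click to expand" sections referenced in the hunks above are untouched by this commit. For context, a minimal sketch of the half-precision loading they describe, assuming this repo's id (`v2ray/Mixtral-8x22B-v0.1`) and the standard transformers API:

```py
# Hedged sketch: half-precision loading of the converted checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v2ray/Mixtral-8x22B-v0.1"  # assumed repo id; a local path works too
tokenizer = AutoTokenizer.from_pretrained(model_id)
# float16 only makes sense on GPU, as the README notes; device_map="auto" spreads
# the experts across whatever GPUs (and CPU offload) are available.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Hello my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The 8-bit/4-bit and Flash Attention 2 variants in the other collapsed sections follow the same pattern, swapping in `quantization_config=BitsAndBytesConfig(load_in_4bit=True)` or `attn_implementation="flash_attention_2"` in `from_pretrained`.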
convert.py ADDED
@@ -0,0 +1,278 @@
# Copyright 2023 Mistral AI and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import json
import os

import torch
from safetensors.torch import load_file

from transformers import (
    MixtralConfig,
    MixtralForCausalLM,
)

"""
Sample usage:

```
python convert.py \
    --input-dir /path/to/downloaded/mixtral/weights --model-size 7B --output-dir /output/path
```

Thereafter, models can be loaded via:

```py
from transformers import MixtralForCausalLM

model = MixtralForCausalLM.from_pretrained("/output/path")
```

Important note: you need to be able to host the whole model in RAM to execute this script (even if the biggest versions
come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM).
"""

def compute_intermediate_size(n, ffn_dim_multiplier=1, multiple_of=256):
    return multiple_of * ((int(ffn_dim_multiplier * int(8 * n / 3)) + multiple_of - 1) // multiple_of)

def read_json(path):
    with open(path, "r") as f:
        return json.load(f)

def write_json(text, path):
    with open(path, "w") as f:
        json.dump(text, f)

def write_model(model_path, input_base_path, model_size, safe_serialization=True):
    os.makedirs(model_path, exist_ok=True)

    params = read_json(os.path.join(input_base_path, "params.json"))
    num_shards = 1

    # For some reason this is a string in the params.json
    sliding_window = int(params["sliding_window"]) if "sliding_window" in params else None
    base = params.get("rope_theta", 10000.0)
    vocab_size = params["vocab_size"]

    if model_size == "7B":
        dim = params["hidden_size"]
        max_position_embeddings = 4096 * 8
        num_local_experts = params["num_local_experts"]
        ffn_dim = params["intermediate_size"]
        n_layers = params["num_hidden_layers"]
        n_heads = params["num_attention_heads"]
        n_heads_per_shard = n_heads // num_shards
        dims_per_head = dim // n_heads
        if "num_key_value_heads" in params:
            num_key_value_heads = params["num_key_value_heads"]  # for GQA / MQA
            num_local_key_value_heads = num_key_value_heads // num_shards
            key_value_dim = dims_per_head * num_local_key_value_heads
        else:  # compatibility with other checkpoints
            num_key_value_heads = n_heads
            num_local_key_value_heads = n_heads_per_shard
            key_value_dim = dim
        rms_norm_eps = params["rms_norm_eps"]
    elif model_size == "22B":
        dim = params["dim"]
        max_position_embeddings = params["max_seq_len"]
        num_local_experts = params["moe"]["num_experts"]
        ffn_dim = params["hidden_dim"]
        n_layers = params["n_layers"]
        n_heads = params["n_heads"]
        n_heads_per_shard = n_heads // num_shards
        dims_per_head = dim // n_heads
        if "n_kv_heads" in params:
            num_key_value_heads = params["n_kv_heads"]  # for GQA / MQA
            num_local_key_value_heads = num_key_value_heads // num_shards
            key_value_dim = dims_per_head * num_local_key_value_heads
        else:  # compatibility with other checkpoints
            num_key_value_heads = n_heads
            num_local_key_value_heads = n_heads_per_shard
            key_value_dim = dim
        rms_norm_eps = params["norm_eps"]
    else:
        raise ValueError(f"Illegal model size: {model_size}")

    # permute for sliced rotary
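    # The raw checkpoint stores each head's rotary feature pairs interleaved (output rows
    # 2*i and 2*i + 1 form a pair), whereas transformers applies RoPE via rotate_half(),
    # which pairs row j with row j + head_dim // 2 within a head. Regrouping the output
    # rows of wq/wk below makes both layouts produce identical attention.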
    def permute(w, n_heads=n_heads, dim1=dim, dim2=dim):
        return w.view(n_heads, dim1 // n_heads // 2, 2, dim2).transpose(1, 2).reshape(dim1, dim2)

    print(f"Fetching all parameters from the checkpoint at \"{input_base_path}\"...")
    # Load weights
    if model_size == "7B":
        loaded = [
            torch.load(os.path.join(input_base_path, f"consolidated.{i:02d}.pt"), map_location="cpu") for i in range(8)
        ]
        merged_state_dict = {}
        for state_dict in loaded:
            merged_state_dict.update(state_dict)
    elif model_size == "22B":
        merged_state_dict = load_file(os.path.join(input_base_path, "consolidated.safetensors"))
    print("Parameters load finished.")

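    # Remap raw parameter names to the transformers Mixtral layout, layer by layer:
    #   layers.N.attention.w{q,k,v,o}              -> model.layers.N.self_attn.{q,k,v,o}_proj
    #   layers.N.attention_norm / layers.N.ffn_norm -> model.layers.N.input_layernorm / post_attention_layernorm
    #   per-expert FFN weights (fused block_sparse_moe.w{1,2,3} for 7B, feed_forward.experts.E.w{1,2,3} for 22B)
    #                                              -> model.layers.N.block_sparse_moe.experts.E.w{1,2,3}
    #   tok_embeddings / norm / output             -> model.embed_tokens / model.norm / lm_head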
    state_dict = {}
    for layer_i in range(n_layers):
        print(f"At layer {layer_i}...")
        # Sharded
        # Note that attention.w{q,k,v,o}, feed_forward.w[1,2,3], attention_norm.weight and ffn_norm.weight share
        # the same storage object, saving attention_norm and ffn_norm will save other weights too, which is
        # redundant as other weights will be stitched from multiple shards. To avoid that, they are cloned.

        state_dict.update(
            {
                f"model.layers.{layer_i}.input_layernorm.weight": merged_state_dict[
                    f"layers.{layer_i}.attention_norm.weight"
                ].clone(),
                f"model.layers.{layer_i}.post_attention_layernorm.weight": merged_state_dict[
                    f"layers.{layer_i}.ffn_norm.weight"
                ].clone(),
            }
        )

        state_dict[f"model.layers.{layer_i}.self_attn.q_proj.weight"] = permute(
            merged_state_dict[f"layers.{layer_i}.attention.wq.weight"]
            .view(n_heads_per_shard, dims_per_head, dim)
            .reshape(dim, dim)
        )
        state_dict[f"model.layers.{layer_i}.self_attn.k_proj.weight"] = permute(
            merged_state_dict[f"layers.{layer_i}.attention.wk.weight"]
            .view(num_local_key_value_heads, dims_per_head, dim)
            .reshape(key_value_dim, dim),
            num_key_value_heads,
            key_value_dim,
            dim,
        )
        state_dict[f"model.layers.{layer_i}.self_attn.v_proj.weight"] = (
            merged_state_dict[f"layers.{layer_i}.attention.wv.weight"]
            .view(num_local_key_value_heads, dims_per_head, dim)
            .reshape(key_value_dim, dim)
        )

        state_dict[f"model.layers.{layer_i}.self_attn.o_proj.weight"] = merged_state_dict[
            f"layers.{layer_i}.attention.wo.weight"
        ]

        if model_size == "7B":
            w1 = merged_state_dict[f"layers.{layer_i}.block_sparse_moe.w1"]
            w2 = merged_state_dict[f"layers.{layer_i}.block_sparse_moe.w2"]
            w3 = merged_state_dict[f"layers.{layer_i}.block_sparse_moe.w3"]

            experts_w1 = [
                w1[ffn_dim * expert_idx : ffn_dim * (expert_idx + 1), :].contiguous().clone()
                for expert_idx in range(num_local_experts)
            ]

            for idx, expert_block in enumerate(experts_w1):
                expert_key = f"model.layers.{layer_i}.block_sparse_moe.experts.{idx}.w1"
                state_dict[expert_key + ".weight"] = expert_block.clone()

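            # The fused 7B tensors stack all experts along dim 0 in blocks of ffn_dim rows.
            # w1/w3 slices already match transformers' gate/up projection weights (ffn_dim, hidden_dim);
            # w2 is the down projection, so each (ffn_dim, hidden_dim) slice is transposed to the
            # (hidden_dim, ffn_dim) weight shape that nn.Linear(ffn_dim, hidden_dim) expects.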
            experts_w2 = [
                w2[ffn_dim * expert_idx : ffn_dim * (expert_idx + 1), :].contiguous().clone()
                for expert_idx in range(num_local_experts)
            ]

            for idx, expert_block in enumerate(experts_w2):
                expert_key = f"model.layers.{layer_i}.block_sparse_moe.experts.{idx}.w2"
                state_dict[expert_key + ".weight"] = expert_block.T.clone().contiguous()

            experts_w3 = [
                w3[ffn_dim * expert_idx : ffn_dim * (expert_idx + 1), :].contiguous().clone()
                for expert_idx in range(num_local_experts)
            ]

            for idx, expert_block in enumerate(experts_w3):
                expert_key = f"model.layers.{layer_i}.block_sparse_moe.experts.{idx}.w3"
                state_dict[expert_key + ".weight"] = expert_block.clone()

            state_dict[f"model.layers.{layer_i}.block_sparse_moe.gate.weight"] = merged_state_dict[
                f"layers.{layer_i}.block_sparse_moe.gate.weight"
            ]
        elif model_size == "22B":
            for expert_i in range(num_local_experts):
                w1 = merged_state_dict[f"layers.{layer_i}.feed_forward.experts.{expert_i}.w1.weight"]
                w2 = merged_state_dict[f"layers.{layer_i}.feed_forward.experts.{expert_i}.w2.weight"]
                w3 = merged_state_dict[f"layers.{layer_i}.feed_forward.experts.{expert_i}.w3.weight"]
                state_dict[f"model.layers.{layer_i}.block_sparse_moe.experts.{expert_i}.w1.weight"] = w1.contiguous().clone()
                state_dict[f"model.layers.{layer_i}.block_sparse_moe.experts.{expert_i}.w2.weight"] = w2.contiguous().clone()
                state_dict[f"model.layers.{layer_i}.block_sparse_moe.experts.{expert_i}.w3.weight"] = w3.contiguous().clone()
            state_dict[f"model.layers.{layer_i}.block_sparse_moe.gate.weight"] = merged_state_dict[
                f"layers.{layer_i}.feed_forward.gate.weight"
            ]

    state_dict.update(
        {
            "model.norm.weight": merged_state_dict["norm.weight"],
            "model.embed_tokens.weight": merged_state_dict["tok_embeddings.weight"],
            "lm_head.weight": merged_state_dict["output.weight"],
        }
    )

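    # Assemble the matching MixtralConfig. The 22B params.json nests MoE settings under
    # "moe", so the router top-k is forwarded explicitly; the 7B path relies on the
    # MixtralConfig default of 2 experts per token.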
    config_additional_kwargs = {}
    if model_size == "22B":
        config_additional_kwargs["num_experts_per_tok"] = params["moe"]["num_experts_per_tok"]
    config = MixtralConfig(
        hidden_size=dim,
        intermediate_size=ffn_dim,
        num_attention_heads=n_heads,
        num_hidden_layers=n_layers,
        rms_norm_eps=rms_norm_eps,
        num_key_value_heads=num_key_value_heads,
        vocab_size=vocab_size,
        rope_theta=base,
        max_position_embeddings=max_position_embeddings,
        sliding_window=sliding_window,
        num_local_experts=num_local_experts,
        **config_additional_kwargs
    )

    print("Loading the checkpoint in a Mixtral model.")
    with torch.device("meta"):
        model = MixtralForCausalLM(config)
    # Avoid saving this as part of the config.
    del model.config._name_or_path
    model.config.torch_dtype = torch.bfloat16
    print("Saving in the Transformers format.")

    model.load_state_dict(state_dict, strict=True, assign=True)

    for n, p in model.named_parameters():
        assert p.device.type != "meta", f"{n} has not been loaded!"

    model.save_pretrained(model_path, safe_serialization=safe_serialization)

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--input-dir",
        help="Location of Mixtral weights, which contains tokenizer.model and model folders",
        required=True,
    )
    parser.add_argument(
        "--model-size",
        choices=["7B", "22B"],
        help="Size of the Mixtral model to convert: 7B for Mixtral-8x7B checkpoints, 22B for the Mixtral-8x22B consolidated checkpoint.",
        default="7B",
    )
    parser.add_argument("--output-dir", help="Location to write HF model", required=True)
    parser.add_argument("--safe-serialization", type=bool, default=True, help="Whether or not to save using `safetensors`.")
    args = parser.parse_args()
    write_model(
        model_path=args.output_dir,
        input_base_path=args.input_dir,
        model_size=args.model_size,
        safe_serialization=args.safe_serialization,
    )

if __name__ == "__main__":
    main()
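For reference, a lightweight way to sanity-check a conversion without loading the full model into memory is to compare a few tensors in the converted safetensors shards against the raw consolidated checkpoint. Everything below is a hypothetical illustration: the paths are placeholders, and the shard file names come from whatever `save_pretrained` emitted.

```py
# Hypothetical post-conversion spot check; adjust paths to your own --input-dir / --output-dir.
import json
import os

import torch
from safetensors.torch import load_file

raw_dir = "/path/to/raw/Mixtral-8x22B"  # the --input-dir passed to convert.py
hf_dir = "./Mixtral-8x22B-v0.1-hf"      # the --output-dir passed to convert.py

raw = load_file(os.path.join(raw_dir, "consolidated.safetensors"))

# save_pretrained writes an index mapping each parameter name to its shard file.
with open(os.path.join(hf_dir, "model.safetensors.index.json")) as f:
    weight_map = json.load(f)["weight_map"]

def hf_tensor(name):
    return load_file(os.path.join(hf_dir, weight_map[name]))[name]

# Embeddings and the expert weights are copied without permutation for the 22B path,
# so these should match the raw tensors exactly.
assert torch.equal(hf_tensor("model.embed_tokens.weight"), raw["tok_embeddings.weight"])
assert torch.equal(
    hf_tensor("model.layers.0.block_sparse_moe.experts.0.w2.weight"),
    raw["layers.0.feed_forward.experts.0.w2.weight"],
)
print("Spot check passed.")
```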