nm-research committed on
Commit 4280e50
1 Parent(s): 43f7fd6

Upload folder using huggingface_hub

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. README.md +186 -0
  2. config.json +68 -0
  3. generation_config.json +9 -0
  4. model-00001-of-00048.safetensors +3 -0
  5. model-00002-of-00048.safetensors +3 -0
  6. model-00003-of-00048.safetensors +3 -0
  7. model-00004-of-00048.safetensors +3 -0
  8. model-00005-of-00048.safetensors +3 -0
  9. model-00006-of-00048.safetensors +3 -0
  10. model-00007-of-00048.safetensors +3 -0
  11. model-00008-of-00048.safetensors +3 -0
  12. model-00009-of-00048.safetensors +3 -0
  13. model-00010-of-00048.safetensors +3 -0
  14. model-00011-of-00048.safetensors +3 -0
  15. model-00012-of-00048.safetensors +3 -0
  16. model-00013-of-00048.safetensors +3 -0
  17. model-00014-of-00048.safetensors +3 -0
  18. model-00015-of-00048.safetensors +3 -0
  19. model-00016-of-00048.safetensors +3 -0
  20. model-00017-of-00048.safetensors +3 -0
  21. model-00018-of-00048.safetensors +3 -0
  22. model-00019-of-00048.safetensors +3 -0
  23. model-00020-of-00048.safetensors +3 -0
  24. model-00021-of-00048.safetensors +3 -0
  25. model-00022-of-00048.safetensors +3 -0
  26. model-00023-of-00048.safetensors +3 -0
  27. model-00024-of-00048.safetensors +3 -0
  28. model-00025-of-00048.safetensors +3 -0
  29. model-00026-of-00048.safetensors +3 -0
  30. model-00027-of-00048.safetensors +3 -0
  31. model-00028-of-00048.safetensors +3 -0
  32. model-00029-of-00048.safetensors +3 -0
  33. model-00030-of-00048.safetensors +3 -0
  34. model-00031-of-00048.safetensors +3 -0
  35. model-00032-of-00048.safetensors +3 -0
  36. model-00033-of-00048.safetensors +3 -0
  37. model-00034-of-00048.safetensors +3 -0
  38. model-00035-of-00048.safetensors +3 -0
  39. model-00036-of-00048.safetensors +3 -0
  40. model-00037-of-00048.safetensors +3 -0
  41. model-00038-of-00048.safetensors +3 -0
  42. model-00039-of-00048.safetensors +3 -0
  43. model-00040-of-00048.safetensors +3 -0
  44. model-00041-of-00048.safetensors +3 -0
  45. model-00042-of-00048.safetensors +3 -0
  46. model-00043-of-00048.safetensors +3 -0
  47. model-00044-of-00048.safetensors +3 -0
  48. model-00045-of-00048.safetensors +3 -0
  49. model-00046-of-00048.safetensors +3 -0
  50. model-00047-of-00048.safetensors +3 -0
README.md ADDED
@@ -0,0 +1,186 @@
---
tags:
- fp8
- vllm
license: other
license_name: deepseek-license
license_link: https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/LICENSE-MODEL
---

# DeepSeek-Coder-V2-Instruct-FP8

## Model Overview
- **Model Architecture:** DeepSeek-Coder-V2-Instruct
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Intended Use Cases:** Intended for commercial and research use in English. Similarly to [Meta-Llama-3-7B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-7B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/22/2024
- **Version:** 1.0
- **License(s):** [deepseek-license](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/LICENSE-MODEL)
- **Model Developers:** Neural Magic

Quantized version of [DeepSeek-Coder-V2-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct).
<!-- It achieves an average score of 73.19 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 73.48. -->
It achieves an average score of 88.98 on the [HumanEval+](https://github.com/openai/human-eval?tab=readme-ov-file) benchmark, whereas the unquantized model achieves 87.63.

### Model Optimizations

This model was obtained by quantizing the weights and activations of [DeepSeek-Coder-V2-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct) to the FP8 data type, ready for inference with vLLM >= 0.5.2.
This optimization reduces the number of bits per parameter from 16 to 8, reducing disk size and GPU memory requirements by approximately 50%. In particular, this model can now be loaded and evaluated with only 4xH100 GPUs, as opposed to 8.

Only the weights and activations of the linear operators within transformer blocks are quantized. Symmetric per-tensor quantization is applied, in which a single linear scale maps the weights and activations of each quantized operator to their FP8 representations.
[AutoFP8](https://github.com/neuralmagic/AutoFP8) is used for quantization with 512 sequences of UltraChat.
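For intuition, the sketch below shows symmetric per-tensor FP8 (E4M3) quantization in isolation. It is illustrative only: AutoFP8 computes and stores these scales internally, and the E4M3 maximum of 448 is taken from `torch.finfo`.

```python
import torch

def quantize_per_tensor_fp8(x: torch.Tensor):
    """Symmetric per-tensor FP8 (E4M3) quantization: one linear scale for the whole tensor."""
    fp8_max = torch.finfo(torch.float8_e4m3fn).max       # 448.0 for E4M3
    scale = x.abs().amax().float() / fp8_max             # single per-tensor scale
    x_fp8 = (x.float() / scale).clamp(-fp8_max, fp8_max).to(torch.float8_e4m3fn)
    return x_fp8, scale

w = torch.randn(1024, 1024, dtype=torch.float16)
w_fp8, w_scale = quantize_per_tensor_fp8(w)
w_dequant = w_fp8.float() * w_scale                      # approximate reconstruction
print((w.float() - w_dequant).abs().max())               # small quantization error
```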

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# 4-way tensor parallelism, matching the 4xH100 deployment described above
max_model_len, tp_size = 4096, 4
model_name = "neuralmagic/DeepSeek-Coder-V2-Instruct-FP8"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True, enforce_eager=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])

messages_list = [
    [{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
]

# Apply the chat template to each conversation before generation
prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
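As a rough sketch (the entrypoint, flags, and default port 8000 follow the vLLM documentation; the prompt is only an example), the server can be launched and then queried with the standard OpenAI client:

```python
# Launch the OpenAI-compatible server first (shell):
#   python -m vllm.entrypoints.openai.api_server \
#       --model neuralmagic/DeepSeek-Coder-V2-Instruct-FP8 \
#       --tensor-parallel-size 4 --max-model-len 4096 --trust-remote-code

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="neuralmagic/DeepSeek-Coder-V2-Instruct-FP8",
    messages=[{"role": "user", "content": "Write a Python function that checks whether a number is prime."}],
    temperature=0.3,
    max_tokens=256,
)
print(response.choices[0].message.content)
```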

## Creation

This model was created by applying [AutoFP8 with calibration samples from ultrachat](https://github.com/neuralmagic/AutoFP8/blob/147fa4d9e1a90ef8a93f96fc7d9c33056ddc017a/example_dataset.py), with expert gates kept at their original precision, as presented in the code snippet below.
Notably, a custom device map had to be used, as the model was otherwise loaded incorrectly.
Although AutoFP8 was used for this particular model, Neural Magic is transitioning to [llm-compressor](https://github.com/vllm-project/llm-compressor), which supports several quantization schemes and models not supported by AutoFP8.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig

pretrained_model_dir = "deepseek-ai/DeepSeek-Coder-V2-Instruct"
quantized_model_dir = "DeepSeek-Coder-V2-Instruct-FP8"

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True, model_max_length=4096)
tokenizer.pad_token = tokenizer.eos_token

# 512 calibration sequences from UltraChat
ds = load_dataset("mgoin/ultrachat_2k", split="train_sft").select(range(512))
examples = [tokenizer.apply_chat_template(batch["messages"], tokenize=False) for batch in ds]
examples = tokenizer(examples, padding=True, truncation=True, return_tensors="pt").to("cuda")

quantize_config = BaseQuantizeConfig(
    quant_method="fp8",
    activation_scheme="static",        # static (per-tensor) activation scales
    ignore_patterns=["re:.*lm_head"],  # keep the lm_head unquantized
)

# Custom device map: spread the 60 transformer layers across 8 GPUs
device_map = {
    "model.embed_tokens": 0,
    "model.layers.0": 0,
}
for i in range(1, 60):
    device_map[f"model.layers.{i}"] = i // 8

device_map["model.norm"] = 7
device_map["lm_head"] = 7

model = AutoFP8ForCausalLM.from_pretrained(
    pretrained_model_dir, quantize_config=quantize_config, device_map=device_map
)
model.quantize(examples)
model.save_quantized(quantized_model_dir)
```

## Evaluation

The model was evaluated on the [HumanEval+](https://github.com/openai/human-eval?tab=readme-ov-file) benchmark with the [Neural Magic fork](https://github.com/neuralmagic/evalplus) of the [EvalPlus implementation of HumanEval+](https://github.com/evalplus/evalplus) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following commands:
```
python codegen/generate.py --model neuralmagic/DeepSeek-Coder-V2-Instruct-FP8 --temperature 0.2 --n_samples 50 --resume --root ~ --dataset humaneval
python evalplus/sanitize.py ~/humaneval/neuralmagic--DeepSeek-Coder-V2-Instruct-FP8_vllm_temp_0.2
evalplus.evaluate --dataset humaneval --samples ~/humaneval/neuralmagic--DeepSeek-Coder-V2-Instruct-FP8_vllm_temp_0.2-sanitized
```

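For reference, the pass@1 and pass@10 values below follow the standard unbiased pass@k estimator from the HumanEval paper; EvalPlus performs this calculation internally, and the sample counts in this sketch are purely illustrative.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n samples generated per problem, c of them passing."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# e.g. with --n_samples 50 as above and 30 passing samples for one problem:
print(pass_at_k(n=50, c=30, k=1))   # 0.6
print(pass_at_k(n=50, c=30, k=10))  # close to 1.0
```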
### Accuracy

#### HumanEval+ evaluation scores

| Benchmark | DeepSeek-Coder-V2-Instruct | DeepSeek-Coder-V2-Instruct-FP8 (this model) | Recovery |
|---|---|---|---|
| base pass@1 | 88.2 | 87.6 | 99.32% |
| base pass@10 | 92.3 | 94.7 | 102.60% |
| base+extra pass@1 | 83.3 | 83.2 | 99.88% |
| base+extra pass@10 | 86.7 | 90.4 | 104.27% |
| **Average** | **87.63** | **88.98** | **101.5%** |
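Recovery is the FP8 score expressed as a percentage of the unquantized baseline, for example:

```python
baseline = {"base pass@1": 88.2, "base pass@10": 92.3, "base+extra pass@1": 83.3, "base+extra pass@10": 86.7}
fp8      = {"base pass@1": 87.6, "base pass@10": 94.7, "base+extra pass@1": 83.2, "base+extra pass@10": 90.4}

for name in baseline:
    print(f"{name}: {100 * fp8[name] / baseline[name]:.2f}%")  # base pass@1 -> 99.32%, etc.
```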
config.json ADDED
@@ -0,0 +1,68 @@
{
  "_name_or_path": "deepseek-ai/DeepSeek-Coder-V2-Instruct",
  "architectures": [
    "DeepseekV2ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "auto_map": {
    "AutoConfig": "deepseek-ai/DeepSeek-Coder-V2-Instruct--configuration_deepseek.DeepseekV2Config",
    "AutoModel": "deepseek-ai/DeepSeek-Coder-V2-Instruct--modeling_deepseek.DeepseekV2Model",
    "AutoModelForCausalLM": "deepseek-ai/DeepSeek-Coder-V2-Instruct--modeling_deepseek.DeepseekV2ForCausalLM"
  },
  "aux_loss_alpha": 0.001,
  "bos_token_id": 100000,
  "eos_token_id": 100001,
  "ep_size": 1,
  "first_k_dense_replace": 1,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 12288,
  "kv_lora_rank": 512,
  "max_position_embeddings": 163840,
  "model_type": "deepseek_v2",
  "moe_intermediate_size": 1536,
  "moe_layer_freq": 1,
  "n_group": 8,
  "n_routed_experts": 160,
  "n_shared_experts": 2,
  "norm_topk_prob": false,
  "num_attention_heads": 128,
  "num_experts_per_tok": 6,
  "num_hidden_layers": 60,
  "num_key_value_heads": 128,
  "pretraining_tp": 1,
  "q_lora_rank": 1536,
  "qk_nope_head_dim": 128,
  "qk_rope_head_dim": 64,
  "quantization_config": {
    "activation_scheme": "static",
    "ignored_layers": [
      "lm_head"
    ],
    "quant_method": "fp8"
  },
  "rms_norm_eps": 1e-06,
  "rope_scaling": {
    "beta_fast": 32,
    "beta_slow": 1,
    "factor": 40,
    "mscale": 1.0,
    "mscale_all_dim": 1.0,
    "original_max_position_embeddings": 4096,
    "type": "yarn"
  },
  "rope_theta": 10000,
  "routed_scaling_factor": 16.0,
  "scoring_func": "softmax",
  "seq_aux": true,
  "tie_word_embeddings": false,
  "topk_group": 3,
  "topk_method": "group_limited_greedy",
  "torch_dtype": "bfloat16",
  "transformers_version": "4.42.4",
  "use_cache": true,
  "v_head_dim": 128,
  "vocab_size": 102400
}
generation_config.json ADDED
@@ -0,0 +1,9 @@
{
  "_from_model_config": true,
  "bos_token_id": 100000,
  "do_sample": true,
  "eos_token_id": 100001,
  "temperature": 0.3,
  "top_p": 0.95,
  "transformers_version": "4.42.4"
}
model-00001-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8000e9949b4fd962ddbbb4eb3557eb0d04df40316ca4158b83cc34689890e631
size 4996259528
model-00002-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:afc58e048af69b21ccfaf196807266d68ba44616934aa74fe42f1571b2b98ed9
size 4996987640
model-00003-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:64fd5150522ae8c39273c109536d51d1f62857699942e07db90829b4e7d8c780
size 4995526672
model-00004-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3ade4c7a3483b6905e30117258a396367564951c1036795a4aeaa03358c46290
size 4995526744
model-00005-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:97e620157534c9850258ce5fb5418f9462e04a3354211238468e38a6cba2adeb
size 4995527056
model-00006-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c5d847d51f72bfa006b33a96ba231b3dfd0f19104fc66bb4968cb05743ea6d07
size 4996987592
model-00007-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ddb0e51292e95d1a59ad4aefcef962147c87802421fcf10a551b4afa898e00b5
size 4995526672
model-00008-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8a73e074ca5b57ccac8d2ff37b3b0f2457692feba684a46413b4e4711a1cd9aa
size 4995527816
model-00009-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e66ca1b356a24133cba9cbbd61044e1388814faf8b5c9ec4f8ce01f070d59bff
size 4995528912
model-00010-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e13ec513877839ff246492556f8b077704d8cbdba49d245d0abc00a5184927ca
size 4996989368
model-00011-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:02a1f43fa8c8f14a41b62a107a1ca40ce40a3558a71198416d00a460cb6b8b4c
size 4995528528
model-00012-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4ceeeeebdbf0333b112f8a346442da49312084c62a72f520765aa88c315749b7
size 4995528696
model-00013-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b4daa7499e1465313cceaae358458f40bca747e7047fba3763a90e9998d9f695
size 4989302048
model-00014-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5c7ae17d52b41f004331c54e1445127d7e3853b995fd6a982274ad0a658efd75
size 4995351504
model-00015-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dde785a9b394a2d0842add9d869a59bf53c71909b2138b398061f0bfa259e5da
size 4995528528
model-00016-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:15768a4063205ab4af23981e4f09bd76733e209d92ca051dcfb94290722e50ea
size 4995528736
model-00017-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7a0dac72f27ac03e8d2b90ff9fbbbb9eb66dc9624711f29d5fd858462ec5017e
size 4960291384
model-00018-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f0f42c1ad70f04521ea015fad81204a8a5a2c790b3dffa801a03635fc7bf7dda
size 4992903432
model-00019-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5451aa06f5079146ef7d2fd3dc398d7424e5d6d112899a2b5d2699ec0cbc77cc
size 4995528528
model-00020-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eac491013508877af3d44a49679c5694f1fcb1ffc435304bd40f5e31995980b3
size 4995528768
model-00021-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:127dce81a6ac7b9cf80f6b9b66b272aa0ec4d08fbef32d4d0daaf2cdbc2e1ba7
size 4996989712
model-00022-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4e8157545e78a9078234a5b6bc4c19b84cbee2c57d1c8aca93e1930157449589
size 4995528448
model-00023-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9682308a785e38dc5ad45eee0236afcf4c951df198ef57d18403dc9c3c6276fb
size 4995528528
model-00024-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e5afc314499d2375d9557115fd0b8ed5d0be388554d25fd9c47ce08ff555b548
size 4995528816
model-00025-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cd1151986e1faec0a4bdb084f8356d9bc9133659b5027e5f00d4e46bb1a678d3
size 4996989624
model-00026-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:10149a53f3ab21d3ffa46a78fb50326bb65172959ad38efbf1eddb7805113eaa
size 4995528488
model-00027-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c86e9ce223632a20fdebd45f74531a4d2b255857cd79e9dc7166f21cd48cb5d6
size 4995528528
model-00028-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:95e32b9b816ef5943908c7f2d6513dae5d8b2945ab874cacbaae9e9945d2b888
size 4995528856
model-00029-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a3df0e4ea67306a6b593044f58496f70ae4f988fd3b4b61145d407866c1d5fe6
size 4996989544
model-00030-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:775a5d2221149e7da8132a9a2cbc7550311f4f9bf5aeeeb477d9e8b099934ac5
size 4995528528
model-00031-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:06679cf3f9f2f3ed58e48ceb3d2d0116365e132b5e2d9214eb914a2f2ea5c638
size 4995528528
model-00032-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ae73b4fabdced0fddbd7d9d0fae2fa432b55d4e70341b6369f2629dca5100c7f
size 4995528904
model-00033-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:171827c7f4793c100641c235cd8966cd779e11a84a4bba29b3c9270e089e0cd0
size 4996989504
model-00034-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:35cb341b2f86495fd8c8dc67c9e7705efc8ede904fecc61167003c9a6d532329
size 4995528528
model-00035-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2e0e5e76f179390df5276a495a0cd07a840d0b65e92942bef511b75984cd7acf
size 4995528560
model-00036-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b721899f02e1a8261204467642c89d84c066b82fde1b954c8ffbe6094e390d6e
size 4995528920
model-00037-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8b642144cd419a92f0c8aa43ab0443be6fecb698b41ac32f44d083ec7a548b0c
size 4996989456
model-00038-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:246973a6c25d2554c474343dab2b7845d7e617a3d5524734fcf504680b716c53
size 4995528528
model-00039-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f6876fcd793fe16c644034221084b4c4f7248f9c2e15edaf4abaf4845442384
size 4995528600
model-00040-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:764fe67f93eb623e33704f17ca6e215e230954472d2bf981e9e85095e09c4226
size 4995528920
model-00041-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8a80098037868115f9917e9e6769a4ccda8cd4e707522aa7fd2ed6e1244a04b0
size 4996989408
model-00042-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:86e8a05d26e3c17e9e23f0354b28919722ac5432e45a91cb7c605235ff4e1f4f
size 4995528528
model-00043-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c98a7db0721abc363a4091a3a3101278bae647a82ca0ab968272bee24e0b3deb
size 4995528648
model-00044-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a0046833ff80ef2fcb27d99c7d274205beba3bd4892a1d7217385085e5477601
size 4995528920
model-00045-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:79df5fbd6b911d35cdcfb92e869b2203232c3826c19c8868f620997990e3e8b6
size 4996989368
model-00046-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c38c6a890754e4a7aad78186548a31a382d3f2f1cdaa4101cba2de3f5e7b60c1
size 4995528528
model-00047-of-00048.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4c9da3ec73db485e9353ce6d2deda806716b30e63f006761eacec995fccabbe0
size 4995528696