Shaleen123 committed on
Commit 0e7bd13
1 Parent(s): a164e13

Upload folder using huggingface_hub
README.md CHANGED
@@ -1,187 +1,47 @@
- # mergekit
-
- `mergekit` is a toolkit for merging pre-trained language models. `mergekit` uses an out-of-core approach to perform unreasonably elaborate merges in resource-constrained situations. Merges can be run entirely on CPU or accelerated with as little as 8 GB of VRAM. Many merging algorithms are supported, with more coming as they catch my attention.
-
- Features:
-
- - Supports Llama, Mistral, GPT-NeoX, StableLM, and more
- - Many [merge methods](#merge-methods)
- - GPU or CPU execution
- - Lazy loading of tensors for low memory use
- - Interpolated gradients for parameter values (inspired by Gryphe's [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) script)
- - Piecewise assembly of language models from layers ("Frankenmerging")
-
- 🔊 Call to Evolve - to solve evolutionary merge methods as a community - please see https://github.com/arcee-ai/mergekit/issues/207.
- ## Installation
-
- ```sh
- git clone https://github.com/cg123/mergekit.git
- cd mergekit
-
- pip install -e .  # install the package and make scripts available
- ```
-
- If the above fails with an error like:
-
- ```
- ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode:
- (A "pyproject.toml" file was found, but editable mode currently requires a setuptools-based build.)
- ```
-
- you may need to upgrade pip to a version newer than 21.3: `python3 -m pip install --upgrade pip`.
- ## Usage
-
- The script `mergekit-yaml` is the main entry point for `mergekit`. It takes a YAML configuration file and an output path, like so:
-
- ```sh
- mergekit-yaml path/to/your/config.yml ./output-model-directory [--cuda] [--lazy-unpickle] [--allow-crimes] [... other options]
- ```
-
- This will run the merge and write your merged model to `./output-model-directory`.
-
- For more information on the arguments accepted by `mergekit-yaml`, run `mergekit-yaml --help`.
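For reference, a minimal configuration for a two-model linear merge might look like this (the model names are hypothetical placeholders, not from this repository):

```yaml
# Sketch of a linear merge; "your-org/model-a" and "your-org/model-b" are placeholders.
models:
  - model: your-org/model-a
    parameters:
      weight: 0.6
  - model: your-org/model-b
    parameters:
      weight: 0.4
merge_method: linear
dtype: float16
```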
- ### Uploading to Hugging Face
-
- When you have a merged model you're happy with, you may want to share it on the Hugging Face Hub. `mergekit` generates a `README.md` for your merge with some basic information for a model card. You can edit it to include more details about your merge, such as giving it a good name or explaining what it's good at; rewrite it entirely; or use the generated `README.md` as-is. It is also possible to edit your `README.md` online once it has been uploaded to the Hub.
-
- Once you're happy with your model card and merged model, you can upload it to the Hugging Face Hub using the [huggingface_hub](https://huggingface.co/docs/huggingface_hub/index) Python library.
-
- ```sh
- # log in to huggingface with an access token (must have write permission)
- huggingface-cli login
- # upload your model
- huggingface-cli upload your_hf_username/my-cool-model ./output-model-directory .
- ```
-
- The [documentation](https://huggingface.co/docs/huggingface_hub/guides/cli#huggingface-cli-upload) for `huggingface_hub` goes into more detail about other options for uploading.
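The same upload can also be scripted from Python; a minimal sketch using `huggingface_hub` (the repository name is a placeholder, and you must already be authenticated, e.g. via `huggingface-cli login`):

```python
# Sketch of uploading a merged model from Python; the repo id is a placeholder.
from huggingface_hub import HfApi

api = HfApi()
api.create_repo("your_hf_username/my-cool-model", exist_ok=True)
api.upload_folder(
    repo_id="your_hf_username/my-cool-model",
    folder_path="./output-model-directory",
)
```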
- ## Merge Configuration
-
- Merge configurations are YAML documents specifying the operations to perform in order to produce your merged model.
- Below are the primary elements of a configuration file:
-
- - `merge_method`: Specifies the method to use for merging models. See [Merge Methods](#merge-methods) for a list.
- - `slices`: Defines slices of layers from different models to be used. This field is mutually exclusive with `models`.
- - `models`: Defines entire models to be used for merging. This field is mutually exclusive with `slices`.
- - `base_model`: Specifies the base model used in some merging methods.
- - `parameters`: Holds various parameters such as weights and densities, which can also be specified at different levels of the configuration.
- - `dtype`: Specifies the data type used for the merging operation.
- - `tokenizer_source`: Determines how to construct a tokenizer for the merged model.
- ### Parameter Specification
-
- Parameters are flexible and can be set with varying precedence. They can be specified conditionally using tensor name filters, which allows finer control, such as differentiating between attention heads and fully connected layers.
-
- Parameters can be specified as:
-
- - **Scalars**: single floating-point values.
- - **Gradients**: lists of floating-point values, specifying an interpolated gradient.
-
- The parameters can be set at different levels, with decreasing precedence as follows (an example follows the list):
-
- 1. `slices.*.sources.parameters` - applying to a specific input slice
- 2. `slices.*.parameters` - applying to a specific output slice
- 3. `models.*.parameters` or `input_model_parameters` - applying to any tensors coming from specific input models
- 4. `parameters` - catchall
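As a sketch of how these levels interact (the models and values are hypothetical), a per-model `weight` overrides the catchall:

```yaml
# Hypothetical illustration of parameter precedence.
models:
  - model: your-org/model-a
    parameters:
      weight: 0.7   # level 3: applies to tensors coming from model-a
  - model: your-org/model-b
merge_method: linear
parameters:
  weight: 0.5       # level 4: catchall, used for model-b here
dtype: float16
```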
- ### Tokenizer Source
-
- The `tokenizer_source` field of a configuration file determines what tokenizer is used by the merged model. This also affects how embeddings and language model heads are merged.
-
- This functionality is still experimental and may break. Please file an issue if you encounter any problems with it.
-
- Valid values:
-
- - `base`: use the tokenizer from the base model
- - `union`: construct a tokenizer with all tokens from all models
- - `model:<model_path>`: use the tokenizer from a specific model
-
- If set, mergekit will find a mapping between each model's vocabulary and the output tokenizer. This allows models with different vocabularies or added tokens to be meaningfully merged.
-
- `tokenizer_source` is compatible with all merge methods, but when it is used, `lm_head`/`embed_tokens` will be merged linearly. For two-model merges, the `embed_slerp` parameter can be set to `true` to use SLERP instead.
-
- If the `tokenizer_source` field is not set, mergekit will fall back to its legacy default behavior. The tokenizer for the base model (or the first model in the merge, if no base model is specified) will be copied to the output directory. The parameter matrices for `lm_head`/`embed_tokens` will be truncated to the smallest size present in the merge. In _most_ cases this corresponds to using the tokenizer for the base model.
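For example, a union tokenizer could be requested like this (the models are hypothetical placeholders):

```yaml
# Hypothetical: merge two models with different added tokens, keeping all of them.
models:
  - model: your-org/model-a
  - model: your-org/model-b
merge_method: linear
tokenizer_source: union
dtype: float16
```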
- ### Examples
-
- Several examples of merge configurations are available in [`examples/`](examples/).
-
- ## Merge Methods
-
- A quick overview of the currently supported merge methods:
-
- | Method                                                                                        | `merge_method` value | Multi-Model | Uses base model |
- | --------------------------------------------------------------------------------------------- | -------------------- | ----------- | --------------- |
- | Linear ([Model Soups](https://arxiv.org/abs/2203.05482))                                      | `linear`             | ✅          | ❌              |
- | SLERP                                                                                         | `slerp`              | ❌          | ✅              |
- | [Task Arithmetic](https://arxiv.org/abs/2212.04089)                                           | `task_arithmetic`    | ✅          | ✅              |
- | [TIES](https://arxiv.org/abs/2306.01708)                                                      | `ties`               | ✅          | ✅              |
- | [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708)             | `dare_ties`          | ✅          | ✅              |
- | [DARE](https://arxiv.org/abs/2311.03099) [Task Arithmetic](https://arxiv.org/abs/2212.04089)  | `dare_linear`        | ✅          | ✅              |
- | Passthrough                                                                                   | `passthrough`        | ❌          | ❌              |
- | [Model Stock](https://arxiv.org/abs/2403.19522)                                               | `model_stock`        | ✅          | ✅              |
- ### Linear
-
- The classic merge method - a simple weighted average.
-
- Parameters:
-
- - `weight` - relative (or absolute if `normalize=False`) weighting of a given tensor
- - `normalize` - if true, the weights of all models contributing to a tensor will be normalized to sum to one. This is the default behavior.
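A rough per-tensor sketch of what `linear` computes (for illustration only; this is not mergekit's out-of-core implementation):

```python
# Weighted average of one tensor across models; weights are normalized by default.
import torch

def linear_merge(tensors: list[torch.Tensor], weights: list[float],
                 normalize: bool = True) -> torch.Tensor:
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    return sum(w * t for w, t in zip(weights, tensors))
```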
- ### SLERP
-
- Spherically interpolate the parameters of two models. One must be set as `base_model`.
-
- Parameters:
-
- - `t` - interpolation factor. At `t=0` the result is `base_model`; at `t=1` it is the other model.
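A rough sketch of SLERP applied to a pair of tensors, treating each as one flattened vector (again, illustrative rather than mergekit's implementation):

```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    a_unit = a / (a.norm() + eps)
    b_unit = b / (b.norm() + eps)
    # Angle between the two parameter vectors.
    omega = torch.arccos((a_unit * b_unit).sum().clamp(-1.0, 1.0))
    if omega.abs() < 1e-4:  # nearly parallel: plain lerp is numerically safer
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    return (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```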
- ### [Task Arithmetic](https://arxiv.org/abs/2212.04089)
-
- Computes "task vectors" for each model by subtracting a base model, merges the task vectors linearly, and adds back the base. Works great for models that were fine-tuned from a common ancestor, and is also a very useful mental framework for several of the more involved merge methods.
-
- Parameters: same as [Linear](#linear)
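A rough per-tensor sketch of the idea:

```python
# Task arithmetic for one tensor: subtract the base, mix the task vectors
# linearly, then add the base back.
import torch

def task_arithmetic(base: torch.Tensor, finetuned: list[torch.Tensor],
                    weights: list[float]) -> torch.Tensor:
    task_vectors = [ft - base for ft in finetuned]
    return base + sum(w * tv for w, tv in zip(weights, task_vectors))
```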
- ### [TIES](https://arxiv.org/abs/2306.01708)
-
- Builds on the task arithmetic framework. Resolves interference between models by sparsifying the task vectors and applying a sign-consensus algorithm, which lets you merge a larger number of models while retaining more of their strengths.
-
- Parameters: same as [Linear](#linear), plus:
-
- - `density` - fraction of weights in differences from the base model to retain
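A rough per-tensor sketch of the trim / elect-sign / disjoint-merge steps, simplified from the paper (not mergekit's implementation):

```python
import torch

def ties_merge(base: torch.Tensor, finetuned: list[torch.Tensor],
               density: float = 0.5) -> torch.Tensor:
    # Trim: keep only the top-`density` fraction of each task vector by magnitude.
    trimmed = []
    for ft in finetuned:
        tv = ft - base
        k = max(1, int(density * tv.numel()))
        threshold = tv.abs().flatten().kthvalue(tv.numel() - k + 1).values
        trimmed.append(torch.where(tv.abs() >= threshold, tv, torch.zeros_like(tv)))
    stacked = torch.stack(trimmed)
    # Elect: majority sign per element, weighted by total mass.
    sign = torch.sign(stacked.sum(dim=0))
    # Disjoint merge: average only the surviving entries that agree with the sign.
    agree = (torch.sign(stacked) == sign) & (stacked != 0)
    kept = torch.where(agree, stacked, torch.zeros_like(stacked))
    return base + kept.sum(dim=0) / agree.sum(dim=0).clamp(min=1)
```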
- ### [DARE](https://arxiv.org/abs/2311.03099)
-
- In the same vein as TIES, DARE sparsifies task vectors to reduce interference. It differs in that it uses random pruning with a novel rescaling to better match the performance of the original models. DARE can be used either with the sign-consensus algorithm of TIES (`dare_ties`) or without it (`dare_linear`).
-
- Parameters: same as [TIES](#ties) for `dare_ties`, or [Linear](#linear) for `dare_linear`
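The drop-and-rescale step is simple enough to sketch per tensor (illustrative only):

```python
# DARE: randomly zero a fraction of a task vector's entries, then rescale
# the survivors so its expected value is unchanged.
import torch

def dare_sparsify(task_vector: torch.Tensor, density: float = 0.5) -> torch.Tensor:
    mask = torch.bernoulli(torch.full_like(task_vector, density))
    return task_vector * mask / density
```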
- ### Passthrough
-
- `passthrough` is a no-op that simply passes input tensors through unmodified. It is meant for layer-stacking merges where you have only one input model. Useful for frankenmerging.
-
- ### [Model Stock](https://arxiv.org/abs/2403.19522)
-
- Uses some neat geometric properties of fine-tuned models to compute good weights for linear interpolation. Requires at least three models, including a base model.
-
- Parameters:
-
- - `filter_wise`: if true, weight calculation will be per-row rather than per-tensor. Not recommended.
-
- # Citation
-
- We now have a [paper](https://arxiv.org/abs/2403.13257) you can cite for the MergeKit library:
-
- ```bibtex
- @article{goddard2024arcee,
-     title={Arcee's MergeKit: A Toolkit for Merging Large Language Models},
-     author={Goddard, Charles and Siriwardhana, Shamane and Ehghaghi, Malikeh and Meyers, Luke and Karpukhin, Vlad and Benedict, Brian and McQuade, Mark and Solawetz, Jacob},
-     journal={arXiv preprint arXiv:2403.13257},
-     year={2024}
- }
+ ---
+ base_model:
+ - Shaleen123/phi-2-maths
+ - Shaleen123/phi-2-code
+ - Shaleen123/phi-2-4bits
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ ---
+ # merge
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [Shaleen123/phi-2-maths](https://huggingface.co/Shaleen123/phi-2-maths)
+ * [Shaleen123/phi-2-code](https://huggingface.co/Shaleen123/phi-2-code)
+ * [Shaleen123/phi-2-4bits](https://huggingface.co/Shaleen123/phi-2-4bits)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+
+ models:
+   - model: Shaleen123/phi-2-code
+     parameters:
+       weight: 0.5
+   - model: Shaleen123/phi-2-maths
+     parameters:
+       weight: 0.3
+   - model: Shaleen123/phi-2-4bits
+     parameters:
+       weight: 1.0
+ merge_method: linear
+ dtype: float16
  ```
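As a usage sketch (not part of the original model card), the merged model should load like any other Phi-2 checkpoint; `trust_remote_code=True` may be needed because `config.json`'s `auto_map` points at the `microsoft/phi-2` modeling code. The repository id below is a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Shaleen123/<this-repo>"  # placeholder for the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```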
added_tokens.json ADDED
@@ -0,0 +1,40 @@
+ {
+   "\t\t": 50294,
+   "\t\t\t": 50293,
+   "\t\t\t\t": 50292,
+   "\t\t\t\t\t": 50291,
+   "\t\t\t\t\t\t": 50290,
+   "\t\t\t\t\t\t\t": 50289,
+   "\t\t\t\t\t\t\t\t": 50288,
+   "\t\t\t\t\t\t\t\t\t": 50287,
+   "  ": 50286,
+   "   ": 50285,
+   "    ": 50284,
+   "     ": 50283,
+   "      ": 50282,
+   "       ": 50281,
+   "        ": 50280,
+   "         ": 50279,
+   "          ": 50278,
+   "           ": 50277,
+   "            ": 50276,
+   "             ": 50275,
+   "              ": 50274,
+   "               ": 50273,
+   "                ": 50272,
+   "                 ": 50271,
+   "                  ": 50270,
+   "                   ": 50269,
+   "                    ": 50268,
+   "                     ": 50267,
+   "                      ": 50266,
+   "                       ": 50265,
+   "                        ": 50264,
+   "                         ": 50263,
+   "                          ": 50262,
+   "                           ": 50261,
+   "                            ": 50260,
+   "                             ": 50259,
+   "                              ": 50258,
+   "                               ": 50257
+ }
config.json ADDED
@@ -0,0 +1,48 @@
+ {
+   "_name_or_path": "Shaleen123/phi-2-maths",
+   "architectures": [
+     "PhiForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "auto_map": {
+     "AutoConfig": "microsoft/phi-2--configuration_phi.PhiConfig",
+     "AutoModelForCausalLM": "microsoft/phi-2--modeling_phi.PhiForCausalLM"
+   },
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.0,
+   "eos_token_id": 50256,
+   "hidden_act": "gelu_new",
+   "hidden_size": 2560,
+   "initializer_range": 0.02,
+   "intermediate_size": 10240,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 2048,
+   "model_type": "phi",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 32,
+   "partial_rotary_factor": 0.4,
+   "qk_layernorm": false,
+   "quantization_config": {
+     "_load_in_4bit": true,
+     "_load_in_8bit": false,
+     "bnb_4bit_compute_dtype": "float32",
+     "bnb_4bit_quant_type": "fp4",
+     "bnb_4bit_use_double_quant": false,
+     "llm_int8_enable_fp32_cpu_offload": false,
+     "llm_int8_has_fp16_weight": false,
+     "llm_int8_skip_modules": null,
+     "llm_int8_threshold": 6.0,
+     "load_in_4bit": true,
+     "load_in_8bit": false,
+     "quant_method": "bitsandbytes"
+   },
+   "resid_pdrop": 0.1,
+   "rope_scaling": null,
+   "rope_theta": 10000.0,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float16",
+   "transformers_version": "4.38.2",
+   "use_cache": true,
+   "vocab_size": 51200
+ }
mergekit_config.yml ADDED
@@ -0,0 +1,13 @@
+
+ models:
+   - model: Shaleen123/phi-2-code
+     parameters:
+       weight: 0.5
+   - model: Shaleen123/phi-2-maths
+     parameters:
+       weight: 0.3
+   - model: Shaleen123/phi-2-4bits
+     parameters:
+       weight: 1.0
+ merge_method: linear
+ dtype: float16
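Given this file, the merge should be reproducible with the `mergekit-yaml` entry point documented in the README diff above, e.g. `mergekit-yaml mergekit_config.yml ./output-model-directory`.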
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e6a858e3c69e1ba3c09c6d49a7dee2460ccd4c13b599704e608c88df6fa46c64
+ size 1993680248
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6a42859d88e8973ad35321d724fc93498b3b74ce153db25b5e975d0c5eeb1395
+ size 1049154408
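These LFS pointers stand in for the actual weight shards; the index file added next maps each tensor name to the shard that stores it. A sketch of resolving one tensor locally (assuming the repository files have been downloaded to the working directory):

```python
import json
from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.embed_tokens.weight"
shard = index["weight_map"][name]  # e.g. "model-00002-of-00002.safetensors"
with safe_open(shard, framework="pt") as f:
    tensor = f.get_tensor(name)
print(tensor.shape)
```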
model.safetensors.index.json ADDED
@@ -0,0 +1 @@
+ {"metadata": {"mergekit_version": "0.0.4.2", "total_size": 3042785280}, "weight_map": {"model.final_layernorm.weight": "model-00001-of-00002.safetensors", "model.final_layernorm.bias": "model-00001-of-00002.safetensors", "lm_head.weight": "model-00001-of-00002.safetensors", "lm_head.bias": "model-00001-of-00002.safetensors", "model.layers.31.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.31.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.31.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.31.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.31.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.31.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.31.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.31.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.31.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.31.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.31.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.31.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.31.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.31.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.30.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.30.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.30.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.30.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.30.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.30.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.30.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.30.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.30.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.30.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.30.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.30.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.30.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.30.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.29.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.29.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.29.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.29.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.29.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.29.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.29.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.29.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.29.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.29.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.29.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.29.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.29.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.29.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.28.mlp.fc2.weight": "model-00001-of-00002.safetensors", 
"model.layers.28.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.28.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.28.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.28.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.28.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.28.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.28.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.28.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.28.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.28.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.28.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.28.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.28.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.27.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.27.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.27.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.27.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.27.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.27.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.27.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.27.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.27.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.27.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.27.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.27.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.27.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.27.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.26.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.26.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.26.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.26.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.26.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.26.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.26.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.26.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.26.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.26.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.26.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.26.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.26.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.26.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.25.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.25.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.25.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.25.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.25.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.25.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", 
"model.layers.25.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.25.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.25.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.25.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.25.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.25.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.25.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.25.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.24.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.24.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.24.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.24.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.24.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.24.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.24.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.24.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.24.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.24.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.24.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.24.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.24.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.24.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.23.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.23.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.23.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.23.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.23.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.23.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.23.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.23.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.23.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.23.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.23.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.23.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.23.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.23.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.22.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.22.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.22.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.22.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.22.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.22.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.22.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.22.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.22.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.22.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.22.self_attn.dense.weight": "model-00001-of-00002.safetensors", 
"model.layers.22.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.22.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.22.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.21.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.21.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.21.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.21.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.21.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.21.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.21.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.21.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.21.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.21.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.21.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.21.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.21.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.21.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.20.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.20.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.20.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.20.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.20.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.20.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.20.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.20.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.20.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.20.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.19.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.19.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.19.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.19.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.19.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.19.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.19.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.19.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.19.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.19.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.18.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.18.mlp.fc2.bias": "model-00001-of-00002.safetensors", 
"model.layers.18.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.18.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.18.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.18.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.18.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.18.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.18.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.18.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.17.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.17.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.17.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.17.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.17.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.17.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.17.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.17.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.17.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.17.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.16.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.16.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.16.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.16.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.16.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.16.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.16.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.16.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.16.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.16.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.15.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.15.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.15.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.15.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.15.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", 
"model.layers.15.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.15.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.15.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.15.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.15.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.14.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.14.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.14.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.14.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.14.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.14.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.14.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.14.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.14.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.14.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.13.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.13.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.13.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.13.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.13.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.13.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.13.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.13.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.13.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.13.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.12.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.12.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.12.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.12.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.12.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.12.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.12.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.12.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.12.self_attn.dense.bias": "model-00001-of-00002.safetensors", 
"model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.12.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.11.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.11.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.11.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.11.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.11.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.11.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.11.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.11.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.11.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.11.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.10.mlp.fc2.weight": "model-00001-of-00002.safetensors", "model.layers.10.mlp.fc2.bias": "model-00001-of-00002.safetensors", "model.layers.10.mlp.fc1.weight": "model-00001-of-00002.safetensors", "model.layers.10.mlp.fc1.bias": "model-00001-of-00002.safetensors", "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors", "model.layers.10.self_attn.v_proj.bias": "model-00001-of-00002.safetensors", "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors", "model.layers.10.self_attn.k_proj.bias": "model-00001-of-00002.safetensors", "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors", "model.layers.10.self_attn.q_proj.bias": "model-00001-of-00002.safetensors", "model.layers.10.self_attn.dense.weight": "model-00001-of-00002.safetensors", "model.layers.10.self_attn.dense.bias": "model-00001-of-00002.safetensors", "model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors", "model.layers.10.input_layernorm.bias": "model-00001-of-00002.safetensors", "model.layers.9.mlp.fc2.weight": "model-00002-of-00002.safetensors", "model.layers.9.mlp.fc2.bias": "model-00002-of-00002.safetensors", "model.layers.9.mlp.fc1.weight": "model-00002-of-00002.safetensors", "model.layers.9.mlp.fc1.bias": "model-00002-of-00002.safetensors", "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00002.safetensors", "model.layers.9.self_attn.v_proj.bias": "model-00002-of-00002.safetensors", "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00002.safetensors", "model.layers.9.self_attn.k_proj.bias": "model-00002-of-00002.safetensors", "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00002.safetensors", "model.layers.9.self_attn.q_proj.bias": "model-00002-of-00002.safetensors", "model.layers.9.self_attn.dense.weight": "model-00002-of-00002.safetensors", "model.layers.9.self_attn.dense.bias": "model-00002-of-00002.safetensors", "model.layers.9.input_layernorm.weight": "model-00002-of-00002.safetensors", "model.layers.9.input_layernorm.bias": "model-00002-of-00002.safetensors", "model.layers.8.mlp.fc2.weight": "model-00002-of-00002.safetensors", "model.layers.8.mlp.fc2.bias": "model-00002-of-00002.safetensors", "model.layers.8.mlp.fc1.weight": "model-00002-of-00002.safetensors", "model.layers.8.mlp.fc1.bias": 
"model-00002-of-00002.safetensors", "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00002.safetensors", "model.layers.8.self_attn.v_proj.bias": "model-00002-of-00002.safetensors", "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00002.safetensors", "model.layers.8.self_attn.k_proj.bias": "model-00002-of-00002.safetensors", "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00002.safetensors", "model.layers.8.self_attn.q_proj.bias": "model-00002-of-00002.safetensors", "model.layers.8.self_attn.dense.weight": "model-00002-of-00002.safetensors", "model.layers.8.self_attn.dense.bias": "model-00002-of-00002.safetensors", "model.layers.8.input_layernorm.weight": "model-00002-of-00002.safetensors", "model.layers.8.input_layernorm.bias": "model-00002-of-00002.safetensors", "model.layers.7.mlp.fc2.weight": "model-00002-of-00002.safetensors", "model.layers.7.mlp.fc2.bias": "model-00002-of-00002.safetensors", "model.layers.7.mlp.fc1.weight": "model-00002-of-00002.safetensors", "model.layers.7.mlp.fc1.bias": "model-00002-of-00002.safetensors", "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00002.safetensors", "model.layers.7.self_attn.v_proj.bias": "model-00002-of-00002.safetensors", "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00002.safetensors", "model.layers.7.self_attn.k_proj.bias": "model-00002-of-00002.safetensors", "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00002.safetensors", "model.layers.7.self_attn.q_proj.bias": "model-00002-of-00002.safetensors", "model.layers.7.self_attn.dense.weight": "model-00002-of-00002.safetensors", "model.layers.7.self_attn.dense.bias": "model-00002-of-00002.safetensors", "model.layers.7.input_layernorm.weight": "model-00002-of-00002.safetensors", "model.layers.7.input_layernorm.bias": "model-00002-of-00002.safetensors", "model.layers.6.mlp.fc2.weight": "model-00002-of-00002.safetensors", "model.layers.6.mlp.fc2.bias": "model-00002-of-00002.safetensors", "model.layers.6.mlp.fc1.weight": "model-00002-of-00002.safetensors", "model.layers.6.mlp.fc1.bias": "model-00002-of-00002.safetensors", "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00002.safetensors", "model.layers.6.self_attn.v_proj.bias": "model-00002-of-00002.safetensors", "model.layers.6.self_attn.k_proj.weight": "model-00002-of-00002.safetensors", "model.layers.6.self_attn.k_proj.bias": "model-00002-of-00002.safetensors", "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00002.safetensors", "model.layers.6.self_attn.q_proj.bias": "model-00002-of-00002.safetensors", "model.layers.6.self_attn.dense.weight": "model-00002-of-00002.safetensors", "model.layers.6.self_attn.dense.bias": "model-00002-of-00002.safetensors", "model.layers.6.input_layernorm.weight": "model-00002-of-00002.safetensors", "model.layers.6.input_layernorm.bias": "model-00002-of-00002.safetensors", "model.layers.5.mlp.fc2.weight": "model-00002-of-00002.safetensors", "model.layers.5.mlp.fc2.bias": "model-00002-of-00002.safetensors", "model.layers.5.mlp.fc1.weight": "model-00002-of-00002.safetensors", "model.layers.5.mlp.fc1.bias": "model-00002-of-00002.safetensors", "model.layers.5.self_attn.v_proj.weight": "model-00002-of-00002.safetensors", "model.layers.5.self_attn.v_proj.bias": "model-00002-of-00002.safetensors", "model.layers.5.self_attn.k_proj.weight": "model-00002-of-00002.safetensors", "model.layers.5.self_attn.k_proj.bias": "model-00002-of-00002.safetensors", "model.layers.5.self_attn.q_proj.weight": "model-00002-of-00002.safetensors", 
"model.layers.5.self_attn.q_proj.bias": "model-00002-of-00002.safetensors", "model.layers.5.self_attn.dense.weight": "model-00002-of-00002.safetensors", "model.layers.5.self_attn.dense.bias": "model-00002-of-00002.safetensors", "model.layers.5.input_layernorm.weight": "model-00002-of-00002.safetensors", "model.layers.5.input_layernorm.bias": "model-00002-of-00002.safetensors", "model.layers.4.mlp.fc2.weight": "model-00002-of-00002.safetensors", "model.layers.4.mlp.fc2.bias": "model-00002-of-00002.safetensors", "model.layers.4.mlp.fc1.weight": "model-00002-of-00002.safetensors", "model.layers.4.mlp.fc1.bias": "model-00002-of-00002.safetensors", "model.layers.4.self_attn.v_proj.weight": "model-00002-of-00002.safetensors", "model.layers.4.self_attn.v_proj.bias": "model-00002-of-00002.safetensors", "model.layers.4.self_attn.k_proj.weight": "model-00002-of-00002.safetensors", "model.layers.4.self_attn.k_proj.bias": "model-00002-of-00002.safetensors", "model.layers.4.self_attn.q_proj.weight": "model-00002-of-00002.safetensors", "model.layers.4.self_attn.q_proj.bias": "model-00002-of-00002.safetensors", "model.layers.4.self_attn.dense.weight": "model-00002-of-00002.safetensors", "model.layers.4.self_attn.dense.bias": "model-00002-of-00002.safetensors", "model.layers.4.input_layernorm.weight": "model-00002-of-00002.safetensors", "model.layers.4.input_layernorm.bias": "model-00002-of-00002.safetensors", "model.layers.3.mlp.fc2.weight": "model-00002-of-00002.safetensors", "model.layers.3.mlp.fc2.bias": "model-00002-of-00002.safetensors", "model.layers.3.mlp.fc1.weight": "model-00002-of-00002.safetensors", "model.layers.3.mlp.fc1.bias": "model-00002-of-00002.safetensors", "model.layers.3.self_attn.v_proj.weight": "model-00002-of-00002.safetensors", "model.layers.3.self_attn.v_proj.bias": "model-00002-of-00002.safetensors", "model.layers.3.self_attn.k_proj.weight": "model-00002-of-00002.safetensors", "model.layers.3.self_attn.k_proj.bias": "model-00002-of-00002.safetensors", "model.layers.3.self_attn.q_proj.weight": "model-00002-of-00002.safetensors", "model.layers.3.self_attn.q_proj.bias": "model-00002-of-00002.safetensors", "model.layers.3.self_attn.dense.weight": "model-00002-of-00002.safetensors", "model.layers.3.self_attn.dense.bias": "model-00002-of-00002.safetensors", "model.layers.3.input_layernorm.weight": "model-00002-of-00002.safetensors", "model.layers.3.input_layernorm.bias": "model-00002-of-00002.safetensors", "model.layers.2.mlp.fc2.weight": "model-00002-of-00002.safetensors", "model.layers.2.mlp.fc2.bias": "model-00002-of-00002.safetensors", "model.layers.2.mlp.fc1.weight": "model-00002-of-00002.safetensors", "model.layers.2.mlp.fc1.bias": "model-00002-of-00002.safetensors", "model.layers.2.self_attn.v_proj.weight": "model-00002-of-00002.safetensors", "model.layers.2.self_attn.v_proj.bias": "model-00002-of-00002.safetensors", "model.layers.2.self_attn.k_proj.weight": "model-00002-of-00002.safetensors", "model.layers.2.self_attn.k_proj.bias": "model-00002-of-00002.safetensors", "model.layers.2.self_attn.q_proj.weight": "model-00002-of-00002.safetensors", "model.layers.2.self_attn.q_proj.bias": "model-00002-of-00002.safetensors", "model.layers.2.self_attn.dense.weight": "model-00002-of-00002.safetensors", "model.layers.2.self_attn.dense.bias": "model-00002-of-00002.safetensors", "model.layers.2.input_layernorm.weight": "model-00002-of-00002.safetensors", "model.layers.2.input_layernorm.bias": "model-00002-of-00002.safetensors", "model.layers.1.mlp.fc2.weight": 
"model-00002-of-00002.safetensors", "model.layers.1.mlp.fc2.bias": "model-00002-of-00002.safetensors", "model.layers.1.mlp.fc1.weight": "model-00002-of-00002.safetensors", "model.layers.1.mlp.fc1.bias": "model-00002-of-00002.safetensors", "model.layers.1.self_attn.v_proj.weight": "model-00002-of-00002.safetensors", "model.layers.1.self_attn.v_proj.bias": "model-00002-of-00002.safetensors", "model.layers.1.self_attn.k_proj.weight": "model-00002-of-00002.safetensors", "model.layers.1.self_attn.k_proj.bias": "model-00002-of-00002.safetensors", "model.layers.1.self_attn.q_proj.weight": "model-00002-of-00002.safetensors", "model.layers.1.self_attn.q_proj.bias": "model-00002-of-00002.safetensors", "model.layers.1.self_attn.dense.weight": "model-00002-of-00002.safetensors", "model.layers.1.self_attn.dense.bias": "model-00002-of-00002.safetensors", "model.layers.1.input_layernorm.weight": "model-00002-of-00002.safetensors", "model.layers.1.input_layernorm.bias": "model-00002-of-00002.safetensors", "model.layers.0.mlp.fc2.weight": "model-00002-of-00002.safetensors", "model.layers.0.mlp.fc2.bias": "model-00002-of-00002.safetensors", "model.layers.0.mlp.fc1.weight": "model-00002-of-00002.safetensors", "model.layers.0.mlp.fc1.bias": "model-00002-of-00002.safetensors", "model.layers.0.self_attn.v_proj.weight": "model-00002-of-00002.safetensors", "model.layers.0.self_attn.v_proj.bias": "model-00002-of-00002.safetensors", "model.layers.0.self_attn.k_proj.weight": "model-00002-of-00002.safetensors", "model.layers.0.self_attn.k_proj.bias": "model-00002-of-00002.safetensors", "model.layers.0.self_attn.q_proj.weight": "model-00002-of-00002.safetensors", "model.layers.0.self_attn.q_proj.bias": "model-00002-of-00002.safetensors", "model.layers.0.self_attn.dense.weight": "model-00002-of-00002.safetensors", "model.layers.0.self_attn.dense.bias": "model-00002-of-00002.safetensors", "model.layers.0.input_layernorm.weight": "model-00002-of-00002.safetensors", "model.layers.0.input_layernorm.bias": "model-00002-of-00002.safetensors", "model.embed_tokens.weight": "model-00002-of-00002.safetensors"}}
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "bos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,323 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "50256": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "50257": {
+       "content": "                               ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50258": {
+       "content": "                              ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50259": {
+       "content": "                             ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50260": {
+       "content": "                            ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50261": {
+       "content": "                           ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50262": {
+       "content": "                          ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50263": {
+       "content": "                         ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50264": {
+       "content": "                        ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50265": {
+       "content": "                       ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50266": {
+       "content": "                      ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50267": {
+       "content": "                     ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50268": {
+       "content": "                    ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50269": {
+       "content": "                   ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50270": {
+       "content": "                  ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50271": {
+       "content": "                 ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50272": {
+       "content": "                ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50273": {
+       "content": "               ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50274": {
+       "content": "              ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50275": {
+       "content": "             ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50276": {
+       "content": "            ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50277": {
+       "content": "           ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50278": {
+       "content": "          ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50279": {
+       "content": "         ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50280": {
+       "content": "        ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50281": {
+       "content": "       ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50282": {
+       "content": "      ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50283": {
+       "content": "     ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50284": {
+       "content": "    ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50285": {
+       "content": "   ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50286": {
+       "content": "  ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50287": {
+       "content": "\t\t\t\t\t\t\t\t\t",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50288": {
+       "content": "\t\t\t\t\t\t\t\t",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50289": {
+       "content": "\t\t\t\t\t\t\t",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50290": {
+       "content": "\t\t\t\t\t\t",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50291": {
+       "content": "\t\t\t\t\t",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50292": {
+       "content": "\t\t\t\t",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50293": {
+       "content": "\t\t\t",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50294": {
+       "content": "\t\t",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "bos_token": "<|endoftext|>",
+   "clean_up_tokenization_spaces": true,
+   "eos_token": "<|endoftext|>",
+   "model_max_length": 2048,
+   "tokenizer_class": "CodeGenTokenizer",
+   "unk_token": "<|endoftext|>"
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff