LoserCheems committed on
Commit
704c96d
1 Parent(s): ebfbcdb

Upload DogeForCausalLM

Files changed (6)
  1. README.md +199 -0
  2. config.json +39 -0
  3. configuration_doge.py +197 -0
  4. generation_config.json +7 -0
  5. model.safetensors +3 -0
  6. modeling_doge.py +1144 -0
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
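Since the quick-start above is still a placeholder, here is a minimal loading sketch. The repo id is an assumption taken from the example linked in `configuration_doge.py` (substitute the actual Hub repo id); `trust_remote_code=True` is required because `config.json` maps the architecture to the custom `configuration_doge.DogeConfig` / `modeling_doge.DogeForCausalLM` classes.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "LoserCheems/doge-tiny-test"  # assumption: substitute the actual Hub repo id

# trust_remote_code=True is required: config.json's auto_map points to the custom Doge classes
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

inputs = tokenizer("Hello, I am Doge.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```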
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
config.json ADDED
@@ -0,0 +1,39 @@
+ {
+   "_name_or_path": "./results/doge_22M/checkpoint-5000",
+   "architectures": [
+     "DogeForCausalLM"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_doge.DogeConfig",
+     "AutoModelForCausalLM": "modeling_doge.DogeForCausalLM"
+   },
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_bias": false,
+   "hidden_dropout": 0.0,
+   "hidden_size": 256,
+   "initializer_range": 0.02,
+   "inner_values_retrieval_size": 128,
+   "intermediate_size": 1024,
+   "max_position_embeddings": 16384,
+   "model_type": "doge",
+   "num_attention_heads": 2,
+   "num_cdmmoe_experts": 1024,
+   "num_cdmmoe_experts_per_head": 2,
+   "num_cdmmoe_heads": 1,
+   "num_hidden_layers": 4,
+   "num_inner_value_heads": 1,
+   "num_inner_values": 2,
+   "num_value_per_head": 1,
+   "pad_token_id": 0,
+   "private_expert_retrieval_size": 256,
+   "rms_norm_eps": 1e-06,
+   "rope_scaling": null,
+   "rope_theta": 10000.0,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float32",
+   "transformers_version": "4.46.1",
+   "use_cache": true,
+   "vocab_size": 32768
+ }
configuration_doge.py ADDED
@@ -0,0 +1,197 @@
+ # coding=utf-8
+ # Copyright 2024 Jingze Shi and the HuggingFace Inc. team. All rights reserved.
+ #
+ # This code is based on the Wonderful Matrices paper implementation.
+ #
+ # https://arxiv.org/abs/2407.16958
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """PyTorch Doge model configuration"""
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.modeling_rope_utils import rope_config_validation
+
+
+ class DogeConfig(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`DogeModel`]. It is used to instantiate a Doge
+     model according to the specified arguments, defining the model architecture like [LoserCheems/doge-tiny-test](https://huggingface.co/LoserCheems/doge-tiny-test)
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 32768):
+             Vocabulary size of the Doge model. Defines the number of different tokens that can be represented by the
+             `inputs_ids` passed when calling [`DogeModel`]
+         hidden_size (`int`, *optional*, defaults to 1024):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 4096):
+             Dimension of the CDMoE representations.
+         num_hidden_layers (`int`, *optional*, defaults to 16):
+             Number of hidden layers in the Transformer decoder.
+         hidden_bias (`bool`, *optional*, defaults to `False`):
+             Whether to use bias in the hidden layers.
+         hidden_dropout (`float`, *optional*, defaults to 0.0):
+             Dropout probability for each sequence transformation and state transformation module.
+         hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         max_position_embeddings (`int`, *optional*, defaults to 16384):
+             The maximum sequence length that this model might ever be used with.
+         rope_theta (`float`, *optional*, defaults to 10000.0):
+             The base period of the RoPE embeddings.
+         rope_scaling (`Dict`, *optional*):
+             Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope
+             type and you expect the model to work on longer `max_position_embeddings`, we recommend you update this
+             value accordingly.
+             Expected contents:
+                 `rope_type` (`str`):
+                     The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
+                     'llama3'], with 'default' being the original RoPE implementation.
+                 `factor` (`float`, *optional*):
+                     Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
+                     most scaling types, a `factor` of x will enable the model to handle sequences of length x *
+                     original maximum pre-trained length.
+                 `original_max_position_embeddings` (`int`, *optional*):
+                     Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
+                     pretraining.
+                 `attention_factor` (`float`, *optional*):
+                     Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
+                     computation. If unspecified, it defaults to value recommended by the implementation, using the
+                     `factor` field to infer the suggested value.
+                 `beta_fast` (`float`, *optional*):
+                     Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
+                     ramp function. If unspecified, it defaults to 32.
+                 `beta_slow` (`float`, *optional*):
+                     Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
+                     ramp function. If unspecified, it defaults to 1.
+                 `short_factor` (`List[float]`, *optional*):
+                     Only used with 'longrope'. The scaling factor to be applied to short contexts (<
+                     `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
+                     size divided by the number of attention heads divided by 2
+                 `long_factor` (`List[float]`, *optional*):
+                     Only used with 'longrope'. The scaling factor to be applied to long contexts (>
+                     `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
+                     size divided by the number of attention heads divided by 2
+                 `low_freq_factor` (`float`, *optional*):
+                     Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
+                 `high_freq_factor` (`float`, *optional*):
+                     Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         rms_norm_eps (`float`, *optional*, defaults to 1e-06):
+             The epsilon used by the rms normalization layers.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         pad_token_id (`int`, *optional*, defaults to 0):
+             Padding token id.
+         bos_token_id (`int`, *optional*, defaults to 1):
+             Beginning of stream token id.
+         eos_token_id (`int`, *optional*, defaults to 2):
+             End of stream token id.
+         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+             Whether to tie weight embeddings.
+         num_attention_heads (`int`, *optional*, defaults to 8):
+             Number of attention heads for each attention layer in the Transformer decoder.
+         num_inner_values (`int`, *optional*, defaults to 8):
+             Number of inner values for Inner Function Attention.
+         num_inner_value_heads (`int`, *optional*, defaults to 4):
+             Number of inner value heads for Inner Function Attention.
+         num_value_per_head (`int`, *optional*, defaults to 4):
+             Number of values per head, can't be greater than `num_inner_values`.
+         inner_values_retrieval_size (`int`, *optional*, defaults to 128):
+             Dimension of the inner values retrieval states for each attention layer in the Transformer decoder
+         private_expert_retrieval_size (`int`, *optional*, defaults to 256):
+             Dimension of the Private Expert retrieval states for the Cross Domain Mixture of Experts.
+         num_cdmmoe_experts (`int`, *optional*, defaults to 4096):
+             Number of Private Experts for the Cross Domain Mixture of Experts.
+         num_cdmmoe_heads (`int`, *optional*, defaults to 4):
+             Number of heads of Private Experts for the Cross Domain Mixture of Experts.
+         num_cdmmoe_experts_per_head (`int`, *optional*, defaults to 8):
+             Number of Private Experts per head for the Cross Domain Mixture of Experts.
+     """
+
+     model_type = "doge"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     def __init__(
+         self,
+         vocab_size=32768,
+         hidden_size=1024,
+         intermediate_size=4096,
+         num_hidden_layers=16,
+         hidden_bias=False,
+         hidden_dropout=0.0,
+         hidden_act="silu",
+         max_position_embeddings=16384,
+         rope_theta=10000.0,
+         rope_scaling=None,
+         initializer_range=0.02,
+         rms_norm_eps=1e-06,
+         use_cache=True,
+         pad_token_id=0,
+         bos_token_id=1,
+         eos_token_id=2,
+         tie_word_embeddings=False,
+         num_attention_heads=8,
+         num_inner_values=8,
+         num_inner_value_heads=4,
+         num_value_per_head=4,
+         inner_values_retrieval_size=128,
+         private_expert_retrieval_size=256,
+         num_cdmmoe_experts=4096,
+         num_cdmmoe_heads=4,
+         num_cdmmoe_experts_per_head=8,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.hidden_bias = hidden_bias
+         self.hidden_dropout = hidden_dropout
+         self.hidden_act = hidden_act
+         self.max_position_embeddings = max_position_embeddings
+         self.rope_theta = rope_theta
+         self.rope_scaling = rope_scaling
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.use_cache = use_cache
+         self.pad_token_id = pad_token_id
+         self.bos_token_id = bos_token_id
+         self.eos_token_id = eos_token_id
+         self.tie_word_embeddings = tie_word_embeddings
+         self.num_attention_heads = num_attention_heads
+         self.num_inner_values = num_inner_values
+         self.num_inner_value_heads = num_inner_value_heads
+         self.num_value_per_head = num_value_per_head
+         self.inner_values_retrieval_size = inner_values_retrieval_size
+         self.private_expert_retrieval_size = private_expert_retrieval_size
+         self.num_cdmmoe_experts = num_cdmmoe_experts
+         self.num_cdmmoe_heads = num_cdmmoe_heads
+         self.num_cdmmoe_experts_per_head = num_cdmmoe_experts_per_head
+
+         # Validate the correctness of rotary position embeddings parameters
+         # BC: if there is a 'type' field, copy it to 'rope_type'.
+         if self.rope_scaling is not None and "type" in self.rope_scaling:
+             self.rope_scaling["rope_type"] = self.rope_scaling["type"]
+         rope_config_validation(self)
+
+         super().__init__(
+             pad_token_id=pad_token_id,
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
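For reference, a minimal sketch of instantiating this configuration with the values carried by this checkpoint's `config.json` (every other argument keeps the defaults documented above; the import path assumes you run next to the downloaded files, otherwise load it via `AutoConfig` with `trust_remote_code=True`):

```python
from configuration_doge import DogeConfig

config = DogeConfig(
    vocab_size=32768,
    hidden_size=256,
    intermediate_size=1024,
    num_hidden_layers=4,
    num_attention_heads=2,
    num_inner_values=2,
    num_inner_value_heads=1,
    num_value_per_head=1,
    inner_values_retrieval_size=128,
    private_expert_retrieval_size=256,
    num_cdmmoe_experts=1024,
    num_cdmmoe_heads=1,
    num_cdmmoe_experts_per_head=2,
)
print(config.model_type)  # "doge"
```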
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "pad_token_id": 0,
+   "transformers_version": "4.46.1"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:00f12450a160fad400ac0e0fc288a5541c54cb4b02b05ed9b42a067f0c7e39f5
+ size 89288368
modeling_doge.py ADDED
@@ -0,0 +1,1144 @@
+ # coding=utf-8
+ # Copyright 2024 Jingze Shi and the HuggingFace Inc. team. All rights reserved.
+ #
+ # This code is based on the Wonderful Matrices paper implementation.
+ #
+ # https://arxiv.org/abs/2407.16958
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """PyTorch Doge model."""
+
+ import math
+ from typing import List, Optional, Tuple, Union
+
+ import torch
+ import torch.nn.functional as F
+ import torch.utils.checkpoint
+ from torch import nn
+
+ from transformers.activations import ACT2FN
+ from transformers.cache_utils import Cache, DynamicCache, StaticCache
+ from transformers.generation import GenerationMixin
+ from transformers.modeling_outputs import (
+     BaseModelOutputWithPast,
+     CausalLMOutputWithPast,
+     SequenceClassifierOutputWithPast,
+ )
+ from transformers.modeling_rope_utils import ROPE_INIT_FUNCTIONS
+ from transformers.modeling_utils import PreTrainedModel
+ from transformers.utils import (
+     add_start_docstrings,
+     add_start_docstrings_to_model_forward,
+     logging,
+     replace_return_docstrings,
+ )
+ from .configuration_doge import DogeConfig
+
+ # einx is optional here: the CDMoE forward below falls back to pure torch ops when it is unavailable
+ try:
+     from einx import add as einx_add
+ except ImportError:
+     einx_add = None
+
+
+ logger = logging.get_logger(__name__)
+
+ _CONFIG_FOR_DOC = "DogeConfig"
+
+
+ class RMSNorm(nn.Module):
+     def __init__(self, hidden_size, eps=1e-6):
+         """
+         RMSNorm is equivalent to T5LayerNorm
+         """
+         super().__init__()
+         self.weight = nn.Parameter(torch.ones(hidden_size))
+         self.variance_epsilon = eps
+
+     def forward(self, hidden_states):
+         input_dtype = hidden_states.dtype
+         hidden_states = hidden_states.to(torch.float32)
+         variance = hidden_states.pow(2).mean(-1, keepdim=True)
+         hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
+         return self.weight * hidden_states.to(input_dtype)
+
+     def extra_repr(self):
+         return f"{tuple(self.weight.shape)}, eps={self.variance_epsilon}"
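For reference, the forward pass above computes, for a hidden vector $x \in \mathbb{R}^d$ with learned weight $w$:

$$
\mathrm{RMSNorm}(x) = w \odot \frac{x}{\sqrt{\frac{1}{d}\sum_{i=1}^{d} x_i^2 + \varepsilon}}
$$

with the mean square accumulated in float32 before casting back to the input dtype.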
+
+
+ class RotaryEmbedding(nn.Module):
+     def __init__(self, config: Optional[DogeConfig] = None):
+         super().__init__()
+         self.rope_kwargs = {}
+
+         if config.rope_scaling is None:
+             self.rope_type = "default"
+         else:
+             # `rope_scaling` is a dict; pick out the sub-variant name ("rope_type", or the legacy "type" key)
+             self.rope_type = config.rope_scaling.get("rope_type", config.rope_scaling.get("type"))
+         self.max_seq_len_cached = config.max_position_embeddings
+         self.original_max_seq_len = config.max_position_embeddings
+         self.base = config.rope_theta
+
+         self.config = config
+         self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
+
+         inv_freq, self.attention_scaling = self.rope_init_fn(self.config, **self.rope_kwargs)
+         self.register_buffer("inv_freq", inv_freq, persistent=False)
+         self.original_inv_freq = self.inv_freq
+
+     def _dynamic_frequency_update(self, position_ids, device):
+         """
+         dynamic RoPE layers should recompute `inv_freq` in the following situations:
+         1 - growing beyond the cached sequence length (allow scaling)
+         2 - the current sequence length is in the original scale (avoid losing precision with small sequences)
+         """
+         seq_len = torch.max(position_ids) + 1
+         if seq_len > self.max_seq_len_cached:  # growth
+             inv_freq, self.attention_scaling = self.rope_init_fn(
+                 self.config, device, seq_len=seq_len, **self.rope_kwargs
+             )
+             self.register_buffer("inv_freq", inv_freq, persistent=False)  # TODO joao: may break with compilation
+             self.max_seq_len_cached = seq_len
+
+         if seq_len < self.original_max_seq_len and self.max_seq_len_cached > self.original_max_seq_len:  # reset
+             self.register_buffer("inv_freq", self.original_inv_freq, persistent=False)
+             self.max_seq_len_cached = self.original_max_seq_len
+
+     @torch.no_grad()
+     def forward(self, x, position_ids):
+         if "dynamic" in self.rope_type:
+             self._dynamic_frequency_update(position_ids, device=x.device)
+
+         # core RoPE block
+         inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1)
+         position_ids_expanded = position_ids[:, None, :].float()
+         device_type = x.device.type
+         device_type = device_type if isinstance(device_type, str) and device_type != "mps" else "cpu"
+         with torch.autocast(device_type=device_type, enabled=False):
+             freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
+             emb = torch.cat((freqs, freqs), dim=-1)
+             cos = emb.cos()
+             sin = emb.sin()
+
+         cos = cos * self.attention_scaling
+         sin = sin * self.attention_scaling
+
+         return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
+
+
+ def rotate_half(x):
+     """
+     Rotates half the hidden dims of the input.
+     """
+     x1 = x[..., : x.shape[-1] // 2]
+     x2 = x[..., x.shape[-1] // 2 :]
+     return torch.cat((-x2, x1), dim=-1)
+
+
+ def apply_QK_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
+     """Applies Rotary Position Embedding to the query and key tensors.
+
+     Args:
+         q (`torch.Tensor`): The query tensor.
+         k (`torch.Tensor`): The key tensor.
+         cos (`torch.Tensor`): The cosine part of the rotary embedding.
+         sin (`torch.Tensor`): The sine part of the rotary embedding.
+         position_ids (`torch.Tensor`, *optional*):
+             Deprecated and unused.
+         unsqueeze_dim (`int`, *optional*, defaults to 1):
+             The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
+             sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
+             that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
+             k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
+             cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
+             the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
+     Returns:
+         `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
+     """
+     cos = cos.unsqueeze(unsqueeze_dim)
+     sin = sin.unsqueeze(unsqueeze_dim)
+     q_embed = (q * cos) + (rotate_half(q) * sin)
+     k_embed = (k * cos) + (rotate_half(k) * sin)
+     return q_embed, k_embed
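In equation form, the rotation applied above is

$$
q' = q \odot \cos\Theta + \mathrm{rotate\_half}(q) \odot \sin\Theta, \qquad
k' = k \odot \cos\Theta + \mathrm{rotate\_half}(k) \odot \sin\Theta,
$$

where $\mathrm{rotate\_half}$ negates and swaps the two halves of the last dimension, so each pair of channels is rotated by its position-dependent angle.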
+
+
+ class DogeInnerFuncAttn(nn.Module):
+     """Inner Function Attention from 'Wonderful Matrices' paper."""
+
+     def __init__(self, config: DogeConfig, layer_idx: Optional[int] = None):
+         super().__init__()
+
+         self.config = config
+         self.layer_idx = layer_idx
+         if layer_idx is None:
+             logger.warning_once(
+                 f"Instantiating {self.__class__.__name__} without passing a `layer_idx` is not recommended and will "
+                 "lead to errors during the forward call if caching is used. Please make sure to provide a `layer_idx` "
+                 "when creating this class."
+             )
+
+         self.hidden_dim = config.hidden_size
+         self.num_attention_heads = config.num_attention_heads
+
+         # for accuracy of attention scores, we do not use GQA
+         self.attention_head_dim = self.hidden_dim // self.num_attention_heads
+         self.num_inner_values = config.num_inner_values
+         self.num_inner_value_heads = config.num_inner_value_heads
+         self.num_value_per_head = config.num_value_per_head
+         self.inner_values_retrieval_dim = config.inner_values_retrieval_size
+
+         # Q and K projections
+         self.q_proj = nn.Linear(
+             self.hidden_dim,
+             self.num_attention_heads * self.attention_head_dim,
+             bias=config.hidden_bias,
+         )
+         self.k_proj = nn.Linear(
+             self.hidden_dim,
+             self.num_attention_heads * self.attention_head_dim,
+             bias=config.hidden_bias,
+         )
+
+         # dynamic mask for the QK^T attention score matrix
+         self.dynamic_mask = nn.Parameter(
+             torch.round(torch.ones(self.num_attention_heads, config.max_position_embeddings))
+         )
+
+         # queries and keys for retrieval V
+         self.v_queries = nn.Linear(
+             self.hidden_dim,
+             self.num_inner_value_heads * self.inner_values_retrieval_dim,
+             bias=config.hidden_bias,
+         )
+         self.v_keys = nn.Parameter(
+             torch.zeros(
+                 self.num_inner_value_heads,
+                 self.inner_values_retrieval_dim,
+                 self.num_inner_values,
+             )
+         )
+
+         # V for inner function
+         self.v_embed = nn.Embedding(
+             self.num_inner_values,
+             self.hidden_dim,
+         )
+
+         self.o_proj = nn.Linear(
+             self.hidden_dim,
+             self.hidden_dim,
+             bias=config.hidden_bias,
+         )
+
+     def _update_causal_mask(
+         self,
+         attention_mask: torch.Tensor = None,
+         input_tensor: torch.Tensor = None,
+         cache_position: torch.Tensor = None,
+         past_key_values: Cache = None,
+         output_attentions: bool = False,
+     ):
+         past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
+         using_static_cache = isinstance(past_key_values, StaticCache)
+
+         dtype, device = input_tensor.dtype, input_tensor.device
+         sequence_length = input_tensor.shape[1]
+         if using_static_cache:
+             target_length = past_key_values.get_max_cache_shape()
+         else:
+             target_length = (
+                 attention_mask.shape[-1]
+                 if isinstance(attention_mask, torch.Tensor)
+                 else past_seen_tokens + sequence_length + 1
+             )
+
+         # in case the provided `attention` mask is 2D, we generate a causal mask here (4D).
+         causal_mask = self._prepare_4d_causal_attention_mask_with_cache_position_and_dynamic_mask(
+             attention_mask=attention_mask,
+             dynamic_mask=self.dynamic_mask,
+             sequence_length=sequence_length,
+             target_length=target_length,
+             dtype=dtype,
+             device=device,
+             cache_position=cache_position,
+             batch_size=input_tensor.shape[0],
+         )
+
+         return causal_mask
+
+     @staticmethod
+     def _prepare_4d_causal_attention_mask_with_cache_position_and_dynamic_mask(
+         attention_mask: torch.Tensor = None,
+         dynamic_mask: torch.Tensor = None,
+         sequence_length: int = None,
+         target_length: int = None,
+         dtype: torch.dtype = None,
+         device: torch.device = None,
+         cache_position: torch.Tensor = None,
+         batch_size: int = None,
+         **kwargs,
+     ):
+         """
+         Creates a causal 4D mask of shape `(batch_size, num_heads, query_length, key_value_length)` from a 2D mask of
+         shape `(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, do nothing.
+
+         Args:
+             attention_mask (`torch.Tensor`):
+                 A 2D attention mask of shape `(batch_size, key_value_length)` or a 4D attention mask of shape
+                 `(batch_size, 1, query_length, key_value_length)`.
+             dynamic_mask (`torch.Tensor`):
+                 A 2D dynamic mask of shape `(num_heads, max_position_embeddings)`.
+             sequence_length (`int`):
+                 The sequence length being processed.
+             target_length (`int`):
+                 The target length: when generating with static cache, the mask should be as long as the static cache,
+                 to account for the 0 padding, the part of the cache that is not filled yet.
+             dtype (`torch.dtype`):
+                 The dtype to use for the 4D attention mask.
+             device (`torch.device`):
+                 The device to place the 4D attention mask on.
+             cache_position (`torch.Tensor`):
+                 Indices depicting the position of the input sequence tokens in the sequence.
+             batch_size (`torch.Tensor`):
+                 Batch size.
+         """
+         if attention_mask is not None and attention_mask.dim() == 4:
+             # In this case we assume that the mask comes already in inverted form and requires no inversion or slicing.
+             causal_mask = attention_mask
+         else:
+             num_heads = 1 if dynamic_mask is None else dynamic_mask.size(0)
+             min_dtype = torch.finfo(dtype).min
+             causal_mask = torch.full(
+                 (sequence_length, target_length),
+                 fill_value=min_dtype,
+                 dtype=dtype,
+                 device=device,
+             )
+             if sequence_length != 1:
+                 causal_mask = torch.triu(causal_mask, diagonal=1)
+             causal_mask *= torch.arange(target_length, device=device) > cache_position.reshape(-1, 1)
+             causal_mask = causal_mask[None, None, :, :].expand(batch_size, num_heads, -1, -1)
+             if attention_mask is not None:
+                 causal_mask = causal_mask.clone()  # copy to contiguous memory for in-place edit
+                 mask_length = attention_mask.shape[-1]
+                 attention_mask = attention_mask[:, None, None, :].expand(-1, num_heads, 1, -1)
+                 if dynamic_mask is not None:
+                     dynamic_mask = dynamic_mask[None, :, None, :mask_length].expand(batch_size, -1, 1, -1)
+                     attention_mask = attention_mask.clone() * dynamic_mask
+
+                 padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask
+                 causal_mask[:, :, :, :mask_length] = causal_mask[:, :, :, :mask_length].masked_fill(
+                     padding_mask == 0, min_dtype
+                 )
+
+         return causal_mask
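As a quick shape check, here is a toy invocation of the static mask builder above (a sketch with made-up sizes; the `from modeling_doge import ...` path assumes you run next to the downloaded file). Note that, unlike the usual transformers helper, the returned mask is per-head because of the learned dynamic mask.

```python
import torch
from modeling_doge import DogeInnerFuncAttn

build_mask = DogeInnerFuncAttn._prepare_4d_causal_attention_mask_with_cache_position_and_dynamic_mask
mask = build_mask(
    attention_mask=torch.ones(2, 6),   # (batch_size, key_value_length)
    dynamic_mask=torch.ones(4, 16),    # (num_heads, max_position_embeddings)
    sequence_length=6,
    target_length=6,
    dtype=torch.float32,
    device=torch.device("cpu"),
    cache_position=torch.arange(6),
    batch_size=2,
)
print(mask.shape)  # torch.Size([2, 4, 6, 6]): (batch, num_heads, query_len, key_len)
```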
+
+     def inner_func(
+         self,
+         hidden_states: torch.Tensor,
+     ) -> torch.Tensor:
+         """
+         Each value can share weights with other values to increase the expressive power
+         """
+         bsz, seq_len, _ = hidden_states.shape
+
+         v_queries = self.v_queries(hidden_states)
+         v_queries = v_queries.view(bsz, seq_len, self.num_inner_value_heads, -1).transpose(1, 2)
+         sim = torch.matmul(v_queries, self.v_keys).transpose(1, 2)
+         v_embed = self.v_embed(sim.topk(k=self.num_value_per_head, dim=-1).indices)
+         v = hidden_states * v_embed.sum(dim=-2).sum(dim=-2)
+         return v
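To make the retrieval above concrete, a self-contained toy with random weights that reproduces the shape flow (all sizes are illustrative): each head scores the inner values, the top-k value embeddings are summed over k and heads, and the result gates the hidden states elementwise.

```python
import torch

bsz, seq_len, hidden = 2, 5, 8
heads, retrieval_dim, num_values, k = 2, 4, 6, 3

hidden_states = torch.randn(bsz, seq_len, hidden)
v_queries = torch.randn(bsz, seq_len, heads, retrieval_dim).transpose(1, 2)  # (b, h, t, d)
v_keys = torch.randn(heads, retrieval_dim, num_values)                       # (h, d, V)
v_embed = torch.nn.Embedding(num_values, hidden)

sim = torch.matmul(v_queries, v_keys).transpose(1, 2)  # (b, t, h, V) similarity per head
topk_idx = sim.topk(k=k, dim=-1).indices               # (b, t, h, k) selected inner values
gate = v_embed(topk_idx).sum(dim=-2).sum(dim=-2)       # (b, t, hidden) summed over k and heads
v = hidden_states * gate                               # elementwise gating, same shape as input
print(v.shape)  # torch.Size([2, 5, 8])
```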
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_value: Optional[Cache] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
+         **kwargs,
+     ) -> Tuple[torch.Tensor, Optional[Cache]]:
+         bsz, seq_len, _ = hidden_states.shape
+
+         query_states = self.q_proj(hidden_states)
+         key_states = self.k_proj(hidden_states)
+         value_states = self.inner_func(hidden_states)
+
+         query_states = query_states.view(bsz, seq_len, self.num_attention_heads, self.attention_head_dim).transpose(
+             1, 2
+         )
+         key_states = key_states.view(bsz, seq_len, self.num_attention_heads, self.attention_head_dim).transpose(
+             1, 2
+         )
+         value_states = value_states.view(bsz, seq_len, self.num_attention_heads, self.attention_head_dim).transpose(
+             1, 2
+         )
+
+         cos, sin = position_embeddings
+         query_states, key_states = apply_QK_rotary_pos_emb(query_states, key_states, cos, sin)
+
+         if past_key_value is not None:
+             # sin and cos are specific to RoPE models; cache_position needed for the static cache
+             cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
+             key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
+
+         # compute attention scores matrix
+         attn_weights = torch.matmul(query_states, key_states.transpose(-1, -2)) / math.sqrt(self.attention_head_dim)
+
+         # add mask to attention scores
+         causal_mask = self._update_causal_mask(attention_mask, hidden_states, cache_position, past_key_value)
+         causal_mask = causal_mask[:, :, :, : key_states.shape[-2]]
+         attn_weights = attn_weights + causal_mask
+
+         # upcast attention scores to fp32
+         attn_weights = F.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
+
+         # apply attention scores to value states
+         attn_output = torch.matmul(attn_weights, value_states)
+
+         attn_output = attn_output.transpose(1, 2).contiguous()
+         attn_output = attn_output.reshape(bsz, seq_len, -1)
+         attn_output = self.o_proj(attn_output)
+
+         return attn_output, past_key_value
+
+
+ class DogeCDMoE(nn.Module):
+     """Cross-Domain Mixture of Experts from 'Wonderful Matrices' paper."""
+
+     def __init__(self, config: DogeConfig):
+         super().__init__()
+         self.hidden_dim = config.hidden_size
+         self.act_fn = ACT2FN[config.hidden_act]
+         self.intermediate_dim = config.intermediate_size
+
+         self.private_expert_retrieval_dim = config.private_expert_retrieval_size
+         self.num_cdmmoe_experts = config.num_cdmmoe_experts
+         self.num_cdmmoe_heads = config.num_cdmmoe_heads
+         self.num_cdmmoe_experts_per_head = config.num_cdmmoe_experts_per_head
+
+         # cross domain
+         self.up_proj = nn.Linear(
+             self.hidden_dim,
+             self.intermediate_dim,
+             bias=config.hidden_bias,
+         )
+         self.down_proj = nn.Linear(
+             self.intermediate_dim,
+             self.hidden_dim,
+             bias=config.hidden_bias,
+         )
+
+         # queries and keys for retrieval private experts
+         self.queries = nn.Linear(
+             self.hidden_dim,
+             self.num_cdmmoe_heads * self.private_expert_retrieval_dim,
+             bias=False,
+         )
+         self.num_keys = int(math.sqrt(self.num_cdmmoe_experts))
+         self.keys = nn.Parameter(
+             torch.zeros(
+                 self.num_cdmmoe_heads,
+                 self.num_keys,
+                 2,
+                 self.private_expert_retrieval_dim // 2,
+             )
+         )
+
+         # private experts
+         self.down_embed = nn.Embedding(
+             self.num_cdmmoe_experts,
+             self.hidden_dim,
+         )
+         self.up_embed = nn.Embedding(
+             self.num_cdmmoe_experts,
+             self.hidden_dim,
+         )
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         **kwargs,
+     ) -> Tuple[torch.Tensor, torch.Tensor]:
+         bsz, seq_len, _ = hidden_states.shape
+
+         # get similarity with queries and keys
+         queries = self.queries(hidden_states)
+         queries = queries.view(bsz, seq_len, 2, self.num_cdmmoe_heads, -1).permute(2, 0, 1, 3, 4)
+         sim = torch.einsum("p b t h n, h k p n -> p b t h k", queries, self.keys)
+
+         # get expert scores and indices with the highest similarity
+         (scores_x, scores_y), (indices_x, indices_y) = sim.topk(self.num_cdmmoe_experts_per_head, dim=-1)
+         if einx_add is not None:
+             all_scores = einx_add("... i, ... j -> ... (i j)", scores_x, scores_y)
+             all_indices = einx_add("... i, ... j -> ... (i j)", indices_x * self.num_keys, indices_y)
+         else:
+             all_scores = scores_x.unsqueeze(-1) + scores_y.unsqueeze(-2)
+             all_scores = all_scores.view(*scores_x.shape[:-1], -1)
+             all_indices = (indices_x.unsqueeze(-1) * self.num_keys) + indices_y.unsqueeze(-2)
+             all_indices = all_indices.view(*indices_x.shape[:-1], -1)
+         scores, pk_indices = all_scores.topk(self.num_cdmmoe_experts_per_head, dim=-1)
+         indices = all_indices.gather(-1, pk_indices)
+
+         # get related expert embeddings based on indices
+         down_embed = self.down_embed(indices)
+         up_embed = self.up_embed(indices)
+
+         # efficient retrieval of private experts
+         experts_weights = self.act_fn(
+             torch.einsum("b t d, b t h k d -> b t h k", hidden_states, down_embed) * scores.softmax(dim=-1)
+         )
+         experts_states = torch.einsum("b t h k, b t h k d -> b t d", experts_weights, up_embed)
+
+         # mix with shared parameters of cross domain
+         hidden_states = self.down_proj(self.act_fn(self.up_proj(hidden_states)))
+         hidden_states = hidden_states + experts_states
+         return hidden_states
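The retrieval above is a product-key lookup: two top-k searches over `num_keys = sqrt(num_cdmmoe_experts)` half-keys are composed into a top-k over all experts. A self-contained sketch of that composition for a single token and head, mirroring the pure-torch fallback path (sizes are illustrative):

```python
import torch

num_keys, k = 4, 2                                    # num_experts = num_keys ** 2 = 16
scores_x, indices_x = torch.randn(num_keys).topk(k)   # best half-scores on the first axis
scores_y, indices_y = torch.randn(num_keys).topk(k)   # best half-scores on the second axis

# combine into k * k candidate experts: score is the sum, index is x * num_keys + y
all_scores = (scores_x.unsqueeze(-1) + scores_y.unsqueeze(-2)).view(-1)
all_indices = (indices_x.unsqueeze(-1) * num_keys + indices_y.unsqueeze(-2)).view(-1)

scores, pk_indices = all_scores.topk(k)
expert_indices = all_indices.gather(-1, pk_indices)   # indices into the full expert table
print(expert_indices)  # k expert ids drawn from range(16)
```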
+
+
+ class DogeDecoderLayer(nn.Module):
+     def __init__(self, config: DogeConfig, layer_idx: Optional[int] = None):
+         super().__init__()
+         self.hidden_dropout = config.hidden_dropout
+
+         self.in_attn_layernorm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.attn = DogeInnerFuncAttn(config, layer_idx)
+         self.in_ff_layernorm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.feed_forward = DogeCDMoE(config)
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_value: Optional[Cache] = None,
+         output_attentions: Optional[bool] = False,
+         use_cache: Optional[bool] = False,
+         cache_position: Optional[torch.LongTensor] = None,
+         position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
+         **kwargs,
+     ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
+         """
+         Args:
+             hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
+             attention_mask (`torch.FloatTensor`, *optional*):
+                 attention mask of size `(batch_size, sequence_length)` if flash attention is used or `(batch_size, 1,
+                 query_sequence_length, key_sequence_length)` if default attention is used.
+             output_attentions (`bool`, *optional*):
+                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
+                 returned tensors for more detail.
+             use_cache (`bool`, *optional*):
+                 If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
+                 (see `past_key_values`).
+             past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
+             cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*):
+                 Indices depicting the position of the input sequence tokens in the sequence
+             position_embeddings (`Tuple[torch.FloatTensor, torch.FloatTensor]`, *optional*):
+                 Tuple containing the cosine and sine positional embeddings of shape `(batch_size, seq_len, head_dim)`,
+                 with `head_dim` being the embedding dimension of each attention head.
+             kwargs (`dict`, *optional*):
+                 Arbitrary kwargs to be ignored, used for FSDP and other methods that inject code into the model
+         """
+
+         # sequence transformation
+         residual = hidden_states
+         hidden_states = self.in_attn_layernorm(hidden_states)
+         hidden_states, present_key_value = self.attn(
+             hidden_states=hidden_states,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_value=past_key_value,
+             cache_position=cache_position,
+             position_embeddings=position_embeddings,
+             **kwargs,
+         )
+         self_attn_weights = None
+         hidden_states = F.dropout(hidden_states, p=self.hidden_dropout, training=self.training)
+         hidden_states = residual + hidden_states
+
+         # state transformation
+         residual = hidden_states
+         hidden_states = self.in_ff_layernorm(hidden_states)
+         hidden_states = self.feed_forward(hidden_states)
+         hidden_states = F.dropout(hidden_states, p=self.hidden_dropout, training=self.training)
+         hidden_states = residual + hidden_states
+
+         outputs = (hidden_states,)
+
+         if output_attentions:
+             outputs += (self_attn_weights,)
+
+         if use_cache:
+             outputs += (present_key_value,)
+
+         return outputs
+
+
+ @add_start_docstrings("The bare Doge Model outputting raw hidden-states without any specific head on top.")
+ class DogePreTrainedModel(PreTrainedModel):
+     config_class = DogeConfig
+     base_model_prefix = "model"
+     supports_gradient_checkpointing = True
+     _no_split_modules = ["DogeDecoderLayer"]
+     _skip_keys_device_placement = ["past_key_values"]
+     _supports_cache_class = True
+     _supports_quantized_cache = True
+     _supports_static_cache = True
+
+     def _init_weights(self, module):
+         std = self.config.initializer_range
+         if isinstance(module, (nn.Linear)):
+             module.weight.data.normal_(mean=0.0, std=std)
+             if module.bias is not None:
+                 module.bias.data.zero_()
+         elif isinstance(module, nn.Embedding):
+             module.weight.data.normal_(mean=0.0, std=std)
+             if module.padding_idx is not None:
+                 module.weight.data[module.padding_idx].zero_()
+
+
+ DOGE_INPUTS_DOCSTRING = r"""
+     Args:
+         input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
+             Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
+             it.
+
+             Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
+             [`PreTrainedTokenizer.__call__`] for details.
+
+             [What are input IDs?](../glossary#input-ids)
+         attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
+             Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
+
+             - 1 for tokens that are **not masked**,
+             - 0 for tokens that are **masked**.
+
+             [What are attention masks?](../glossary#attention-mask)
+
+             Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
+             [`PreTrainedTokenizer.__call__`] for details.
+
+             If `past_key_values` is used, optionally only the last `input_ids` have to be input (see
+             `past_key_values`).
+
+             If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
+             and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
+             information on the default strategy.
+
+             - 1 indicates the head is **not masked**,
+             - 0 indicates the head is **masked**.
+         position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+             Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
+             config.n_positions - 1]`.
+
+             [What are position IDs?](../glossary#position-ids)
+         past_key_values (`Cache` or `tuple(tuple(torch.FloatTensor))`, *optional*):
+             Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
+             blocks) that can be used to speed up sequential decoding. This typically consists in the `past_key_values`
+             returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`.
+
+             Two formats are allowed:
+             - a [`~cache_utils.Cache`] instance, see our
+               [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache);
+             - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of
+               shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`). This is also known as the legacy
+               cache format.
+
+             The model will output the same cache format that is fed as input. If no `past_key_values` are passed, the
+             legacy cache format will be returned.
+
+             If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't
+             have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `input_ids`
+             of shape `(batch_size, sequence_length)`.
+         inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
+             Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
+             is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
+             model's internal embedding lookup matrix.
+         use_cache (`bool`, *optional*):
+             If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
+             `past_key_values`).
+         output_attentions (`bool`, *optional*):
+             Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
+             tensors for more detail.
+         output_hidden_states (`bool`, *optional*):
+             Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
+             more detail.
+         return_dict (`bool`, *optional*):
+             Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
+         cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*):
+             Indices depicting the position of the input sequence tokens in the sequence. Contrarily to `position_ids`,
+             this tensor is not affected by padding. It is used to update the cache in the correct position and to infer
+             the complete sequence length.
+ """
+
+
+ @add_start_docstrings("The bare Doge Model outputting raw hidden-states without any specific head on top.")
+ class DogeModel(DogePreTrainedModel):
+     def __init__(self, config: DogeConfig):
+         super().__init__(config)
+         self.config = config
+         self.padding_idx = config.pad_token_id
+         self.vocab_size = config.vocab_size
+
+         self.word_embed = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
+         self.rotary_emb = RotaryEmbedding(config)
+         self.layers = nn.ModuleList(
+             [DogeDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
+         )
+         self.final_layernorm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.gradient_checkpointing = False
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.word_embed
+
+     def set_input_embeddings(self, value):
+         self.word_embed = value
+
+     @add_start_docstrings_to_model_forward(DOGE_INPUTS_DOCSTRING)
+     def forward(
+         self,
+         input_ids: torch.LongTensor = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+     ) -> Union[Tuple, BaseModelOutputWithPast]:
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         use_cache = use_cache if use_cache is not None else self.config.use_cache
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         if (input_ids is None) ^ (inputs_embeds is not None):
+             raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
+
+         if self.gradient_checkpointing and self.training and use_cache:
+             logger.warning_once(
+                 "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
+             )
+             use_cache = False
+
+         if inputs_embeds is None:
+             inputs_embeds = self.word_embed(input_ids)
+
+         # kept for BC (non `Cache` `past_key_values` inputs)
+         return_legacy_cache = False
+         if use_cache and not isinstance(past_key_values, Cache):
+             return_legacy_cache = True
+             if past_key_values is None:
+                 past_key_values = DynamicCache()
+             else:
+                 past_key_values = DynamicCache.from_legacy_cache(past_key_values)
+                 logger.warning_once(
+                     "We detected that you are passing `past_key_values` as a tuple of tuples. This is deprecated and "
+                     "will be removed in v4.47. Please convert your cache or use an appropriate `Cache` class "
+                     "(https://huggingface.co/docs/transformers/kv_cache#legacy-cache-format)"
+                 )
+
+         if cache_position is None:
+             past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
+             cache_position = torch.arange(
+                 past_seen_tokens,
+                 past_seen_tokens + inputs_embeds.shape[1],
+                 device=inputs_embeds.device,
+             )
+         if position_ids is None:
+             position_ids = cache_position.unsqueeze(0)
+
+         # the causal mask is built (and combined with the per-head dynamic mask) inside each attention
+         # layer, so it is not precomputed here
+         hidden_states = inputs_embeds
+
+         # create position embeddings to be shared across the decoder layers
+         position_embeddings = self.rotary_emb(hidden_states, position_ids)
+
+         # decoder layers
+         all_hidden_states = () if output_hidden_states else None
+         all_self_attns = () if output_attentions else None
+         next_decoder_cache = None
+
+         for decoder_layer in self.layers:
+             if output_hidden_states:
+                 all_hidden_states += (hidden_states,)
+
+             if self.gradient_checkpointing and self.training:
+                 layer_outputs = self._gradient_checkpointing_func(
+                     decoder_layer.__call__,
+                     hidden_states,
+                     attention_mask,
+                     position_ids,
+                     past_key_values,
+                     output_attentions,
+                     use_cache,
+                     cache_position,
+                     position_embeddings,
+                 )
+             else:
+                 layer_outputs = decoder_layer(
+                     hidden_states,
+                     attention_mask=attention_mask,
+                     position_ids=position_ids,
+                     past_key_value=past_key_values,
+                     output_attentions=output_attentions,
+                     use_cache=use_cache,
+                     cache_position=cache_position,
+                     position_embeddings=position_embeddings,
+                 )
+
+             hidden_states = layer_outputs[0]
+
+             if use_cache:
+                 next_decoder_cache = layer_outputs[2 if output_attentions else 1]
+
+             if output_attentions:
+                 all_self_attns += (layer_outputs[1],)
+
+         hidden_states = self.final_layernorm(hidden_states)
+
+         # add hidden states from the last decoder layer
+         if output_hidden_states:
+             all_hidden_states += (hidden_states,)
+
+         next_cache = next_decoder_cache if use_cache else None
+         if return_legacy_cache:
+             next_cache = next_cache.to_legacy_cache()
+
+         if not return_dict:
+             return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
+
+         return BaseModelOutputWithPast(
+             last_hidden_state=hidden_states,
+             past_key_values=next_cache,
+             hidden_states=all_hidden_states,
+             attentions=all_self_attns,
+         )
835
+
+    """Move to DogeInnerFuncAttn"""
+    # def _update_causal_mask(
+    #     self,
+    #     attention_mask: torch.Tensor,
+    #     input_tensor: torch.Tensor,
+    #     cache_position: torch.Tensor,
+    #     past_key_values: Cache,
+    #     output_attentions: bool,
+    # ):
+    #     # For SDPA, when possible, we will rely on its `is_causal` argument instead of its `attn_mask` argument, in
+    #     # order to dispatch on Flash Attention 2. This feature is not compatible with static cache, as SDPA will fail
+    #     # to infer the attention mask.
+    #     past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
+    #     using_static_cache = isinstance(past_key_values, StaticCache)
+
+    #     dtype, device = input_tensor.dtype, input_tensor.device
+    #     sequence_length = input_tensor.shape[1]
+    #     if using_static_cache:
+    #         target_length = past_key_values.get_max_cache_shape()
+    #     else:
+    #         target_length = (
+    #             attention_mask.shape[-1]
+    #             if isinstance(attention_mask, torch.Tensor)
+    #             else past_seen_tokens + sequence_length + 1
+    #         )
+
+    #     # In case the provided `attention_mask` is 2D, we generate a causal mask here (4D).
+    #     causal_mask = self._prepare_4d_causal_attention_mask_with_cache_position(
+    #         attention_mask,
+    #         sequence_length=sequence_length,
+    #         target_length=target_length,
+    #         dtype=dtype,
+    #         device=device,
+    #         cache_position=cache_position,
+    #         batch_size=input_tensor.shape[0],
+    #     )
+
+    #     return causal_mask
+
+    # @staticmethod
+    # def _prepare_4d_causal_attention_mask_with_cache_position(
+    #     attention_mask: torch.Tensor,
+    #     sequence_length: int,
+    #     target_length: int,
+    #     dtype: torch.dtype,
+    #     device: torch.device,
+    #     cache_position: torch.Tensor,
+    #     batch_size: int,
+    #     **kwargs,
+    # ):
+    #     """
+    #     Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape
+    #     `(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, does nothing.
+
+    #     Args:
+    #         attention_mask (`torch.Tensor`):
+    #             A 2D attention mask of shape `(batch_size, key_value_length)` or a 4D attention mask of shape
+    #             `(batch_size, 1, query_length, key_value_length)`.
+    #         sequence_length (`int`):
+    #             The sequence length being processed.
+    #         target_length (`int`):
+    #             The target length: when generating with static cache, the mask should be as long as the static cache,
+    #             to account for the 0 padding, i.e. the part of the cache that is not filled yet.
+    #         dtype (`torch.dtype`):
+    #             The dtype to use for the 4D attention mask.
+    #         device (`torch.device`):
+    #             The device to place the 4D attention mask on.
+    #         cache_position (`torch.Tensor`):
+    #             Indices depicting the position of the input sequence tokens in the sequence.
+    #         batch_size (`int`):
+    #             Batch size.
+    #     """
+    #     if attention_mask is not None and attention_mask.dim() == 4:
+    #         # In this case we assume that the mask comes already in inverted form and requires no inversion or slicing.
+    #         causal_mask = attention_mask
+    #     else:
+    #         min_dtype = torch.finfo(dtype).min
+    #         causal_mask = torch.full(
+    #             (sequence_length, target_length), fill_value=min_dtype, dtype=dtype, device=device
+    #         )
+    #         if sequence_length != 1:
+    #             causal_mask = torch.triu(causal_mask, diagonal=1)
+    #         causal_mask *= torch.arange(target_length, device=device) > cache_position.reshape(-1, 1)
+    #         causal_mask = causal_mask[None, None, :, :].expand(batch_size, 1, -1, -1)
+    #         if attention_mask is not None:
+    #             causal_mask = causal_mask.clone()  # copy to contiguous memory for in-place edit
+    #             mask_length = attention_mask.shape[-1]
+    #             padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :]
+    #             padding_mask = padding_mask == 0
+    #             causal_mask[:, :, :, :mask_length] = causal_mask[:, :, :, :mask_length].masked_fill(
+    #                 padding_mask, min_dtype
+    #             )
+
+    #     return causal_mask
+
+
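To make the commented-out mask construction concrete, here is a standalone rendition of the 2D-to-4D expansion for a tiny case (sizes are illustrative; 0 means "attend", a large negative number means "masked"):

```python
import torch

dtype = torch.float32
min_dtype = torch.finfo(dtype).min
sequence_length, target_length = 3, 3
cache_position = torch.arange(sequence_length)

# start fully masked, then open the lower triangle (the causal part)
causal_mask = torch.full((sequence_length, target_length), fill_value=min_dtype, dtype=dtype)
causal_mask = torch.triu(causal_mask, diagonal=1)
causal_mask *= torch.arange(target_length) > cache_position.reshape(-1, 1)
causal_mask = causal_mask[None, None, :, :]  # -> (1, 1, 3, 3)

# fold in a 2D padding mask that hides the last (padding) position
attention_mask = torch.tensor([[1, 1, 0]], dtype=dtype)
padding_mask = (causal_mask[:, :, :, :3] + attention_mask[:, None, None, :]) == 0
causal_mask = causal_mask.masked_fill(padding_mask, min_dtype)
print((causal_mask == 0).squeeze())  # True where attention is allowed
```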
+class DogeForCausalLM(DogePreTrainedModel, GenerationMixin):
+    _tied_weights_keys = ["lm_head.weight"]
+
+    def __init__(self, config: DogeConfig):
+        super().__init__(config)
+        self.config = config
+        self.model = DogeModel(config)
+        self.vocab_size = config.vocab_size
+        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+        # Initialize weights and apply final processing
+        self.post_init()
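`_tied_weights_keys` lets `post_init()` tie the output projection to the input embedding when `config.tie_word_embeddings` is enabled. A generic sketch of what that tying amounts to (stand-in modules, not code from this commit):

```python
import torch.nn as nn

embed = nn.Embedding(100, 16)              # stands in for model.model.word_embed
lm_head = nn.Linear(16, 100, bias=False)   # stands in for model.lm_head
lm_head.weight = embed.weight              # tying: the two share one parameter
assert lm_head.weight.data_ptr() == embed.weight.data_ptr()
```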
+
945
+ def get_input_embeddings(self):
946
+ return self.model.word_embed
947
+
948
+ def set_input_embeddings(self, value):
949
+ self.model.word_embed = value
950
+
951
+ def get_output_embeddings(self):
952
+ return self.lm_head
953
+
954
+ def set_output_embeddings(self, new_embeddings):
955
+ self.lm_head = new_embeddings
956
+
957
+ def set_decoder(self, decoder):
958
+ self.model = decoder
959
+
960
+ def get_decoder(self):
961
+ return self.model
962
+
963
+ @add_start_docstrings_to_model_forward(DOGE_INPUTS_DOCSTRING)
964
+ @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
965
+ def forward(
966
+ self,
967
+ input_ids: torch.LongTensor = None,
968
+ attention_mask: Optional[torch.Tensor] = None,
969
+ position_ids: Optional[torch.LongTensor] = None,
970
+ past_key_values: Optional[torch.Tensor] = None,
971
+ inputs_embeds: Optional[torch.FloatTensor] = None,
972
+ labels: Optional[torch.LongTensor] = None,
973
+ use_cache: Optional[bool] = None,
974
+ output_attentions: Optional[bool] = None,
975
+ output_hidden_states: Optional[bool] = None,
976
+ return_dict: Optional[bool] = None,
977
+ cache_position: Optional[torch.LongTensor] = None,
978
+ num_logits_to_keep: int = 0,
979
+ **loss_kwargs,
980
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
981
+ r"""
982
+ Args:
983
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
984
+ Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
985
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
986
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
987
+
988
+ num_logits_to_keep (`int`, *optional*):
989
+ Calculate logits for the last `num_logits_to_keep` tokens. If `0`, calculate logits for all
990
+ `input_ids` (special case). Only last token logits are needed for generation, and calculating them only for that
991
+ token can save memory, which becomes pretty significant for long sequences or large vocabulary size.
992
+
993
+ Returns:
994
+ """
995
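A sketch of the memory effect described above, using made-up shapes: with `num_logits_to_keep=1`, the `(batch, seq_len, vocab)` logits tensor collapses to `(batch, 1, vocab)`.

```python
import torch
import torch.nn as nn

hidden_states = torch.randn(2, 1024, 64)    # (batch, seq_len, hidden)
lm_head = nn.Linear(64, 32000, bias=False)  # hidden -> vocab

num_logits_to_keep = 1
logits = lm_head(hidden_states[:, -num_logits_to_keep:, :])
print(logits.shape)  # torch.Size([2, 1, 32000]) instead of [2, 1024, 32000]

# note: when num_logits_to_keep == 0, the slice `-0:` selects every position
```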
+        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+        output_hidden_states = (
+            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+        )
+        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+        # decoder output consists of (dec_features, layer_state, dec_hidden, dec_attn)
+        outputs = self.model(
+            input_ids=input_ids,
+            attention_mask=attention_mask,
+            position_ids=position_ids,
+            past_key_values=past_key_values,
+            inputs_embeds=inputs_embeds,
+            use_cache=use_cache,
+            output_attentions=output_attentions,
+            output_hidden_states=output_hidden_states,
+            return_dict=return_dict,
+            cache_position=cache_position,
+        )
+
+        hidden_states = outputs[0]
+
+        # only compute necessary logits, and do not upcast them to float if we are not computing the loss
+        logits = self.lm_head(hidden_states[:, -num_logits_to_keep:, :])
+
+        loss = None
+        if labels is not None:
+            loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.vocab_size, **loss_kwargs)
+
+        if not return_dict:
+            output = (logits,) + outputs[1:]
+            return (loss,) + output if loss is not None else output
+
+        return CausalLMOutputWithPast(
+            loss=loss,
+            logits=logits,
+            past_key_values=outputs.past_key_values,
+            hidden_states=outputs.hidden_states,
+            attentions=outputs.attentions,
+        )
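Since this class inherits from `GenerationMixin`, the usual `generate` workflow applies. A hedged end-to-end sketch (the repository id is a placeholder; a custom-code checkpoint like this one needs `trust_remote_code=True`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/doge-checkpoint"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

inputs = tokenizer("Hello, my dog is", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```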
+
+
+@add_start_docstrings(
+    """
+    The Doge Model transformer with a sequence classification head on top (linear layer).
+
+    [`DogeForSequenceClassification`] uses the last token in order to do the classification, as other causal models
+    (e.g. GPT-2) do.
+
+    Since it does classification on the last token, it needs to know the position of the last token. If a
+    `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row.
+    If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. It does the same when
+    `inputs_embeds` are passed instead of `input_ids`, since it cannot infer the padding tokens in that case.
+    """
+)
+class DogeForSequenceClassification(DogePreTrainedModel):
+    def __init__(self, config: DogeConfig):
+        super().__init__(config)
+        self.config = config
+        self.num_labels = config.num_labels
+
+        self.model = DogeModel(config)
+        self.classifier = nn.Linear(config.hidden_size, self.num_labels, bias=False)
+
+        # Initialize weights and apply final processing
+        self.init_weights()
+
+    def get_input_embeddings(self):
+        return self.model.word_embed
+
+    def set_input_embeddings(self, value):
+        self.model.word_embed = value
+
+    @add_start_docstrings_to_model_forward(DOGE_INPUTS_DOCSTRING)
+    def forward(
+        self,
+        input_ids: Optional[torch.LongTensor] = None,
+        attention_mask: Optional[torch.Tensor] = None,
+        position_ids: Optional[torch.LongTensor] = None,
+        past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+        inputs_embeds: Optional[torch.FloatTensor] = None,
+        labels: Optional[torch.LongTensor] = None,
+        use_cache: Optional[bool] = None,
+        output_attentions: Optional[bool] = None,
+        output_hidden_states: Optional[bool] = None,
+        return_dict: Optional[bool] = None,
+    ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
+        r"""
+        labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
+            Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
+            config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss); if
+            `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
+        """
+        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+        outputs = self.model(
+            input_ids=input_ids,
+            attention_mask=attention_mask,
+            position_ids=position_ids,
+            past_key_values=past_key_values,
+            inputs_embeds=inputs_embeds,
+            use_cache=use_cache,
+            output_attentions=output_attentions,
+            output_hidden_states=output_hidden_states,
+            return_dict=return_dict,
+        )
+        hidden_states = outputs[0]
+        logits = self.classifier(hidden_states)
+
+        if input_ids is not None:
+            batch_size = input_ids.shape[0]
+        else:
+            batch_size = inputs_embeds.shape[0]
+
+        if self.config.pad_token_id is None and batch_size != 1:
+            raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
+        if self.config.pad_token_id is None:
+            sequence_lengths = -1
+        else:
+            if input_ids is not None:
+                # if no pad token is found, use modulo instead of reverse indexing for ONNX compatibility
+                sequence_lengths = torch.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1
+                sequence_lengths = sequence_lengths % input_ids.shape[-1]
+                sequence_lengths = sequence_lengths.to(logits.device)
+            else:
+                sequence_lengths = -1
+
+        pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]
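A worked example of the index arithmetic above, under the assumption that `pad_token_id == 0` (the token ids are made up):

```python
import torch

pad_token_id = 0
input_ids = torch.tensor([
    [5, 6, 7, 0, 0],   # right-padded row
    [5, 6, 7, 8, 9],   # row with no padding at all
])

# argmax finds the FIRST pad position; minus one gives the last real token
sequence_lengths = torch.eq(input_ids, pad_token_id).int().argmax(-1) - 1
# with no pad present, argmax returns 0, so -1 wraps to the last position
sequence_lengths = sequence_lengths % input_ids.shape[-1]
print(sequence_lengths)  # tensor([2, 4])
```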
+
+        loss = None
+        if labels is not None:
+            loss = self.loss_function(
+                logits=logits,
+                labels=labels,
+                pooled_logits=pooled_logits,
+                config=self.config,
+            )
+
+        if not return_dict:
+            output = (pooled_logits,) + outputs[1:]
+            return ((loss,) + output) if loss is not None else output
+
+        return SequenceClassifierOutputWithPast(
+            loss=loss,
+            logits=pooled_logits,
+            past_key_values=outputs.past_key_values,
+            hidden_states=outputs.hidden_states,
+            attentions=outputs.attentions,
+        )
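And a matching hedged usage sketch for the classification head (the repository id is again a placeholder, `num_labels` is illustrative, and the snippet assumes the repo maps this class in its `auto_map`):

```python
import torch
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer

repo_id = "your-namespace/doge-checkpoint"  # hypothetical repo id
config = AutoConfig.from_pretrained(repo_id, num_labels=2, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(
    repo_id, config=config, trust_remote_code=True
)

inputs = tokenizer("such wow, very classify", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(-1))  # predicted class id
```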