jonathanjordan21 committed
Commit aa152b7 · verified · 1 Parent(s): 28e442f

Create modeling_qwen2_nomic_vision.py

Files changed (1)
  1. modeling_qwen2_nomic_vision.py +1509 -0
modeling_qwen2_nomic_vision.py ADDED
@@ -0,0 +1,1509 @@
1
+ # coding=utf-8
2
+ # Copyright 2024 The Qwen team, Alibaba Group and the HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
5
+ # and OPT implementations in this library. It has been modified from its
6
+ # original forms to accommodate minor architectural differences compared
7
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
8
+ #
9
+ # Licensed under the Apache License, Version 2.0 (the "License");
10
+ # you may not use this file except in compliance with the License.
11
+ # You may obtain a copy of the License at
12
+ #
13
+ # http://www.apache.org/licenses/LICENSE-2.0
14
+ #
15
+ # Unless required by applicable law or agreed to in writing, software
16
+ # distributed under the License is distributed on an "AS IS" BASIS,
17
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
18
+ # See the License for the specific language governing permissions and
19
+ # limitations under the License.
20
+ """PyTorch Qwen2 model."""
21
+
22
+ import math
23
+ from typing import List, Optional, Tuple, Union
24
+
25
+ import torch
26
+ import torch.utils.checkpoint
27
+ from torch import nn
28
+ from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
29
+
30
+ from transformers.activations import ACT2FN
31
+ from transformers.cache_utils import Cache, DynamicCache, SlidingWindowCache, StaticCache
32
+ from transformers.generation import GenerationMixin
33
+ from transformers.modeling_attn_mask_utils import AttentionMaskConverter
34
+ from transformers.modeling_outputs import (
35
+ BaseModelOutputWithPast,
36
+ CausalLMOutputWithPast,
37
+ QuestionAnsweringModelOutput,
38
+ SequenceClassifierOutputWithPast,
39
+ TokenClassifierOutput,
40
+ )
41
+ from transformers.modeling_rope_utils import ROPE_INIT_FUNCTIONS
42
+ from transformers.modeling_utils import PreTrainedModel
43
+ from transformers.utils import (
44
+ add_code_sample_docstrings,
45
+ add_start_docstrings,
46
+ add_start_docstrings_to_model_forward,
47
+ is_flash_attn_2_available,
48
+ is_flash_attn_greater_or_equal_2_10,
49
+ logging,
50
+ replace_return_docstrings,
51
+ )
52
+ from transformers.models.qwen2.configuration_qwen2 import Qwen2Config
53
+ from transformers import AutoModel, AutoImageProcessor
54
+
55
+ if is_flash_attn_2_available():
56
+ from transformers.modeling_flash_attention_utils import _flash_attention_forward
57
+
58
+
59
+ logger = logging.get_logger(__name__)
60
+
61
+
62
+ _CHECKPOINT_FOR_DOC = "Qwen/Qwen2-7B"
63
+ _CONFIG_FOR_DOC = "Qwen2Config"
64
+
65
+
66
+ # Copied from transformers.models.llama.modeling_llama.LlamaRMSNorm with Llama->Qwen2
67
+ class Qwen2RMSNorm(nn.Module):
68
+ def __init__(self, hidden_size, eps=1e-6):
69
+ """
70
+ Qwen2RMSNorm is equivalent to T5LayerNorm
71
+ """
72
+ super().__init__()
73
+ self.weight = nn.Parameter(torch.ones(hidden_size))
74
+ self.variance_epsilon = eps
75
+
76
+ def forward(self, hidden_states):
77
+ input_dtype = hidden_states.dtype
78
+ hidden_states = hidden_states.to(torch.float32)
79
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
80
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
81
+ return self.weight * hidden_states.to(input_dtype)
82
+
83
+ def extra_repr(self):
84
+ return f"{tuple(self.weight.shape)}, eps={self.variance_epsilon}"
85
+
86
+
87
+ # Copied from transformers.models.llama.modeling_llama.LlamaRotaryEmbedding with Llama->Qwen2
88
+ class Qwen2RotaryEmbedding(nn.Module):
89
+ def __init__(
90
+ self,
91
+ dim=None,
92
+ max_position_embeddings=2048,
93
+ base=10000,
94
+ device=None,
95
+ scaling_factor=1.0,
96
+ rope_type="default",
97
+ config: Optional[Qwen2Config] = None,
98
+ ):
99
+ super().__init__()
100
+ # TODO (joao): remove the `if` below, only used for BC
101
+ self.rope_kwargs = {}
102
+ if config is None:
103
+ logger.warning_once(
104
+ "`Qwen2RotaryEmbedding` can now be fully parameterized by passing the model config through the "
105
+ "`config` argument. All other arguments will be removed in v4.46"
106
+ )
107
+ self.rope_kwargs = {
108
+ "rope_type": rope_type,
109
+ "factor": scaling_factor,
110
+ "dim": dim,
111
+ "base": base,
112
+ "max_position_embeddings": max_position_embeddings,
113
+ }
114
+ self.rope_type = rope_type
115
+ self.max_seq_len_cached = max_position_embeddings
116
+ self.original_max_seq_len = max_position_embeddings
117
+ else:
118
+ # BC: "rope_type" was originally "type"
119
+ if config.rope_scaling is not None:
120
+ self.rope_type = config.rope_scaling.get("rope_type", config.rope_scaling.get("type"))
121
+ else:
122
+ self.rope_type = "default"
123
+ self.max_seq_len_cached = config.max_position_embeddings
124
+ self.original_max_seq_len = config.max_position_embeddings
125
+
126
+ self.config = config
127
+ self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
128
+
129
+ inv_freq, self.attention_scaling = self.rope_init_fn(self.config, device, **self.rope_kwargs)
130
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
131
+ self.original_inv_freq = self.inv_freq
132
+
133
+ def _dynamic_frequency_update(self, position_ids, device):
134
+ """
135
+ dynamic RoPE layers should recompute `inv_freq` in the following situations:
136
+ 1 - growing beyond the cached sequence length (allow scaling)
137
+ 2 - the current sequence length is in the original scale (avoid losing precision with small sequences)
138
+ """
139
+ seq_len = torch.max(position_ids) + 1
140
+ if seq_len > self.max_seq_len_cached: # growth
141
+ inv_freq, self.attention_scaling = self.rope_init_fn(
142
+ self.config, device, seq_len=seq_len, **self.rope_kwargs
143
+ )
144
+ self.register_buffer("inv_freq", inv_freq, persistent=False) # TODO joao: may break with compilation
145
+ self.max_seq_len_cached = seq_len
146
+
147
+ if seq_len < self.original_max_seq_len and self.max_seq_len_cached > self.original_max_seq_len: # reset
148
+ self.register_buffer("inv_freq", self.original_inv_freq, persistent=False)
149
+ self.max_seq_len_cached = self.original_max_seq_len
150
+
151
+ @torch.no_grad()
152
+ def forward(self, x, position_ids):
153
+ if "dynamic" in self.rope_type:
154
+ self._dynamic_frequency_update(position_ids, device=x.device)
155
+
156
+ # Core RoPE block
157
+ inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1)
158
+ position_ids_expanded = position_ids[:, None, :].float()
159
+ # Force float32 (see https://github.com/huggingface/transformers/pull/29285)
160
+ device_type = x.device.type
161
+ device_type = device_type if isinstance(device_type, str) and device_type != "mps" else "cpu"
162
+ with torch.autocast(device_type=device_type, enabled=False):
163
+ freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
164
+ emb = torch.cat((freqs, freqs), dim=-1)
165
+ cos = emb.cos()
166
+ sin = emb.sin()
167
+
168
+ # Advanced RoPE types (e.g. yarn) apply a post-processing scaling factor, equivalent to scaling attention
169
+ cos = cos * self.attention_scaling
170
+ sin = sin * self.attention_scaling
171
+
172
+ return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
173
+
174
+
175
+ # Copied from transformers.models.llama.modeling_llama.rotate_half
176
+ def rotate_half(x):
177
+ """Rotates half the hidden dims of the input."""
178
+ x1 = x[..., : x.shape[-1] // 2]
179
+ x2 = x[..., x.shape[-1] // 2 :]
180
+ return torch.cat((-x2, x1), dim=-1)
181
+
182
+
183
+ # Copied from transformers.models.llama.modeling_llama.apply_rotary_pos_emb
184
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
185
+ """Applies Rotary Position Embedding to the query and key tensors.
186
+
187
+ Args:
188
+ q (`torch.Tensor`): The query tensor.
189
+ k (`torch.Tensor`): The key tensor.
190
+ cos (`torch.Tensor`): The cosine part of the rotary embedding.
191
+ sin (`torch.Tensor`): The sine part of the rotary embedding.
192
+ position_ids (`torch.Tensor`, *optional*):
193
+ Deprecated and unused.
194
+ unsqueeze_dim (`int`, *optional*, defaults to 1):
195
+ The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
196
+ sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
197
+ that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
198
+ k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
199
+ cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
200
+ the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
201
+ Returns:
202
+ `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
203
+ """
204
+ cos = cos.unsqueeze(unsqueeze_dim)
205
+ sin = sin.unsqueeze(unsqueeze_dim)
206
+ q_embed = (q * cos) + (rotate_half(q) * sin)
207
+ k_embed = (k * cos) + (rotate_half(k) * sin)
208
+ return q_embed, k_embed
209
+
210
+
211
+ # Copied from transformers.models.mistral.modeling_mistral.MistralMLP with Mistral->Qwen2
212
+ class Qwen2MLP(nn.Module):
213
+ def __init__(self, config):
214
+ super().__init__()
215
+ self.hidden_size = config.hidden_size
216
+ self.intermediate_size = config.intermediate_size
217
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
218
+ self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
219
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
220
+ self.act_fn = ACT2FN[config.hidden_act]
221
+
222
+ def forward(self, hidden_state):
223
+ return self.down_proj(self.act_fn(self.gate_proj(hidden_state)) * self.up_proj(hidden_state))
224
+
225
+
226
+ # Copied from transformers.models.llama.modeling_llama.repeat_kv
227
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
228
+ """
229
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
230
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
231
+ """
232
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
233
+ if n_rep == 1:
234
+ return hidden_states
235
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
236
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
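+ # Shape example (illustrative values): for hidden_states of shape (2, 4, 10, 64) with
+ # n_rep = 3, the expand above produces (2, 4, 3, 10, 64) and this reshape returns
+ # (2, 12, 10, 64), i.e. each key/value head is repeated n_rep times along the head axis.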
237
+
238
+
239
+ class Qwen2Attention(nn.Module):
240
+ """
241
+ Multi-headed attention from 'Attention Is All You Need' paper. Modified to use sliding window attention: Longformer
242
+ and "Generating Long Sequences with Sparse Transformers".
243
+ """
244
+
245
+ def __init__(self, config: Qwen2Config, layer_idx: Optional[int] = None):
246
+ super().__init__()
247
+ self.config = config
248
+ self.layer_idx = layer_idx
249
+ if layer_idx is None:
250
+ logger.warning_once(
251
+ f"Instantiating {self.__class__.__name__} without passing `layer_idx` is not recommended and will "
252
+ "lead to errors during the forward call, if caching is used. Please make sure to provide a `layer_idx` "
253
+ "when creating this class."
254
+ )
255
+
256
+ self.hidden_size = config.hidden_size
257
+ self.num_heads = config.num_attention_heads
258
+ self.head_dim = self.hidden_size // self.num_heads
259
+ self.num_key_value_heads = config.num_key_value_heads
260
+ self.num_key_value_groups = self.num_heads // self.num_key_value_heads
261
+ self.max_position_embeddings = config.max_position_embeddings
262
+ self.rope_theta = config.rope_theta
263
+ self.is_causal = True
264
+ self.attention_dropout = config.attention_dropout
265
+
266
+ if (self.head_dim * self.num_heads) != self.hidden_size:
267
+ raise ValueError(
268
+ f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
269
+ f" and `num_heads`: {self.num_heads})."
270
+ )
271
+ self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=True)
272
+ self.k_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=True)
273
+ self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=True)
274
+ self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False)
275
+
276
+ self.rotary_emb = Qwen2RotaryEmbedding(config=self.config)
277
+
278
+ def forward(
279
+ self,
280
+ hidden_states: torch.Tensor,
281
+ attention_mask: Optional[torch.Tensor] = None,
282
+ position_ids: Optional[torch.LongTensor] = None,
283
+ past_key_value: Optional[Cache] = None,
284
+ output_attentions: bool = False,
285
+ use_cache: bool = False,
286
+ cache_position: Optional[torch.LongTensor] = None,
287
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # will become mandatory in v4.46
288
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
289
+ bsz, q_len, _ = hidden_states.size()
290
+
291
+ query_states = self.q_proj(hidden_states)
292
+ key_states = self.k_proj(hidden_states)
293
+ value_states = self.v_proj(hidden_states)
294
+
295
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
296
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
297
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
298
+
299
+ if position_embeddings is None:
300
+ logger.warning_once(
301
+ "The attention layers in this model are transitioning from computing the RoPE embeddings internally "
302
+ "through `position_ids` (2D tensor with the indexes of the tokens), to using externally computed "
303
+ "`position_embeddings` (Tuple of tensors, containing cos and sin). In v4.46 `position_ids` will be "
304
+ "removed and `position_embeddings` will be mandatory."
305
+ )
306
+ cos, sin = self.rotary_emb(value_states, position_ids)
307
+ else:
308
+ cos, sin = position_embeddings
309
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
310
+
311
+ if past_key_value is not None:
312
+ cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position} # Specific to RoPE models
313
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
314
+
315
+ # repeat k/v heads if n_kv_heads < n_heads
316
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
317
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
318
+
319
+ attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
320
+ if attention_mask is not None: # no matter the length, we just slice it
321
+ causal_mask = attention_mask[:, :, :, : key_states.shape[-2]]
322
+ attn_weights = attn_weights + causal_mask
323
+
324
+ # upcast attention to fp32
325
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
326
+ attn_weights = nn.functional.dropout(attn_weights, p=self.attention_dropout, training=self.training)
327
+ attn_output = torch.matmul(attn_weights, value_states)
328
+
329
+ if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
330
+ raise ValueError(
331
+ f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
332
+ f" {attn_output.size()}"
333
+ )
334
+
335
+ attn_output = attn_output.transpose(1, 2).contiguous()
336
+ attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
337
+
338
+ attn_output = self.o_proj(attn_output)
339
+
340
+ if not output_attentions:
341
+ attn_weights = None
342
+
343
+ return attn_output, attn_weights, past_key_value
344
+
345
+
346
+ class Qwen2FlashAttention2(Qwen2Attention):
347
+ """
348
+ Qwen2 flash attention module, following Qwen2 attention module. This module inherits from `Qwen2Attention`
349
+ as the weights of the module stay untouched. The only required change would be on the forward pass
350
+ where it needs to correctly call the public API of flash attention and deal with padding tokens
351
+ in case the input contains any of them. Additionally, for sliding window attention, we apply SWA only to the bottom
352
+ config.max_window_layers layers.
353
+ """
354
+
355
+ # Copied from transformers.models.llama.modeling_llama.LlamaFlashAttention2.__init__
356
+ def __init__(self, *args, **kwargs):
357
+ super().__init__(*args, **kwargs)
358
+
359
+ # TODO: Should be removed once Flash Attention for RoCm is bumped to 2.1.
360
+ # flash_attn<2.1 generates top-left aligned causal mask, while what is needed here is bottom-right alignement, that was made default for flash_attn>=2.1. This attribute is used to handle this difference. Reference: https://github.com/Dao-AILab/flash-attention/releases/tag/v2.1.0.
361
+ # Beware that with flash_attn<2.1, using q_seqlen != k_seqlen (except for the case q_seqlen == 1) produces a wrong mask (top-left).
362
+ self._flash_attn_uses_top_left_mask = not is_flash_attn_greater_or_equal_2_10()
363
+
364
+ def forward(
365
+ self,
366
+ hidden_states: torch.Tensor,
367
+ attention_mask: Optional[torch.Tensor] = None,
368
+ position_ids: Optional[torch.LongTensor] = None,
369
+ past_key_value: Optional[Cache] = None,
370
+ output_attentions: bool = False,
371
+ use_cache: bool = False,
372
+ cache_position: Optional[torch.LongTensor] = None,
373
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # will become mandatory in v4.46
374
+ ):
375
+ bsz, q_len, _ = hidden_states.size()
376
+
377
+ query_states = self.q_proj(hidden_states)
378
+ key_states = self.k_proj(hidden_states)
379
+ value_states = self.v_proj(hidden_states)
380
+
381
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
382
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
383
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
384
+
385
+ if position_embeddings is None:
386
+ logger.warning_once(
387
+ "The attention layers in this model are transitioning from computing the RoPE embeddings internally "
388
+ "through `position_ids` (2D tensor with the indexes of the tokens), to using externally computed "
389
+ "`position_embeddings` (Tuple of tensors, containing cos and sin). In v4.46 `position_ids` will be "
390
+ "removed and `position_embeddings` will be mandatory."
391
+ )
392
+ cos, sin = self.rotary_emb(value_states, position_ids)
393
+ else:
394
+ cos, sin = position_embeddings
395
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
396
+
397
+ if past_key_value is not None:
398
+ cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position} # Specific to RoPE models
399
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
400
+
401
+ # repeat k/v heads if n_kv_heads < n_heads
402
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
403
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
404
+ dropout_rate = 0.0 if not self.training else self.attention_dropout
405
+
406
+ # In PEFT, usually we cast the layer norms in float32 for training stability reasons
407
+ # therefore the input hidden states get silently cast to float32. Hence, we need to
408
+ # cast them back to float16 just to be sure everything works as expected.
409
+ input_dtype = query_states.dtype
410
+ if input_dtype == torch.float32:
411
+ if torch.is_autocast_enabled():
412
+ target_dtype = torch.get_autocast_gpu_dtype()
413
+ # Handle the case where the model is quantized
414
+ elif hasattr(self.config, "_pre_quantization_dtype"):
415
+ target_dtype = self.config._pre_quantization_dtype
416
+ else:
417
+ target_dtype = self.q_proj.weight.dtype
418
+
419
+ logger.warning_once(
420
+ f"The input hidden states seem to be silently cast to float32; this might be related to"
421
+ f" the fact you have upcasted embedding or layer norm layers in float32. We will cast back the input in"
422
+ f" {target_dtype}."
423
+ )
424
+
425
+ query_states = query_states.to(target_dtype)
426
+ key_states = key_states.to(target_dtype)
427
+ value_states = value_states.to(target_dtype)
428
+
429
+ # Reshape to the expected shape for Flash Attention
430
+ query_states = query_states.transpose(1, 2)
431
+ key_states = key_states.transpose(1, 2)
432
+ value_states = value_states.transpose(1, 2)
433
+
434
+ if (
435
+ self.config.use_sliding_window
436
+ and getattr(self.config, "sliding_window", None) is not None
437
+ and self.layer_idx >= self.config.max_window_layers
438
+ ):
439
+ sliding_window = self.config.sliding_window
440
+ else:
441
+ sliding_window = None
442
+
443
+ attn_output = _flash_attention_forward(
444
+ query_states,
445
+ key_states,
446
+ value_states,
447
+ attention_mask,
448
+ q_len,
449
+ position_ids=position_ids,
450
+ dropout=dropout_rate,
451
+ sliding_window=sliding_window,
452
+ is_causal=self.is_causal,
453
+ use_top_left_mask=self._flash_attn_uses_top_left_mask,
454
+ )
455
+
456
+ attn_output = attn_output.reshape(bsz, q_len, self.hidden_size).contiguous()
457
+ attn_output = self.o_proj(attn_output)
458
+
459
+ if not output_attentions:
460
+ attn_weights = None
461
+
462
+ return attn_output, attn_weights, past_key_value
463
+
464
+
465
+ class Qwen2SdpaAttention(Qwen2Attention):
466
+ """
467
+ Qwen2 attention module using torch.nn.functional.scaled_dot_product_attention. This module inherits from
468
+ `Qwen2Attention` as the weights of the module stay untouched. The only changes are on the forward pass to adapt to
469
+ SDPA API.
470
+ """
471
+
472
+ # Adapted from Qwen2Attention.forward
473
+ def forward(
474
+ self,
475
+ hidden_states: torch.Tensor,
476
+ attention_mask: Optional[torch.Tensor] = None,
477
+ position_ids: Optional[torch.LongTensor] = None,
478
+ past_key_value: Optional[Cache] = None,
479
+ output_attentions: bool = False,
480
+ use_cache: bool = False,
481
+ cache_position: Optional[torch.LongTensor] = None,
482
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # will become mandatory in v4.46
483
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
484
+ if output_attentions:
485
+ # TODO: Improve this warning with e.g. `model.config.attn_implementation = "manual"` once this is implemented.
486
+ logger.warning_once(
487
+ "Qwen2NomicVisionModel is using Qwen2SdpaAttention, but `torch.nn.functional.scaled_dot_product_attention` does not support `output_attentions=True`. Falling back to the manual attention implementation, "
488
+ 'but specifying the manual implementation will be required from Transformers version v5.0.0 onwards. This warning can be removed using the argument `attn_implementation="eager"` when loading the model.'
489
+ )
490
+ return super().forward(
491
+ hidden_states=hidden_states,
492
+ attention_mask=attention_mask,
493
+ position_ids=position_ids,
494
+ past_key_value=past_key_value,
495
+ output_attentions=output_attentions,
496
+ use_cache=use_cache,
497
+ )
498
+
499
+ bsz, q_len, _ = hidden_states.size()
500
+
501
+ query_states = self.q_proj(hidden_states)
502
+ key_states = self.k_proj(hidden_states)
503
+ value_states = self.v_proj(hidden_states)
504
+
505
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
506
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
507
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
508
+
509
+ if position_embeddings is None:
510
+ logger.warning_once(
511
+ "The attention layers in this model are transitioning from computing the RoPE embeddings internally "
512
+ "through `position_ids` (2D tensor with the indexes of the tokens), to using externally computed "
513
+ "`position_embeddings` (Tuple of tensors, containing cos and sin). In v4.46 `position_ids` will be "
514
+ "removed and `position_embeddings` will be mandatory."
515
+ )
516
+ cos, sin = self.rotary_emb(value_states, position_ids)
517
+ else:
518
+ cos, sin = position_embeddings
519
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
520
+
521
+ if past_key_value is not None:
522
+ cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position} # Specific to RoPE models
523
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
524
+
525
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
526
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
527
+
528
+ causal_mask = attention_mask
529
+ if attention_mask is not None: # no matter the length, we just slice it
530
+ causal_mask = attention_mask[:, :, :, : key_states.shape[-2]]
531
+
532
+ # SDPA with memory-efficient backend is currently (torch==2.1.2) bugged with non-contiguous inputs with custom attn_mask,
533
+ # Reference: https://github.com/pytorch/pytorch/issues/112577.
534
+ if query_states.device.type == "cuda" and attention_mask is not None:
535
+ query_states = query_states.contiguous()
536
+ key_states = key_states.contiguous()
537
+ value_states = value_states.contiguous()
538
+
539
+ # We dispatch to SDPA's Flash Attention or Efficient kernels via this `is_causal` if statement instead of an inline conditional assignment
540
+ # in SDPA to support both torch.compile's dynamic shapes and full graph options. An inline conditional prevents dynamic shapes from compiling.
541
+ # The q_len > 1 is necessary to match with AttentionMaskConverter.to_causal_4d that does not create a causal mask in case q_len == 1.
542
+ is_causal = True if causal_mask is None and q_len > 1 else False
543
+
544
+ attn_output = torch.nn.functional.scaled_dot_product_attention(
545
+ query_states,
546
+ key_states,
547
+ value_states,
548
+ attn_mask=causal_mask,
549
+ dropout_p=self.attention_dropout if self.training else 0.0,
550
+ is_causal=is_causal,
551
+ )
552
+
553
+ attn_output = attn_output.transpose(1, 2).contiguous()
554
+ attn_output = attn_output.view(bsz, q_len, self.hidden_size)
555
+
556
+ attn_output = self.o_proj(attn_output)
557
+
558
+ return attn_output, None, past_key_value
559
+
560
+
561
+ QWEN2_ATTENTION_CLASSES = {
562
+ "eager": Qwen2Attention,
563
+ "flash_attention_2": Qwen2FlashAttention2,
564
+ "sdpa": Qwen2SdpaAttention,
565
+ }
566
+
567
+
568
+ class Qwen2DecoderLayer(nn.Module):
569
+ def __init__(self, config: Qwen2Config, layer_idx: int):
570
+ super().__init__()
571
+ self.hidden_size = config.hidden_size
572
+
573
+ if config.sliding_window and config._attn_implementation != "flash_attention_2":
574
+ logger.warning_once(
575
+ f"Sliding Window Attention is enabled but not implemented for `{config._attn_implementation}`; "
576
+ "unexpected results may be encountered."
577
+ )
578
+ self.self_attn = QWEN2_ATTENTION_CLASSES[config._attn_implementation](config, layer_idx)
579
+
580
+ self.mlp = Qwen2MLP(config)
581
+ self.input_layernorm = Qwen2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
582
+ self.post_attention_layernorm = Qwen2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
583
+
584
+ def forward(
585
+ self,
586
+ hidden_states: torch.Tensor,
587
+ attention_mask: Optional[torch.Tensor] = None,
588
+ position_ids: Optional[torch.LongTensor] = None,
589
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
590
+ output_attentions: Optional[bool] = False,
591
+ use_cache: Optional[bool] = False,
592
+ cache_position: Optional[torch.LongTensor] = None,
593
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # will become mandatory in v4.46
594
+ **kwargs,
595
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
596
+ """
597
+ Args:
598
+ hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
599
+ attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
600
+ `(batch, sequence_length)` where padding elements are indicated by 0.
601
+ output_attentions (`bool`, *optional*):
602
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under
603
+ returned tensors for more detail.
604
+ use_cache (`bool`, *optional*):
605
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
606
+ (see `past_key_values`).
607
+ past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
608
+ cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*):
609
+ Indices depicting the position of the input sequence tokens in the sequence.
610
+ position_embeddings (`Tuple[torch.FloatTensor, torch.FloatTensor]`, *optional*):
611
+ Tuple containing the cosine and sine positional embeddings of shape `(batch_size, seq_len, head_dim)`,
612
+ with `head_dim` being the embedding dimension of each attention head.
613
+ kwargs (`dict`, *optional*):
614
+ Arbitrary kwargs to be ignored, used for FSDP and other methods that inject code
615
+ into the model
616
+ """
617
+
618
+ residual = hidden_states
619
+
620
+ hidden_states = self.input_layernorm(hidden_states)
621
+
622
+ # Self Attention
623
+ hidden_states, self_attn_weights, present_key_value = self.self_attn(
624
+ hidden_states=hidden_states,
625
+ attention_mask=attention_mask,
626
+ position_ids=position_ids,
627
+ past_key_value=past_key_value,
628
+ output_attentions=output_attentions,
629
+ use_cache=use_cache,
630
+ cache_position=cache_position,
631
+ position_embeddings=position_embeddings,
632
+ )
633
+ hidden_states = residual + hidden_states
634
+
635
+ # Fully Connected
636
+ residual = hidden_states
637
+ hidden_states = self.post_attention_layernorm(hidden_states)
638
+ hidden_states = self.mlp(hidden_states)
639
+ hidden_states = residual + hidden_states
640
+
641
+ outputs = (hidden_states,)
642
+
643
+ if output_attentions:
644
+ outputs += (self_attn_weights,)
645
+
646
+ if use_cache:
647
+ outputs += (present_key_value,)
648
+
649
+ return outputs
650
+
651
+
652
+ QWEN2_START_DOCSTRING = r"""
653
+ This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
654
+ library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
655
+ etc.)
656
+
657
+ This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
658
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
659
+ and behavior.
660
+
661
+ Parameters:
662
+ config ([`Qwen2Config`]):
663
+ Model configuration class with all the parameters of the model. Initializing with a config file does not
664
+ load the weights associated with the model, only the configuration. Check out the
665
+ [`~PreTrainedModel.from_pretrained`] method to load the model weights.
666
+ """
667
+
668
+
669
+ @add_start_docstrings(
670
+ "The bare Qwen2 Model outputting raw hidden-states without any specific head on top.",
671
+ QWEN2_START_DOCSTRING,
672
+ )
673
+ class Qwen2PreTrainedModel(PreTrainedModel):
674
+ config_class = Qwen2Config
675
+ base_model_prefix = "model"
676
+ supports_gradient_checkpointing = True
677
+ _no_split_modules = ["Qwen2DecoderLayer"]
678
+ _skip_keys_device_placement = "past_key_values"
679
+ _supports_flash_attn_2 = True
680
+ _supports_sdpa = True
681
+ _supports_cache_class = True
682
+ _supports_quantized_cache = True
683
+ _supports_static_cache = True
684
+
685
+ def _init_weights(self, module):
686
+ std = self.config.initializer_range
687
+ if isinstance(module, nn.Linear):
688
+ module.weight.data.normal_(mean=0.0, std=std)
689
+ if module.bias is not None:
690
+ module.bias.data.zero_()
691
+ elif isinstance(module, nn.Embedding):
692
+ module.weight.data.normal_(mean=0.0, std=std)
693
+ if module.padding_idx is not None:
694
+ module.weight.data[module.padding_idx].zero_()
695
+
696
+
697
+ QWEN2_INPUTS_DOCSTRING = r"""
698
+ Args:
699
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
700
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
701
+ it.
702
+
703
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
704
+ [`PreTrainedTokenizer.__call__`] for details.
705
+
706
+ [What are input IDs?](../glossary#input-ids)
707
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
708
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
709
+
710
+ - 1 for tokens that are **not masked**,
711
+ - 0 for tokens that are **masked**.
712
+
713
+ [What are attention masks?](../glossary#attention-mask)
714
+
715
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
716
+ [`PreTrainedTokenizer.__call__`] for details.
717
+
718
+ If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
719
+ `past_key_values`).
720
+
721
+ If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
722
+ and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
723
+ information on the default strategy.
724
+
725
+ - 1 indicates the head is **not masked**,
726
+ - 0 indicates the head is **masked**.
727
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
728
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
729
+ config.n_positions - 1]`.
730
+
731
+ [What are position IDs?](../glossary#position-ids)
732
+ past_key_values (`Cache` or `tuple(tuple(torch.FloatTensor))`, *optional*):
733
+ Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
734
+ blocks) that can be used to speed up sequential decoding. This typically consists in the `past_key_values`
735
+ returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`.
736
+
737
+ Two formats are allowed:
738
+ - a [`~cache_utils.Cache`] instance, see our
739
+ [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache);
740
+ - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of
741
+ shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`). This is also known as the legacy
742
+ cache format.
743
+
744
+ The model will output the same cache format that is fed as input. If no `past_key_values` are passed, the
745
+ legacy cache format will be returned.
746
+
747
+ If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't
748
+ have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `input_ids`
749
+ of shape `(batch_size, sequence_length)`.
750
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
751
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
752
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
753
+ model's internal embedding lookup matrix.
754
+ use_cache (`bool`, *optional*):
755
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
756
+ `past_key_values`).
757
+ output_attentions (`bool`, *optional*):
758
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
759
+ tensors for more detail.
760
+ output_hidden_states (`bool`, *optional*):
761
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
762
+ more detail.
763
+ return_dict (`bool`, *optional*):
764
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
765
+ cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*):
766
+ Indices depicting the position of the input sequence tokens in the sequence. Contrarily to `position_ids`,
767
+ this tensor is not affected by padding. It is used to update the cache in the correct position and to infer
768
+ the complete sequence length.
769
+ """
770
+
771
+
772
+ @add_start_docstrings(
773
+ "The bare Qwen2 Model outputting raw hidden-states without any specific head on top.",
774
+ QWEN2_START_DOCSTRING,
775
+ )
776
+ class Qwen2NomicVisionModel(Qwen2PreTrainedModel):
777
+ """
778
+ Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`Qwen2DecoderLayer`]
779
+
780
+ Args:
781
+ config: Qwen2Config
782
+ """
783
+
784
+ def __init__(self, config: Qwen2Config):
785
+ super().__init__(config)
786
+ self.padding_idx = config.pad_token_id
787
+ self.vocab_size = config.vocab_size
788
+
789
+ self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
790
+ self.layers = nn.ModuleList(
791
+ [Qwen2DecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
792
+ )
793
+ self._attn_implementation = config._attn_implementation
794
+ self.norm = Qwen2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
795
+ self.rotary_emb = Qwen2RotaryEmbedding(config=config)
796
+
797
+ # Additional Vision Model
798
+ self.processor = AutoImageProcessor.from_pretrained("nomic-ai/nomic-embed-vision-v1.5")
799
+ self.vision_model = AutoModel.from_pretrained("nomic-ai/nomic-embed-vision-v1.5", trust_remote_code=True)
800
+ self.mix_linear = nn.Linear(list(self.vision_model.parameters())[-1].shape[0], config.hidden_size)
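+ # `mix_linear` projects the Nomic vision embedding into the Qwen2 hidden size. Its
+ # in_features are inferred from the shape of the vision model's last parameter (the
+ # final norm weight, which matches the vision hidden size, 768 for
+ # nomic-embed-vision-v1.5), so the projected image embedding can be prepended to the
+ # text hidden states in `forward`.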
801
+
802
+ self.gradient_checkpointing = False
803
+ # Initialize weights and apply final processing
804
+ self.post_init()
805
+
806
+ def get_input_embeddings(self):
807
+ return self.embed_tokens
808
+
809
+ def set_input_embeddings(self, value):
810
+ self.embed_tokens = value
811
+
812
+ @add_start_docstrings_to_model_forward(QWEN2_INPUTS_DOCSTRING)
813
+ def forward(
814
+ self,
815
+ input_ids: torch.LongTensor = None,
816
+ attention_mask: Optional[torch.Tensor] = None,
817
+ position_ids: Optional[torch.LongTensor] = None,
818
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
819
+ inputs_embeds: Optional[torch.FloatTensor] = None,
820
+ use_cache: Optional[bool] = None,
821
+ output_attentions: Optional[bool] = None,
822
+ output_hidden_states: Optional[bool] = None,
823
+ return_dict: Optional[bool] = None,
824
+ cache_position: Optional[torch.LongTensor] = None,
825
+ image=None,
826
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
827
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
828
+ output_hidden_states = (
829
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
830
+ )
831
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
832
+
833
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
834
+
835
+ if (input_ids is None) ^ (inputs_embeds is not None):
836
+ raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
837
+
838
+ if self.gradient_checkpointing and self.training:
839
+ if use_cache:
840
+ logger.warning_once(
841
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
842
+ )
843
+ use_cache = False
844
+
845
+ # kept for BC (non `Cache` `past_key_values` inputs)
846
+ return_legacy_cache = False
847
+ if use_cache and not isinstance(past_key_values, Cache):
848
+ return_legacy_cache = True
849
+ if past_key_values is None:
850
+ past_key_values = DynamicCache()
851
+ else:
852
+ past_key_values = DynamicCache.from_legacy_cache(past_key_values)
853
+ logger.warning_once(
854
+ "We detected that you are passing `past_key_values` as a tuple of tuples. This is deprecated and "
855
+ "will be removed in v4.47. Please convert your cache or use an appropriate `Cache` class "
856
+ "(https://huggingface.co/docs/transformers/kv_cache#legacy-cache-format)"
857
+ )
858
+
859
+ if inputs_embeds is None:
860
+ inputs_embeds = self.embed_tokens(input_ids)
861
+
862
+ if cache_position is None:
863
+ past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
864
+ cache_position = torch.arange(
865
+ past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
866
+ )
867
+ if position_ids is None:
868
+ position_ids = cache_position.unsqueeze(0)
869
+
870
+ causal_mask = self._update_causal_mask(
871
+ attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions
872
+ )
873
+
874
+ hidden_states = inputs_embeds
875
+
876
+ # create position embeddings to be shared across the decoder layers
877
+ position_embeddings = self.rotary_emb(hidden_states, position_ids)
878
+
879
+ # decoder layers
880
+ all_hidden_states = () if output_hidden_states else None
881
+ all_self_attns = () if output_attentions else None
882
+ next_decoder_cache = None
883
+
884
+ for decoder_layer in self.layers:
885
+ if output_hidden_states:
886
+ all_hidden_states += (hidden_states,)
887
+
888
+ if self.gradient_checkpointing and self.training:
889
+ layer_outputs = self._gradient_checkpointing_func(
890
+ decoder_layer.__call__,
891
+ hidden_states,
892
+ causal_mask,
893
+ position_ids,
894
+ past_key_values,
895
+ output_attentions,
896
+ use_cache,
897
+ cache_position,
898
+ position_embeddings,
899
+ )
900
+ else:
901
+ layer_outputs = decoder_layer(
902
+ hidden_states,
903
+ attention_mask=causal_mask,
904
+ position_ids=position_ids,
905
+ past_key_value=past_key_values,
906
+ output_attentions=output_attentions,
907
+ use_cache=use_cache,
908
+ cache_position=cache_position,
909
+ position_embeddings=position_embeddings,
910
+ )
911
+
912
+ hidden_states = layer_outputs[0]
913
+
914
+ if use_cache:
915
+ next_decoder_cache = layer_outputs[2 if output_attentions else 1]
916
+
917
+ if output_attentions:
918
+ all_self_attns += (layer_outputs[1],)
919
+
920
+ hidden_states = self.norm(hidden_states)
921
+
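+ # Image path (custom addition in this file): when an `image` is supplied, the Nomic
+ # vision encoder's CLS embedding is L2-normalized, projected with `mix_linear`, and
+ # prepended as a single extra position in front of the text hidden states, so the
+ # returned sequence is one token longer than the text-only input.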
922
+ if image is not None:
923
+ image_tokens = self.processor(image, return_tensors='pt').to(hidden_states.device)
924
+ img_emb = self.vision_model(**image_tokens).last_hidden_state
925
+ img_embeddings = nn.functional.normalize(img_emb[:, 0], p=2, dim=1)
926
+ mix = self.mix_linear(img_embeddings).unsqueeze(1)
927
+ hidden_states = torch.concat([mix, hidden_states], dim=1)
928
+
929
+ # add hidden states from the last decoder layer
930
+ if output_hidden_states:
931
+ all_hidden_states += (hidden_states,)
932
+
933
+ next_cache = next_decoder_cache if use_cache else None
934
+ if return_legacy_cache:
935
+ next_cache = next_cache.to_legacy_cache()
936
+
937
+ if not return_dict:
938
+ return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
939
+ return BaseModelOutputWithPast(
940
+ last_hidden_state=hidden_states,
941
+ past_key_values=next_cache,
942
+ hidden_states=all_hidden_states,
943
+ attentions=all_self_attns,
944
+ )
945
+
946
+ # Copied from transformers.models.phi3.modeling_phi3.Phi3Model._update_causal_mask
947
+ def _update_causal_mask(
948
+ self,
949
+ attention_mask: torch.Tensor,
950
+ input_tensor: torch.Tensor,
951
+ cache_position: torch.Tensor,
952
+ past_key_values: Cache,
953
+ output_attentions: bool,
954
+ ):
955
+ if self.config._attn_implementation == "flash_attention_2":
956
+ if attention_mask is not None and 0.0 in attention_mask:
957
+ return attention_mask
958
+ return None
959
+
960
+ # For SDPA, when possible, we will rely on its `is_causal` argument instead of its `attn_mask` argument, in
961
+ # order to dispatch on Flash Attention 2. This feature is not compatible with static cache, as SDPA will fail
962
+ # to infer the attention mask.
963
+ past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
964
+ using_static_cache = isinstance(past_key_values, StaticCache)
965
+ using_sliding_window_cache = isinstance(past_key_values, SlidingWindowCache)
966
+
967
+ # When output attentions is True, sdpa implementation's forward method calls the eager implementation's forward
968
+ if (
969
+ self.config._attn_implementation == "sdpa"
970
+ and not (using_static_cache or using_sliding_window_cache)
971
+ and not output_attentions
972
+ ):
973
+ if AttentionMaskConverter._ignore_causal_mask_sdpa(
974
+ attention_mask,
975
+ inputs_embeds=input_tensor,
976
+ past_key_values_length=past_seen_tokens,
977
+ sliding_window=self.config.sliding_window,
978
+ is_training=self.training,
979
+ ):
980
+ return None
981
+
982
+ dtype, device = input_tensor.dtype, input_tensor.device
983
+ min_dtype = torch.finfo(dtype).min
984
+ sequence_length = input_tensor.shape[1]
985
+ # SlidingWindowCache or StaticCache
986
+ if using_sliding_window_cache or using_static_cache:
987
+ target_length = past_key_values.get_max_cache_shape()
988
+ # DynamicCache or no cache
989
+ else:
990
+ target_length = (
991
+ attention_mask.shape[-1]
992
+ if isinstance(attention_mask, torch.Tensor)
993
+ else past_seen_tokens + sequence_length + 1
994
+ )
995
+
996
+ # In case the provided `attention` mask is 2D, we generate a causal mask here (4D).
997
+ causal_mask = self._prepare_4d_causal_attention_mask_with_cache_position(
998
+ attention_mask,
999
+ sequence_length=sequence_length,
1000
+ target_length=target_length,
1001
+ dtype=dtype,
1002
+ device=device,
1003
+ cache_position=cache_position,
1004
+ batch_size=input_tensor.shape[0],
1005
+ config=self.config,
1006
+ past_key_values=past_key_values,
1007
+ )
1008
+
1009
+ if (
1010
+ self.config._attn_implementation == "sdpa"
1011
+ and attention_mask is not None
1012
+ and attention_mask.device.type == "cuda"
1013
+ and not output_attentions
1014
+ ):
1015
+ # Attend to all tokens in fully masked rows in the causal_mask, for example the relevant first rows when
1016
+ # using left padding. This is required by F.scaled_dot_product_attention memory-efficient attention path.
1017
+ # Details: https://github.com/pytorch/pytorch/issues/110213
1018
+ causal_mask = AttentionMaskConverter._unmask_unattended(causal_mask, min_dtype)
1019
+
1020
+ return causal_mask
1021
+
1022
+ @staticmethod
1023
+ # Copied from transformers.models.mistral.modeling_mistral.MistralModel._prepare_4d_causal_attention_mask_with_cache_position with Mistral->Qwen2
1024
+ def _prepare_4d_causal_attention_mask_with_cache_position(
1025
+ attention_mask: torch.Tensor,
1026
+ sequence_length: int,
1027
+ target_length: int,
1028
+ dtype: torch.dtype,
1029
+ device: torch.device,
1030
+ cache_position: torch.Tensor,
1031
+ batch_size: int,
1032
+ config: Qwen2Config,
1033
+ past_key_values: Cache,
1034
+ ):
1035
+ """
1036
+ Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape
1037
+ `(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, do nothing.
1038
+
1039
+ Args:
1040
+ attention_mask (`torch.Tensor`):
1041
+ A 2D attention mask of shape `(batch_size, key_value_length)` or a 4D attention mask of shape `(batch_size, 1, query_length, key_value_length)`.
1042
+ sequence_length (`int`):
1043
+ The sequence length being processed.
1044
+ target_length (`int`):
1045
+ The target length: when generating with static cache, the mask should be as long as the static cache, to account for the 0 padding, the part of the cache that is not filled yet.
1046
+ dtype (`torch.dtype`):
1047
+ The dtype to use for the 4D attention mask.
1048
+ device (`torch.device`):
1049
+ The device to place the 4D attention mask on.
1050
+ cache_position (`torch.Tensor`):
1051
+ Indices depicting the position of the input sequence tokens in the sequence.
1052
+ batch_size (`torch.Tensor`):
1053
+ Batch size.
1054
+ config (`Qwen2Config`):
1055
+ The model's configuration class
1056
+ past_key_values (`Cache`):
1057
+ The cache class that is being used currently to generate
1058
+ """
1059
+ if attention_mask is not None and attention_mask.dim() == 4:
1060
+ # In this case we assume that the mask comes already in inverted form and requires no inversion or slicing.
1061
+ causal_mask = attention_mask
1062
+ else:
1063
+ min_dtype = torch.finfo(dtype).min
1064
+ causal_mask = torch.full(
1065
+ (sequence_length, target_length), fill_value=min_dtype, dtype=dtype, device=device
1066
+ )
1067
+ diagonal_attend_mask = torch.arange(target_length, device=device) > cache_position.reshape(-1, 1)
1068
+ if config.sliding_window is not None:
1069
+ # if we have sliding window, we should not attend to tokens beyond sliding window length, so we mask them out also
1070
+ # the check is needed to verify is current checkpoint was trained with sliding window or not
1071
+ if not isinstance(past_key_values, SlidingWindowCache) or sequence_length > target_length:
1072
+ sliding_attend_mask = torch.arange(target_length, device=device) <= (
1073
+ cache_position.reshape(-1, 1) - config.sliding_window
1074
+ )
1075
+ diagonal_attend_mask.bitwise_or_(sliding_attend_mask)
1076
+ causal_mask *= diagonal_attend_mask
1077
+ causal_mask = causal_mask[None, None, :, :].expand(batch_size, 1, -1, -1)
1078
+ if attention_mask is not None:
1079
+ causal_mask = causal_mask.clone() # copy to contiguous memory for in-place edit
1080
+ if attention_mask.shape[-1] > target_length:
1081
+ attention_mask = attention_mask[:, :target_length]
1082
+ mask_length = attention_mask.shape[-1]
1083
+ padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :]
1084
+ padding_mask = padding_mask == 0
1085
+ causal_mask[:, :, :, :mask_length] = causal_mask[:, :, :, :mask_length].masked_fill(
1086
+ padding_mask, min_dtype
1087
+ )
1088
+ return causal_mask
+
+
+ class Qwen2ForCausalLM(Qwen2PreTrainedModel, GenerationMixin):
+     _tied_weights_keys = ["lm_head.weight"]
+
+     def __init__(self, config):
+         super().__init__(config)
+         self.model = Qwen2NomicVisionModel(config)
+         self.vocab_size = config.vocab_size
+         self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.model.embed_tokens
+
+     def set_input_embeddings(self, value):
+         self.model.embed_tokens = value
+
+     def get_output_embeddings(self):
+         return self.lm_head
+
+     def set_output_embeddings(self, new_embeddings):
+         self.lm_head = new_embeddings
+
+     def set_decoder(self, decoder):
+         self.model = decoder
+
+     def get_decoder(self):
+         return self.model
+
+     @add_start_docstrings_to_model_forward(QWEN2_INPUTS_DOCSTRING)
+     @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
+     def forward(
+         self,
+         input_ids: torch.LongTensor = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[List[torch.FloatTensor]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         labels: Optional[torch.LongTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         num_logits_to_keep: int = 0,
+         **loss_kwargs,
+     ) -> Union[Tuple, CausalLMOutputWithPast]:
+         r"""
+         Args:
+             labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+                 Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
+                 config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
+                 (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
+
+             num_logits_to_keep (`int`, *optional*):
+                 Calculate logits for the last `num_logits_to_keep` tokens. If `0`, calculate logits for all
+                 `input_ids` (special case). Only last token logits are needed for generation, and calculating them only for that
+                 token can save memory, which becomes pretty significant for long sequences or large vocabulary size.
+
+         Returns:
+
+         Example:
+
+         ```python
+         >>> from transformers import AutoTokenizer, Qwen2ForCausalLM
+
+         >>> model = Qwen2ForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
+         >>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
+
+         >>> prompt = "Hey, are you conscious? Can you talk to me?"
+         >>> inputs = tokenizer(prompt, return_tensors="pt")
+
+         >>> # Generate
+         >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
+         >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
+         "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
+         ```"""
+
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
+         outputs = self.model(
+             input_ids=input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+             cache_position=cache_position,
+         )
+
+         hidden_states = outputs[0]
+         # Only compute necessary logits, and do not upcast them to float if we are not computing the loss
+         logits = self.lm_head(hidden_states[:, -num_logits_to_keep:, :])
+
+         loss = None
+         if labels is not None:
+             loss = self.loss_function(logits, labels, self.vocab_size, **loss_kwargs)
+
+         if not return_dict:
+             output = (logits,) + outputs[1:]
+             return (loss,) + output if loss is not None else output
+
+         return CausalLMOutputWithPast(
+             loss=loss,
+             logits=logits,
+             past_key_values=outputs.past_key_values,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
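+
+ # Illustrative usage sketch (not from the upstream Qwen2 sources): how `labels` and `num_logits_to_keep`
+ # from the forward signature above are typically used. The checkpoint paths are placeholders, as in the
+ # docstring example.
+ #
+ # >>> model = Qwen2ForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
+ # >>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
+ # >>> batch = tokenizer("Hello world", return_tensors="pt")
+ # >>> # Training-style call: labels are shifted internally; positions set to -100 are ignored.
+ # >>> loss = model(**batch, labels=batch["input_ids"]).loss
+ # >>> # Generation-style call: keep logits only for the last token to save memory.
+ # >>> last_logits = model(**batch, num_logits_to_keep=1).logits  # shape (1, 1, vocab_size)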
+
+
+ @add_start_docstrings(
+     """
+     The Qwen2 Model transformer with a sequence classification head on top (linear layer).
+
+     [`Qwen2ForSequenceClassification`] uses the last token in order to do the classification, as other causal models
+     (e.g. GPT-2) do.
+
+     Since it does classification on the last token, it needs to know the position of the last token. If a
+     `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
+     no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
+     padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
+     each row of the batch).
+     """,
+     QWEN2_START_DOCSTRING,
+ )
+ class Qwen2ForSequenceClassification(Qwen2PreTrainedModel):
+     def __init__(self, config):
+         super().__init__(config)
+         self.num_labels = config.num_labels
+         self.model = Qwen2NomicVisionModel(config)
+         self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.model.embed_tokens
+
+     def set_input_embeddings(self, value):
+         self.model.embed_tokens = value
+
+     @add_start_docstrings_to_model_forward(QWEN2_INPUTS_DOCSTRING)
+     def forward(
+         self,
+         input_ids: torch.LongTensor = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[List[torch.FloatTensor]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         labels: Optional[torch.LongTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
+         r"""
+         labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
+             Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
+             config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
+             `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
+         """
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         transformer_outputs = self.model(
+             input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+         hidden_states = transformer_outputs[0]
+         logits = self.score(hidden_states)
+
+         if input_ids is not None:
+             batch_size = input_ids.shape[0]
+         else:
+             batch_size = inputs_embeds.shape[0]
+
+         if self.config.pad_token_id is None and batch_size != 1:
+             raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
+         if self.config.pad_token_id is None:
+             sequence_lengths = -1
+         else:
+             if input_ids is not None:
+                 # if no pad token found, use modulo instead of reverse indexing for ONNX compatibility
+                 sequence_lengths = torch.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1
+                 sequence_lengths = sequence_lengths % input_ids.shape[-1]
+                 sequence_lengths = sequence_lengths.to(logits.device)
+             else:
+                 sequence_lengths = -1
+
+         pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]
+
+         loss = None
+         if labels is not None:
+             labels = labels.to(logits.device)
+             if self.config.problem_type is None:
+                 if self.num_labels == 1:
+                     self.config.problem_type = "regression"
+                 elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
+                     self.config.problem_type = "single_label_classification"
+                 else:
+                     self.config.problem_type = "multi_label_classification"
+
+             if self.config.problem_type == "regression":
+                 loss_fct = MSELoss()
+                 if self.num_labels == 1:
+                     loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
+                 else:
+                     loss = loss_fct(pooled_logits, labels)
+             elif self.config.problem_type == "single_label_classification":
+                 loss_fct = CrossEntropyLoss()
+                 loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
+             elif self.config.problem_type == "multi_label_classification":
+                 loss_fct = BCEWithLogitsLoss()
+                 loss = loss_fct(pooled_logits, labels)
+
+         if not return_dict:
+             output = (pooled_logits,) + transformer_outputs[1:]
+             return ((loss,) + output) if loss is not None else output
+
+         return SequenceClassifierOutputWithPast(
+             loss=loss,
+             logits=pooled_logits,
+             past_key_values=transformer_outputs.past_key_values,
+             hidden_states=transformer_outputs.hidden_states,
+             attentions=transformer_outputs.attentions,
+         )
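+
+ # Illustrative sketch of the last-token pooling above (not from the upstream Qwen2 sources). With right
+ # padding and a configured `pad_token_id`, the logits of the last non-padding token are pooled per row,
+ # e.g. for input_ids = [[t1, t2, PAD, PAD]] the row at sequence index 1 is used. The checkpoint path is a
+ # placeholder.
+ #
+ # >>> model = Qwen2ForSequenceClassification.from_pretrained(PATH_TO_CONVERTED_WEIGHTS, num_labels=2)
+ # >>> model.config.pad_token_id = tokenizer.pad_token_id
+ # >>> inputs = tokenizer(["a short text", "a slightly longer text"], padding=True, return_tensors="pt")
+ # >>> pooled_logits = model(**inputs).logits  # shape (2, 2), one score vector per sequence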
+
+
+ @add_start_docstrings(
+     """
+     The Qwen2 Model transformer with a token classification head on top (a linear layer on top of the hidden-states
+     output) e.g. for Named-Entity-Recognition (NER) tasks.
+     """,
+     QWEN2_START_DOCSTRING,
+ )
+ # Copied from transformers.models.llama.modeling_llama.LlamaForTokenClassification with Llama->Qwen2, LLAMA->QWEN2
+ class Qwen2ForTokenClassification(Qwen2PreTrainedModel):
+     def __init__(self, config):
+         super().__init__(config)
+         self.num_labels = config.num_labels
+         self.model = Qwen2NomicVisionModel(config)
+         if getattr(config, "classifier_dropout", None) is not None:
+             classifier_dropout = config.classifier_dropout
+         elif getattr(config, "hidden_dropout", None) is not None:
+             classifier_dropout = config.hidden_dropout
+         else:
+             classifier_dropout = 0.1
+         self.dropout = nn.Dropout(classifier_dropout)
+         self.score = nn.Linear(config.hidden_size, config.num_labels)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.model.embed_tokens
+
+     def set_input_embeddings(self, value):
+         self.model.embed_tokens = value
+
+     @add_start_docstrings_to_model_forward(QWEN2_INPUTS_DOCSTRING)
+     @add_code_sample_docstrings(
+         checkpoint=_CHECKPOINT_FOR_DOC,
+         output_type=TokenClassifierOutput,
+         config_class=_CONFIG_FOR_DOC,
+     )
+     def forward(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[List[torch.FloatTensor]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         labels: Optional[torch.LongTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[Tuple, TokenClassifierOutput]:
+         r"""
+         labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+             Labels for computing the token classification loss. Indices should be in `[0, ...,
+             config.num_labels - 1]`.
+         """
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         outputs = self.model(
+             input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+         sequence_output = outputs[0]
+         sequence_output = self.dropout(sequence_output)
+         logits = self.score(sequence_output)
+
+         loss = None
+         if labels is not None:
+             loss = self.loss_function(logits, labels, self.config)
+
+         if not return_dict:
+             output = (logits,) + outputs[2:]
+             return ((loss,) + output) if loss is not None else output
+
+         return TokenClassifierOutput(
+             loss=loss,
+             logits=logits,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
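+
+ # Illustrative sketch (not from the upstream Qwen2 sources): the token classification head returns one
+ # score vector per token, so NER-style labels line up with the input sequence. The checkpoint path is a
+ # placeholder.
+ #
+ # >>> model = Qwen2ForTokenClassification.from_pretrained(PATH_TO_CONVERTED_WEIGHTS, num_labels=5)
+ # >>> inputs = tokenizer("Alice visited Paris", return_tensors="pt")
+ # >>> logits = model(**inputs).logits       # shape (1, seq_len, 5)
+ # >>> predictions = logits.argmax(-1)       # one label id per token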
+
+
+ @add_start_docstrings(
+     """
+     The Qwen2 Model transformer with a span classification head on top for extractive question-answering tasks like
+     SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
+     """,
+     QWEN2_START_DOCSTRING,
+ )
+ # Copied from transformers.models.mistral.modeling_mistral.MistralForQuestionAnswering with Mistral->Qwen2, MISTRAL->QWEN2
+ class Qwen2ForQuestionAnswering(Qwen2PreTrainedModel):
+     base_model_prefix = "model"
+
+     # Copied from transformers.models.bloom.modeling_bloom.BloomForQuestionAnswering.__init__ with Bloom->Qwen2
+     def __init__(self, config):
+         super().__init__(config)
+         self.model = Qwen2NomicVisionModel(config)
+         self.qa_outputs = nn.Linear(config.hidden_size, 2)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.model.embed_tokens
+
+     def set_input_embeddings(self, value):
+         self.model.embed_tokens = value
+
+     @add_start_docstrings_to_model_forward(QWEN2_INPUTS_DOCSTRING)
+     def forward(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         attention_mask: Optional[torch.FloatTensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         start_positions: Optional[torch.LongTensor] = None,
+         end_positions: Optional[torch.LongTensor] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+         **kwargs,
+     ) -> Union[Tuple, QuestionAnsweringModelOutput]:
+         r"""
+         start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
+             Labels for position (index) of the start of the labelled span for computing the token classification loss.
+             Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
+             are not taken into account for computing the loss.
+         end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
+             Labels for position (index) of the end of the labelled span for computing the token classification loss.
+             Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
+             are not taken into account for computing the loss.
+         """
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         outputs = self.model(
+             input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+
+         sequence_output = outputs[0]
+
+         logits = self.qa_outputs(sequence_output)
+         start_logits, end_logits = logits.split(1, dim=-1)
+         start_logits = start_logits.squeeze(-1).contiguous()
+         end_logits = end_logits.squeeze(-1).contiguous()
+
+         loss = None
+         if start_positions is not None and end_positions is not None:
+             loss = self.loss_function(start_logits, end_logits, start_positions, end_positions, **kwargs)
+
+         if not return_dict:
+             output = (start_logits, end_logits) + outputs[2:]
+             return ((loss,) + output) if loss is not None else output
+
+         return QuestionAnsweringModelOutput(
+             loss=loss,
+             start_logits=start_logits,
+             end_logits=end_logits,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
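+
+ # Illustrative sketch (not from the upstream Qwen2 sources): turning the span logits above into an answer.
+ # `start_positions`/`end_positions` are only needed for training; at inference the argmax of the start and
+ # end logits selects the answer span. `question`, `context`, `tokenizer`, and the checkpoint path are
+ # placeholders.
+ #
+ # >>> model = Qwen2ForQuestionAnswering.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
+ # >>> inputs = tokenizer(question, context, return_tensors="pt")
+ # >>> out = model(**inputs)
+ # >>> start = out.start_logits[0].argmax()
+ # >>> end = out.end_logits[0].argmax()
+ # >>> answer = tokenizer.decode(inputs.input_ids[0, start : end + 1])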