Models
The base classes [PreTrainedModel], [TFPreTrainedModel], and
[FlaxPreTrainedModel] implement the common methods for loading/saving a model either from a local
file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS
S3 repository).
[PreTrainedModel] and [TFPreTrainedModel] also implement a few methods which
are common among all the models to:
- resize the input token embeddings when new tokens are added to the vocabulary,
- prune the attention heads of the model (a short sketch of both follows below).
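A minimal sketch of both operations; the checkpoint name and the added tokens below are placeholders:
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Add new tokens to the tokenizer, then resize the model's input embeddings to match.
tokenizer.add_tokens(["<new_token_1>", "<new_token_2>"])
model.resize_token_embeddings(len(tokenizer))

# Prune heads 1 and 2 of layer 0 and head 0 of layer 5 (mapping: layer index -> list of head indices).
model.prune_heads({0: [1, 2], 5: [0]})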
The other methods that are common to each model are defined in [~modeling_utils.ModuleUtilsMixin]
(for the PyTorch models) and [~modeling_tf_utils.TFModelUtilsMixin] (for the TensorFlow models) or
for text generation, [~generation.GenerationMixin] (for the PyTorch models),
[~generation.TFGenerationMixin] (for the TensorFlow models) and
[~generation.FlaxGenerationMixin] (for the Flax/JAX models).
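For example, text generation on a PyTorch model goes through [~generation.GenerationMixin.generate]; a minimal sketch (the checkpoint and prompt below are placeholders):
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
# generate() comes from GenerationMixin and handles greedy search, sampling, beam search, etc.
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))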
PreTrainedModel
[[autodoc]] PreTrainedModel
- push_to_hub
- all
Large model loading
In Transformers 4.20.0, the [~PreTrainedModel.from_pretrained] method has been reworked to accommodate large models using Accelerate. This requires Accelerate >= 0.9.0 and PyTorch >= 1.9.0. Instead of creating the full model, then loading the pretrained weights inside it (which takes twice the size of the model in RAM, one for the randomly initialized model, one for the weights), there is an option to create the model as an empty shell, then only materialize its parameters when the pretrained weights are loaded.
This option can be activated with low_cpu_mem_usage=True. The model is first created on the Meta device (with empty weights) and the state dict is then loaded inside it (shard by shard in the case of a sharded checkpoint). This way the maximum RAM used is the full size of the model only.
from transformers import AutoModelForSeq2SeqLM
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", low_cpu_mem_usage=True)
Moreover, you can directly place the model on different devices if it doesn't fully fit in RAM (only works for inference for now). With device_map="auto", Accelerate will determine where to put each layer to maximize the use of your fastest devices (GPUs) and offload the rest on the CPU, or even the hard drive if you don't have enough GPU RAM (or CPU RAM). Even if the model is split across several devices, it will run as you would normally expect.
When passing a device_map, low_cpu_mem_usage is automatically set to True, so you don't need to specify it:
from transformers import AutoModelForSeq2SeqLM
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
You can inspect how the model was split across devices by looking at its hf_device_map attribute:
py
t0pp.hf_device_map
python out
{'shared': 0,
'decoder.embed_tokens': 0,
'encoder': 0,
'decoder.block.0': 0,
'decoder.block.1': 1,
'decoder.block.2': 1,
'decoder.block.3': 1,
'decoder.block.4': 1,
'decoder.block.5': 1,
'decoder.block.6': 1,
'decoder.block.7': 1,
'decoder.block.8': 1,
'decoder.block.9': 1,
'decoder.block.10': 1,
'decoder.block.11': 1,
'decoder.block.12': 1,
'decoder.block.13': 1,
'decoder.block.14': 1,
'decoder.block.15': 1,
'decoder.block.16': 1,
'decoder.block.17': 1,
'decoder.block.18': 1,
'decoder.block.19': 1,
'decoder.block.20': 1,
'decoder.block.21': 1,
'decoder.block.22': 'cpu',
'decoder.block.23': 'cpu',
'decoder.final_layer_norm': 'cpu',
'decoder.dropout': 'cpu',
'lm_head': 'cpu'}
You can also write your own device map following the same format (a dictionary mapping layer names to devices). It should map all parameters of the model to a given device, but you don't have to detail where all the submodules of one layer go if that layer is entirely on the same device. For instance, the following device map would work properly for T0pp (as long as you have the GPU memory):
python
device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1}
Another way to minimize the memory impact of your model is to instantiate it at a lower precision dtype (like torch.float16) or use direct quantization techniques as described below.
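For instance, assuming the optional bitsandbytes and Accelerate packages are installed, 8-bit quantization can be requested directly at load time with the load_in_8bit flag of [~PreTrainedModel.from_pretrained]:
from transformers import AutoModelForSeq2SeqLM

# Requires `bitsandbytes` and `accelerate`; weights are quantized to 8-bit as they are loaded.
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto", load_in_8bit=True)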
Model Instantiation dtype
Under PyTorch, a model is normally instantiated in torch.float32 format. This can be an issue if one tries to
load a model whose weights are in fp16, since it'd require twice as much memory. To overcome this limitation, you can
either explicitly pass the desired dtype using the torch_dtype argument:
python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
or, if you want the model to always be loaded in the most optimal memory pattern, you can use the special value "auto",
and then the dtype will be automatically derived from the model's weights:
python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
Models instantiated from scratch can also be told which dtype to use with:
python
config = T5Config.from_pretrained("t5")
model = AutoModel.from_config(config)
Due to PyTorch's design, this functionality is only available for floating dtypes.
ModuleUtilsMixin
[[autodoc]] modeling_utils.ModuleUtilsMixin
TFPreTrainedModel
[[autodoc]] TFPreTrainedModel
- push_to_hub
- all
TFModelUtilsMixin
[[autodoc]] modeling_tf_utils.TFModelUtilsMixin
FlaxPreTrainedModel
[[autodoc]] FlaxPreTrainedModel
- push_to_hub
- all
Pushing to the Hub
[[autodoc]] utils.PushToHubMixin
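A minimal usage sketch (the repository name below is a placeholder and you need to be authenticated with the Hub):
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
# Creates (or updates) the repository under your namespace and uploads the weights and configuration.
model.push_to_hub("my-finetuned-bert")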
Sharded checkpoints
[[autodoc]] modeling_utils.load_sharded_checkpoint
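A minimal sketch of producing a sharded checkpoint with save_pretrained and loading it back with this helper (the folder name and shard size below are placeholders):
from transformers import AutoModel
from transformers.modeling_utils import load_sharded_checkpoint

model = AutoModel.from_pretrained("bert-base-uncased")
# Split the checkpoint into shards of at most ~200MB each, alongside an index file.
model.save_pretrained("./sharded-bert", max_shard_size="200MB")

# Load the sharded state dict back into a freshly created model.
new_model = AutoModel.from_pretrained("bert-base-uncased")
load_sharded_checkpoint(new_model, "./sharded-bert")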
FlauBERT
Overview
The FlauBERT model was proposed in the paper FlauBERT: Unsupervised Language Model Pre-training for French by Hang Le et al. It’s a transformer model pretrained using a masked language
modeling (MLM) objective (like BERT).
The abstract from the paper is the following:
Language models have become a key step to achieve state-of-the art results in many different Natural Language
Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way
to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their
contextualization at the sentence level. This has been widely demonstrated for English using contextualized
representations (Dai and Le, 2015; Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al.,
2019; Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and
heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for
Scientific Research) Jean Zay supercomputer. We apply our French language models to diverse NLP tasks (text
classification, paraphrasing, natural language inference, parsing, word sense disambiguation) and show that most of the
time they outperform other pretraining approaches. Different versions of FlauBERT as well as a unified evaluation
protocol for the downstream tasks, called FLUE (French Language Understanding Evaluation), are shared to the research
community for further reproducible experiments in French NLP.
This model was contributed by formiel. The original code can be found here.
Tips:
Like RoBERTa, FlauBERT is trained without the sentence order prediction objective (it is trained on the MLM objective only).
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
FlaubertConfig
class transformers.FlaubertConfig
(
pre_norm = False
layerdrop = 0.0
vocab_size = 30145
emb_dim = 2048
n_layers = 12
n_heads = 16
dropout = 0.1
attention_dropout = 0.1
gelu_activation = True
sinusoidal_embeddings = False
causal = False
asm = False
n_langs = 1
use_lang_emb = True
max_position_embeddings = 512
embed_init_std = 0.02209708691207961
layer_norm_eps = 1e-12
init_std = 0.02
bos_index = 0
eos_index = 1
pad_index = 2
unk_index = 3
mask_index = 5
is_encoder = True
summary_type = 'first'
summary_use_proj = True
summary_activation = None
summary_proj_to_labels = True
summary_first_dropout = 0.1
start_n_top = 5
end_n_top = 5
mask_token_id = 0
lang_id = 0
pad_token_id = 2
bos_token_id = 0
**kwargs
)
Parameters
pre_norm (bool, optional, defaults to False) —
Whether to apply the layer normalization before or after the feed forward layer following the attention in
each layer (Vaswani et al., Tensor2Tensor for Neural Machine Translation. 2018)
layerdrop (float, optional, defaults to 0.0) —
Probability to drop layers during training (Fan et al., Reducing Transformer Depth on Demand with
Structured Dropout. ICLR 2020)
vocab_size (int, optional, defaults to 30145) —
Vocabulary size of the FlauBERT model. Defines the number of different tokens that can be represented by
the input_ids passed when calling FlaubertModel or TFFlaubertModel.
emb_dim (int, optional, defaults to 2048) —
Dimensionality of the encoder layers and the pooler layer.
n_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
n_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout probability for the attention mechanism
gelu_activation (bool, optional, defaults to True) —
Whether or not to use a gelu activation instead of relu.
sinusoidal_embeddings (bool, optional, defaults to False) —
Whether or not to use sinusoidal positional embeddings instead of absolute positional embeddings.
causal (bool, optional, defaults to False) —
Whether or not the model should behave in a causal manner. Causal models use a triangular attention mask in
order to only attend to the left-side context instead of a bidirectional context.
asm (bool, optional, defaults to False) —
Whether or not to use an adaptive log softmax projection layer instead of a linear layer for the prediction
layer.
n_langs (int, optional, defaults to 1) —
The number of languages the model handles. Set to 1 for monolingual models.
use_lang_emb (bool, optional, defaults to True) —
Whether to use language embeddings. Some models use additional language embeddings, see the multilingual
models page for information
on how to use them.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
embed_init_std (float, optional, defaults to 2048^-0.5) —
The standard deviation of the truncated_normal_initializer for initializing the embedding matrices.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices except the
embedding matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
bos_index (int, optional, defaults to 0) —
The index of the beginning of sentence token in the vocabulary.
eos_index (int, optional, defaults to 1) —
The index of the end of sentence token in the vocabulary.
pad_index (int, optional, defaults to 2) —
The index of the padding token in the vocabulary.
unk_index (int, optional, defaults to 3) —
The index of the unknown token in the vocabulary.
mask_index (int, optional, defaults to 5) —
The index of the masking token in the vocabulary.
is_encoder (bool, optional, defaults to True) —
Whether or not the initialized model should be a transformer encoder or decoder as seen in Vaswani et al.
summary_type (str, optional, defaults to "first") —
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Has to be one of the following options:
"last": Take the last token hidden state (like XLNet).
"first": Take the first token hidden state (like BERT).
"mean": Take the mean of all tokens hidden states.
"cls_index": Supply a Tensor of classification token position (like GPT/GPT-2).
"attn": Not implemented now, use multi-head attention.
summary_use_proj (bool, optional, defaults to True) —
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Whether or not to add a projection after the vector extraction.
summary_activation (str, optional) —
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Pass "tanh" for a tanh activation to the output, any other value will result in no activation.
summary_proj_to_labels (bool, optional, defaults to True) —
Used in the sequence classification and multiple choice models.
Whether the projection outputs should have config.num_labels or config.hidden_size classes.
summary_first_dropout (float, optional, defaults to 0.1) —
Used in the sequence classification and multiple choice models.
The dropout ratio to be used after the projection and activation.
start_n_top (int, optional, defaults to 5) —
Used in the SQuAD evaluation script.
end_n_top (int, optional, defaults to 5) —
Used in the SQuAD evaluation script.
mask_token_id (int, optional, defaults to 0) —
Model agnostic parameter to identify masked tokens when generating text in an MLM context.
lang_id (int, optional, defaults to 0) —
The ID of the language used by the model. This parameter is used when generating text in a given language.
This is the configuration class to store the configuration of a FlaubertModel or a TFFlaubertModel. It is
used to instantiate a FlauBERT model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the FlauBERT
flaubert/flaubert_base_uncased architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
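A short usage sketch, following the usual pattern for Transformers configuration classes:
from transformers import FlaubertConfig, FlaubertModel

# Initialize a configuration with default values (similar to flaubert/flaubert_base_uncased).
configuration = FlaubertConfig()

# Initialize a model with random weights from that configuration.
model = FlaubertModel(configuration)

# The configuration can be accessed back from the model.
configuration = model.config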
FlaubertTokenizer
class transformers.FlaubertTokenizer
(
vocab_file
merges_file
do_lowercase = False
unk_token = '<unk>'
bos_token = '<s>'
sep_token = '</s>'
pad_token = '<pad>'
cls_token = '</s>'
mask_token = '<special1>'
additional_special_tokens = ['<special0>', '<special1>', '<special2>', '<special3>', '<special4>', '<special5>', '<special6>', '<special7>', '<special8>', '<special9>']
lang2id = None
id2lang = None
**kwargs
)
Parameters
vocab_file (str) —
Vocabulary file.
merges_file (str) —
Merges file.
do_lowercase (bool, optional, defaults to False) —
Controls lower casing.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "</s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "<special1>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
additional_special_tokens (List[str], optional, defaults to ["<special0>","<special1>","<special2>","<special3>","<special4>","<special5>","<special6>","<special7>","<special8>","<special9>"]) —
List of additional special tokens.
lang2id (Dict[str, int], optional) —
Dictionary mapping languages string identifiers to their IDs.
id2lang (Dict[int, str], optional) —
Dictionary mapping language IDs to their string identifiers.
Construct a Flaubert tokenizer. Based on Byte-Pair Encoding. The tokenization process is the following:
- Moses preprocessing and tokenization.
- Normalizing all input text.
- The argument special_tokens and the function set_special_tokens can be used to add additional symbols (like "classify") to a vocabulary.
- The argument do_lowercase controls lower casing (automatically set for pretrained vocabularies).
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
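A minimal usage sketch, using the same checkpoint as the examples further down this page:
from transformers import FlaubertTokenizer

tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
encoding = tokenizer("Bonjour, comment allez-vous ?")
print(encoding["input_ids"])
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))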
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. An XLM sequence has the following format:
- single sequence: <s> X </s>
- pair of sequences: <s> A </s> B </s>
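For illustration, a sketch calling the method directly on two encoded sequences (placeholder French inputs):
from transformers import FlaubertTokenizer

tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
ids_a = tokenizer.encode("Bonjour", add_special_tokens=False)
ids_b = tokenizer.encode("le monde", add_special_tokens=False)

# Single sequence: <s> X </s>
print(tokenizer.build_inputs_with_special_tokens(ids_a))
# Pair of sequences: <s> A </s> B </s>
print(tokenizer.build_inputs_with_special_tokens(ids_a, ids_b))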
convert_tokens_to_string
(
tokens
)
Converts a sequence of tokens (strings) into a single string.
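For example, a short sketch with a placeholder French sentence:
from transformers import FlaubertTokenizer

tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
tokens = tokenizer.tokenize("Bonjour le monde")
# Joins the BPE tokens back together, stripping the end-of-word markers.
print(tokenizer.convert_tokens_to_string(tokens))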
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLM sequence
pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
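A sketch showing the mask returned for a pair of sequences (placeholder inputs):
from transformers import FlaubertTokenizer

tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
ids_a = tokenizer.encode("Bonjour", add_special_tokens=False)
ids_b = tokenizer.encode("le monde", add_special_tokens=False)

# 0s cover the first sequence (and its special tokens), 1s cover the second sequence.
print(tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b))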
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
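A sketch of both ways of calling it, on raw IDs and on an already formatted sequence (placeholder input):
from transformers import FlaubertTokenizer

tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
ids = tokenizer.encode("Bonjour le monde", add_special_tokens=False)

# Mask computed as if special tokens were about to be added around `ids`.
print(tokenizer.get_special_tokens_mask(ids))

# Mask for a sequence that already contains the special tokens.
full_ids = tokenizer.encode("Bonjour le monde")  # adds <s> ... </s>
print(tokenizer.get_special_tokens_mask(full_ids, already_has_special_tokens=True))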
FlaubertModel
class transformers.FlaubertModel
(
config
)
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
langs: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
lengths: typing.Optional[torch.LongTensor] = None
cache: typing.Union[typing.Dict[str, torch.FloatTensor], NoneType] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (torch.LongTensor of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)]:
cache (Dict[str, torch.FloatTensor], optional) —
Dictionary strings to torch.FloatTensor that contains precomputed hidden-states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding. The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FlaubertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaubertModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaubertModel
import torch
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertModel.from_pretrained("flaubert/flaubert_base_cased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
FlaubertWithLMHeadModel
class transformers.FlaubertWithLMHeadModel
(
config
)
Parameters
config (FlaubertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The Flaubert Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
langs: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
lengths: typing.Optional[torch.Tensor] = None
cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (torch.LongTensor of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)]:
cache (Dict[str, torch.FloatTensor], optional) —
Dictionary strings to torch.FloatTensor that contains precomputed hidden-states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding. The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids Indices are selected in [-100, 0, ..., config.vocab_size] All labels set to -100
are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size]
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FlaubertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaubertWithLMHeadModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaubertWithLMHeadModel
import torch
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertWithLMHeadModel.from_pretrained("flaubert/flaubert_base_cased")
inputs = tokenizer("The capital of France is <special1>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of <special1>
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-<special1> tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
FlaubertForSequenceClassification
class transformers.FlaubertForSequenceClassification
(
config
)
Parameters
config (FlaubertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Flaubert Model with a sequence classification/regression head on top (a linear layer on top of the pooled output)
e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
langs: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
lengths: typing.Optional[torch.Tensor] = None
cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (torch.LongTensor of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)]:
cache (Dict[str, torch.FloatTensor], optional) —
Dictionary strings to torch.FloatTensor that contains precomputed hidden-states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding. The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FlaubertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaubertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, FlaubertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertForSequenceClassification.from_pretrained("flaubert/flaubert_base_cased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = FlaubertForSequenceClassification.from_pretrained("flaubert/flaubert_base_cased", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, FlaubertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertForSequenceClassification.from_pretrained("flaubert/flaubert_base_cased", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = FlaubertForSequenceClassification.from_pretrained(
    "flaubert/flaubert_base_cased", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
FlaubertForMultipleChoice
class transformers.FlaubertForMultipleChoice
(
config
*inputs
**kwargs
)
Parameters
config (FlaubertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Flaubert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
langs: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
lengths: typing.Optional[torch.Tensor] = None
cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (torch.LongTensor of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)]:
cache (Dict[str, torch.FloatTensor], optional) —
Dictionary strings to torch.FloatTensor that contains precomputed hidden-states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding. The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FlaubertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaubertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaubertForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertForMultipleChoice.from_pretrained("flaubert/flaubert_base_cased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
FlaubertForTokenClassification
class transformers.FlaubertForTokenClassification
(
config
)
Parameters
config (FlaubertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Flaubert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
langs: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
lengths: typing.Optional[torch.Tensor] = None
cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (torch.LongTensor of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)]:
cache (Dict[str, torch.FloatTensor], optional) —
Dictionary strings to torch.FloatTensor that contains precomputed hidden-states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding. The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FlaubertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaubertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaubertForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertForTokenClassification.from_pretrained("flaubert/flaubert_base_cased")
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
FlaubertForQuestionAnsweringSimple
class transformers.FlaubertForQuestionAnsweringSimple
(
config
)
Parameters
config (FlaubertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Flaubert Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
langs: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
lengths: typing.Optional[torch.Tensor] = None
cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (torch.LongTensor of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)]:
cache (Dict[str, torch.FloatTensor], optional) —
Dictionary of strings to torch.FloatTensor that contains precomputed hidden-states (keys and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding. The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FlaubertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaubertForQuestionAnsweringSimple forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaubertForQuestionAnsweringSimple
import torch
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertForQuestionAnsweringSimple.from_pretrained("flaubert/flaubert_base_cased")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
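To turn the predicted token span back into a text answer, the span can be decoded with the tokenizer. This is a minimal follow-up sketch using the variables from the example above (not part of the original snippet):
# Decode the predicted answer span into a string (continues the example above).
answer = tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
print(answer)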
FlaubertForQuestionAnswering
class transformers.FlaubertForQuestionAnswering
( config )
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
langs: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
lengths: typing.Optional[torch.Tensor] = None
cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
is_impossible: typing.Optional[torch.Tensor] = None
cls_index: typing.Optional[torch.Tensor] = None
p_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.flaubert.modeling_flaubert.FlaubertForQuestionAnsweringOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Returns
transformers.models.flaubert.modeling_flaubert.FlaubertForQuestionAnsweringOutput or tuple(torch.FloatTensor)
A transformers.models.flaubert.modeling_flaubert.FlaubertForQuestionAnsweringOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FlaubertConfig) and inputs.
config (FlaubertConfig): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The FlaubertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Base class for outputs of question answering models using a SquadHead.
Example:
from transformers import XLMTokenizer, XLMForQuestionAnswering
import torch
tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")
model = XLMForQuestionAnswering.from_pretrained("xlm-mlm-en-2048")
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(input_ids, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
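The snippet above is inherited from the XLM documentation. A hedged sketch of the equivalent call with the Flaubert classes (assuming the flaubert/flaubert_base_cased checkpoint; the span labels are illustrative only) could look as follows:
from transformers import AutoTokenizer, FlaubertForQuestionAnswering
import torch

tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertForQuestionAnswering.from_pretrained("flaubert/flaubert_base_cased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
# Illustrative span labels; in practice these come from your QA dataset.
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss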
TFFlaubertModel
class transformers.TFFlaubertModel
( *args **kwargs )
Parameters
config (FlaubertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Flaubert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
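As a hedged illustration of these three call styles (assuming the flaubert/flaubert_base_cased checkpoint and the tokenizer used in the examples below):
from transformers import AutoTokenizer, TFFlaubertModel

tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = TFFlaubertModel.from_pretrained("flaubert/flaubert_base_cased")
encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. All inputs as keyword arguments (like PyTorch models)
outputs = model(input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"])
# 2. A list with the tensors in the order given in the docstring
outputs = model([encoding["input_ids"], encoding["attention_mask"]])
# 3. A dictionary mapping input names to tensors
outputs = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})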
call
(
input_ids: np.ndarray | tf.Tensor | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
langs: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
lengths: np.ndarray | tf.Tensor | None = None
cache: Optional[Dict[str, tf.Tensor]] = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
languages ids which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)]:
cache (Dict[str, tf.Tensor], optional) —
Dictionary string to tf.FloatTensor that contains precomputed hidden states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (FlaubertConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFFlaubertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFFlaubertModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = TFFlaubertModel.from_pretrained("flaubert/flaubert_base_cased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFFlaubertWithLMHeadModel
class transformers.TFFlaubertWithLMHeadModel
( *args **kwargs )
Parameters
config (FlaubertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The Flaubert Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: np.ndarray | tf.Tensor | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
langs: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
lengths: np.ndarray | tf.Tensor | None = None
cache: Optional[Dict[str, tf.Tensor]] = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.models.flaubert.modeling_tf_flaubert.TFFlaubertWithLMHeadModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
languages ids which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)]:
cache (Dict[str, tf.Tensor], optional) —
Dictionary string to tf.FloatTensor that contains precomputed hidden states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.flaubert.modeling_tf_flaubert.TFFlaubertWithLMHeadModelOutput or tuple(tf.Tensor)
A transformers.models.flaubert.modeling_tf_flaubert.TFFlaubertWithLMHeadModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (FlaubertConfig) and inputs.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFFlaubertWithLMHeadModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFFlaubertWithLMHeadModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = TFFlaubertWithLMHeadModel.from_pretrained("flaubert/flaubert_base_cased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
logits = outputs.logits
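As a hedged follow-up (not part of the original example), the highest-scoring vocabulary id at each position can be mapped back to tokens:
# Greedy per-position predictions from the language modeling head (continues the example above).
predicted_ids = tf.math.argmax(logits, axis=-1)
predicted_tokens = tokenizer.convert_ids_to_tokens(predicted_ids[0].numpy().tolist())
print(predicted_tokens)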
TFFlaubertForSequenceClassification
class transformers.TFFlaubertForSequenceClassification
( *args **kwargs )
Parameters
config (FlaubertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Flaubert Model with a sequence classification/regression head on top (a linear layer on top of the pooled output)
e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
langs: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
lengths: np.ndarray | tf.Tensor | None = None
cache: Optional[Dict[str, tf.Tensor]] = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
languages ids which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)]:
cache (Dict[str, tf.Tensor], optional) —
Dictionary string to tf.FloatTensor that contains precomputed hidden states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (FlaubertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFFlaubertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFFlaubertForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = TFFlaubertForSequenceClassification.from_pretrained("flaubert/flaubert_base_cased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFFlaubertForSequenceClassification.from_pretrained("flaubert/flaubert_base_cased", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
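Since the model is a standard tf.keras.Model, it can also be fine-tuned with compile()/fit(). The sketch below is illustrative only: the toy texts, labels, and hyperparameters are assumptions, and the loss is computed internally by the model when labels are provided.
import tensorflow as tf
from transformers import AutoTokenizer, TFFlaubertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = TFFlaubertForSequenceClassification.from_pretrained("flaubert/flaubert_base_cased", num_labels=2)

# Toy data for illustration; replace with your own dataset.
texts = ["J'adore ce film.", "Je n'ai pas aimé ce film."]
labels = [1, 0]
encodings = tokenizer(texts, padding=True, return_tensors="tf")
dataset = tf.data.Dataset.from_tensor_slices((dict(encodings), labels)).batch(2)

# No explicit loss: the model falls back to its internal loss when labels are present.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5))
model.fit(dataset, epochs=1)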
TFFlaubertForMultipleChoice
class transformers.TFFlaubertForMultipleChoice
( *args **kwargs )
Parameters
config (FlaubertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Flaubert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
langs: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
lengths: np.ndarray | tf.Tensor | None = None
cache: Optional[Dict[str, tf.Tensor]] = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
languages ids which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)]:
cache (Dict[str, tf.Tensor], optional) —
Dictionary string to tf.FloatTensor that contains precomputed hidden states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (FlaubertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFFlaubertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFFlaubertForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = TFFlaubertForMultipleChoice.from_pretrained("flaubert/flaubert_base_cased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
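To read off which choice the classifier currently prefers (the head is untrained here, so the result is not meaningful), a minimal follow-up using the variables above:
# Index of the highest-scoring choice: 0 -> choice0, 1 -> choice1 (continues the example above).
predicted_choice = int(tf.math.argmax(outputs.logits, axis=-1)[0])
print(predicted_choice)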
TFFlaubertForTokenClassification
class transformers.TFFlaubertForTokenClassification
( *args **kwargs )
Parameters
config (FlaubertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Flaubert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
langs: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
lengths: np.ndarray | tf.Tensor | None = None
cache: Optional[Dict[str, tf.Tensor]] = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
languages ids which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)]:
cache (Dict[str, tf.Tensor], optional) —
Dictionary string to tf.FloatTensor that contains precomputed hidden states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (FlaubertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFFlaubertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFFlaubertForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = TFFlaubertForTokenClassification.from_pretrained("flaubert/flaubert_base_cased")
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
)
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word.
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
TFFlaubertForQuestionAnsweringSimple
class transformers.TFFlaubertForQuestionAnsweringSimple
( *args **kwargs )
Parameters
config (FlaubertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Flaubert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
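For example, a minimal sketch of the three equivalent calling conventions, using the flaubert/flaubert_base_cased checkpoint from the examples below:
from transformers import AutoTokenizer, TFFlaubertForQuestionAnsweringSimple
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = TFFlaubertForQuestionAnsweringSimple.from_pretrained("flaubert/flaubert_base_cased")
encoded = tokenizer("Qui était Jim Henson ?", return_tensors="tf")
# 1. all inputs as keyword arguments (like PyTorch models)
outputs = model(input_ids=encoded["input_ids"], attention_mask=encoded["attention_mask"])
# 2. all inputs as a list in the first positional argument, in the order given in the docstring
outputs = model([encoded["input_ids"], encoded["attention_mask"]])
# 3. all inputs as a dictionary keyed by the input names given in the docstring
outputs = model({"input_ids": encoded["input_ids"], "attention_mask": encoded["attention_mask"]})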
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
langs: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
lengths: np.ndarray | tf.Tensor | None = None
cache: Optional[Dict[str, tf.Tensor]] = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
start_positions: np.ndarray | tf.Tensor | None = None
end_positions: np.ndarray | tf.Tensor | None = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
languages ids which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above); kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)].
cache (Dict[str, tf.Tensor], optional) —
Dictionary string to tf.FloatTensor that contains precomputed hidden states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can only be used in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (FlaubertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFFlaubertForQuestionAnsweringSimple forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFFlaubertForQuestionAnsweringSimple
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = TFFlaubertForQuestionAnsweringSimple.from_pretrained("flaubert/flaubert_base_cased")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
Wav2Vec2
Overview
The Wav2Vec2 model was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
The abstract from the paper is the following:
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on
transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks
the speech input in the latent space and solves a contrastive task defined over a quantization of the latent
representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the
clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state
of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and
pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech
recognition with limited amounts of labeled data.
Tips:
Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
The Wav2Vec2 model was trained using connectionist temporal classification (CTC), so the model output has to be
decoded using Wav2Vec2CTCTokenizer (see the sketch below).
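A minimal sketch of this flow, using the facebook/wav2vec2-base-960h checkpoint mentioned below; the zero waveform only stands in for real 16 kHz audio loaded elsewhere:
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
# stand-in for a 16 kHz mono waveform (e.g. loaded with the datasets library)
raw_speech = np.zeros(16_000, dtype=np.float32)
inputs = processor(raw_speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
# greedy CTC decoding: argmax over the vocabulary, then collapse repeated tokens and blanks
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]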
This model was contributed by patrickvonplaten.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Wav2Vec2. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Audio Classification
A notebook on how to leverage a pretrained Wav2Vec2 model for emotion classification. 🌎
Wav2Vec2ForCTC is supported by this example script and notebook.
Audio classification task guide
Automatic Speech Recognition
A blog post on boosting Wav2Vec2 with n-grams in 🤗 Transformers.
A blog post on how to finetune Wav2Vec2 for English ASR with 🤗 Transformers.
A blog post on finetuning XLS-R for Multi-Lingual ASR with 🤗 Transformers.
A notebook on how to create YouTube captions from any video by transcribing audio with Wav2Vec2. 🌎
Wav2Vec2ForCTC is supported by a notebook on how to finetune a speech recognition model in English, and how to finetune a speech recognition model in any language.
Automatic speech recognition task guide
🚀 Deploy
A blog post on how to deploy Wav2Vec2 for Automatic Speech Recognition with Hugging Face’s Transformers & Amazon SageMaker.
Wav2Vec2Config
class transformers.Wav2Vec2Config
(
vocab_size = 32
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout = 0.1
activation_dropout = 0.1
attention_dropout = 0.1
feat_proj_dropout = 0.0
feat_quantizer_dropout = 0.0
final_dropout = 0.1
layerdrop = 0.1
initializer_range = 0.02
layer_norm_eps = 1e-05
feat_extract_norm = 'group'
feat_extract_activation = 'gelu'
conv_dim = (512, 512, 512, 512, 512, 512, 512)
conv_stride = (5, 2, 2, 2, 2, 2, 2)
conv_kernel = (10, 3, 3, 3, 3, 2, 2)
conv_bias = False
num_conv_pos_embeddings = 128
num_conv_pos_embedding_groups = 16
do_stable_layer_norm = False
apply_spec_augment = True
mask_time_prob = 0.05
mask_time_length = 10
mask_time_min_masks = 2
mask_feature_prob = 0.0
mask_feature_length = 10
mask_feature_min_masks = 0
num_codevectors_per_group = 320
num_codevector_groups = 2
contrastive_logits_temperature = 0.1
num_negatives = 100
codevector_dim = 256
proj_codevector_dim = 256
diversity_loss_weight = 0.1
ctc_loss_reduction = 'sum'
ctc_zero_infinity = False
use_weighted_layer_sum = False
classifier_proj_size = 256
tdnn_dim = (512, 512, 512, 512, 1500)
tdnn_kernel = (5, 3, 3, 1, 1)
tdnn_dilation = (1, 2, 3, 1, 1)
xvector_output_dim = 512
pad_token_id = 0
bos_token_id = 1
eos_token_id = 2
add_adapter = False
adapter_kernel_size = 3
adapter_stride = 2
num_adapter_layers = 3
output_hidden_size = None
adapter_attn_dim = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 32) —
Vocabulary size of the Wav2Vec2 model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling Wav2Vec2Model or TFWav2Vec2Model.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
final_dropout (float, optional, defaults to 0.1) —
The dropout probability for the final projection layer of Wav2Vec2ForCTC.
layerdrop (float, optional, defaults to 0.1) —
The LayerDrop probability. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more
details.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-05) —
The epsilon used by the layer normalization layers.
feat_extract_norm (str, optional, defaults to "group") —
The norm to be applied to 1D convolutional layers in feature encoder. One of "group" for group
normalization of only the first 1D convolutional layer or "layer" for layer normalization of all 1D
convolutional layers.
feat_proj_dropout (float, optional, defaults to 0.0) —
The dropout probability for output of the feature encoder.
feat_extract_activation (str, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the 1D convolutional layers of the feature
extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
feat_quantizer_dropout (float, optional, defaults to 0.0) —
The dropout probability for quantized feature encoder states.
conv_dim (Tuple[int] or List[int], optional, defaults to (512, 512, 512, 512, 512, 512, 512)) —
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
feature encoder. The length of conv_dim defines the number of 1D convolutional layers.
conv_stride (Tuple[int] or List[int], optional, defaults to (5, 2, 2, 2, 2, 2, 2)) —
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.
conv_kernel (Tuple[int] or List[int], optional, defaults to (10, 3, 3, 3, 3, 2, 2)) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
length of conv_kernel defines the number of convolutional layers and has to match the length of
conv_dim.
conv_bias (bool, optional, defaults to False) —
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (int, optional, defaults to 128) —
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.
num_conv_pos_embedding_groups (int, optional, defaults to 16) —
Number of groups of 1D convolutional positional embeddings layer.
do_stable_layer_norm (bool, optional, defaults to False) —
Whether to apply the stable layer norm architecture of the Transformer encoder. do_stable_layer_norm is True corresponds to applying layer norm before the attention layer, whereas do_stable_layer_norm is False corresponds to applying layer norm after the attention layer.
apply_spec_augment (bool, optional, defaults to True) —
Whether to apply SpecAugment data augmentation to the outputs of the feature encoder. For reference see
SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition.
mask_time_prob (float, optional, defaults to 0.05) —
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
procedure generates mask_time_prob * len(time_axis) / mask_time_length independent masks over the axis. If
reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
masked, mask_time_prob should be prob_vector_start * mask_time_length. Note that overlap may decrease the
actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_time_length (int, optional, defaults to 10) —
Length of vector span along the time axis.
mask_time_min_masks (int, optional, defaults to 2) —
The minimum number of masks of length mask_time_length generated along the time axis, each time step,
irrespectively of mask_time_prob. Only relevant if
mask_time_prob * len(time_axis) / mask_time_length < mask_time_min_masks.
mask_feature_prob (float, optional, defaults to 0.0) —
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
masking procedure generates mask_feature_prob * len(feature_axis) / mask_feature_length independent masks
over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the
vector span to be masked, mask_feature_prob should be prob_vector_start * mask_feature_length. Note that
overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment
is True.
mask_feature_length (int, optional, defaults to 10) —
Length of vector span along the feature axis.
mask_feature_min_masks (int, optional, defaults to 0) —
The minimum number of masks of length mask_feature_length generated along the feature axis, each time
step, irrespectively of mask_feature_prob. Only relevant if
mask_feature_prob * len(feature_axis) / mask_feature_length < mask_feature_min_masks.
num_codevectors_per_group (int, optional, defaults to 320) —
Number of entries in each quantization codebook (group).
num_codevector_groups (int, optional, defaults to 2) —
Number of codevector groups for product codevector quantization.
contrastive_logits_temperature (float, optional, defaults to 0.1) —
The temperature kappa in the contrastive loss.
feat_quantizer_dropout (float, optional, defaults to 0.0) —
The dropout probability for the output of the feature encoder that’s used by the quantizer.
num_negatives (int, optional, defaults to 100) —
Number of negative samples for the contrastive loss.
codevector_dim (int, optional, defaults to 256) —
Dimensionality of the quantized feature vectors.
proj_codevector_dim (int, optional, defaults to 256) —
Dimensionality of the final projection of both the quantized and the transformer features.
diversity_loss_weight (float, optional, defaults to 0.1) —
The weight of the codebook diversity loss component.
ctc_loss_reduction (str, optional, defaults to "sum") —
Specifies the reduction to apply to the output of torch.nn.CTCLoss. Only relevant when training an
instance of Wav2Vec2ForCTC.
ctc_zero_infinity (bool, optional, defaults to False) —
Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of Wav2Vec2ForCTC.
use_weighted_layer_sum (bool, optional, defaults to False) —
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of Wav2Vec2ForSequenceClassification.
classifier_proj_size (int, optional, defaults to 256) —
Dimensionality of the projection before token mean-pooling for classification.
tdnn_dim (Tuple[int] or List[int], optional, defaults to (512, 512, 512, 512, 1500)) —
A tuple of integers defining the number of output channels of each 1D convolutional layer in the TDNN
module of the XVector model. The length of tdnn_dim defines the number of TDNN layers.
tdnn_kernel (Tuple[int] or List[int], optional, defaults to (5, 3, 3, 1, 1)) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the TDNN module of the
XVector model. The length of tdnn_kernel has to match the length of tdnn_dim.
tdnn_dilation (Tuple[int] or List[int], optional, defaults to (1, 2, 3, 1, 1)) —
A tuple of integers defining the dilation factor of each 1D convolutional layer in TDNN module of the
XVector model. The length of tdnn_dilation has to match the length of tdnn_dim.
xvector_output_dim (int, optional, defaults to 512) —
Dimensionality of the XVector embedding vectors.
add_adapter (bool, optional, defaults to False) —
Whether a convolutional network should be stacked on top of the Wav2Vec2 Encoder. Can be very useful for
warm-starting Wav2Vec2 for SpeechEncoderDecoder models.
adapter_kernel_size (int, optional, defaults to 3) —
Kernel size of the convolutional layers in the adapter network. Only relevant if add_adapter is True.
adapter_stride (int, optional, defaults to 2) —
Stride of the convolutional layers in the adapter network. Only relevant if add_adapter is True.
num_adapter_layers (int, optional, defaults to 3) —
Number of convolutional layers that should be used in the adapter network. Only relevant if add_adapter is True.
adapter_attn_dim (int, optional) —
Dimension of the attention adapter weights to be used in each attention block. An example of a model using
attention adapters is facebook/mms-1b-all.
output_hidden_size (int, optional) —
Dimensionality of the encoder output layer. If not defined, this defaults to hidden_size. Only relevant
if add_adapter is True.
This is the configuration class to store the configuration of a Wav2Vec2Model. It is used to instantiate a
Wav2Vec2 model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Wav2Vec2
facebook/wav2vec2-base-960h architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import Wav2Vec2Config, Wav2Vec2Model
# Initializing a Wav2Vec2 facebook/wav2vec2-base-960h style configuration
configuration = Wav2Vec2Config()
# Initializing a model (with random weights) from the facebook/wav2vec2-base-960h style configuration
model = Wav2Vec2Model(configuration)
# Accessing the model configuration
configuration = model.config
Wav2Vec2CTCTokenizer
class transformers.Wav2Vec2CTCTokenizer
(
vocab_file
bos_token = '<s>'
eos_token = '</s>'
unk_token = '<unk>'
pad_token = '<pad>'
word_delimiter_token = '|'
replace_word_delimiter_char = ' '
do_lower_case = False
target_lang = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
bos_token (str, optional, defaults to "<s>") —
The beginning of sentence token.
eos_token (str, optional, defaults to "</s>") —
The end of sentence token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
word_delimiter_token (str, optional, defaults to "|") —
The token used for defining the end of a word.
do_lower_case (bool, optional, defaults to False) —
Whether or not to accept lowercase input and lowercase the output when decoding.
target_lang (str, optional) —
A target language the tokenizer should set by default. target_lang has to be defined for multi-lingual,
nested vocabulary such as facebook/mms-1b-all.
**kwargs —
Additional keyword arguments passed along to PreTrainedTokenizer
Constructs a Wav2Vec2CTC tokenizer.
This tokenizer inherits from PreTrainedTokenizer which contains some of the main methods. Users should refer to
the superclass for more information regarding such methods.
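For instance, a minimal sketch of constructing the tokenizer, assuming ./vocab.json is a hypothetical character-level vocabulary file you created yourself (as is typically done when fine-tuning Wav2Vec2 for CTC):
from transformers import Wav2Vec2CTCTokenizer
# hypothetical vocabulary file, e.g. {"<pad>": 0, "<s>": 1, "</s>": 2, "<unk>": 3, "|": 4, "A": 5, ...}
tokenizer = Wav2Vec2CTCTokenizer(
    "./vocab.json",
    unk_token="<unk>",
    pad_token="<pad>",
    word_delimiter_token="|",
)
# or simply reuse the tokenizer files of an existing checkpoint
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")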
__call__
(
text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None
text_pair: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None
text_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None
text_pair_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
is_split_into_words: bool = False
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
**kwargs
)
→
BatchEncoding
Parameters
text (str, List[str], List[List[str]], optional) —
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair (str, List[str], List[List[str]], optional) —
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_target (str, List[str], List[List[str]], optional) —
The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a
list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized),
you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair_target (str, List[str], List[List[str]], optional) —
The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a
list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized),
you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
add_special_tokens (bool, optional, defaults to True) —
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) —
If set to a number along with max_length, the overflowing tokens returned when
return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) —
Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the
tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace)
which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value. Requires padding to be activated.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5 (Volta).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) —
Whether to return token type IDs. If left to the default, will return the token type IDs according to
the specific tokenizer’s default, defined by the return_outputs attribute.
What are token type IDs?
return_attention_mask (bool, optional) —
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_overflowing_tokens (bool, optional, defaults to False) —
Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch
of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead
of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) —
Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) —
Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast, if using
Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) —
Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) —
Whether or not to print more information and warnings.
**kwargs — passed to the self.tokenize() method
Returns
BatchEncoding
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
What are input IDs?
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or
if “token_type_ids” is in self.model_input_names).
What are token type IDs?
attention_mask — List of indices specifying which tokens should be attended to by the model (when
return_attention_mask=True or if “attention_mask” is in self.model_input_names).
What are attention masks?
overflowing_tokens — List of overflowing tokens sequences (when a max_length is specified and
return_overflowing_tokens=True).
num_truncated_tokens — Number of tokens truncated (when a max_length is specified and
return_overflowing_tokens=True).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying
regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length — The length of the inputs (when return_length=True)
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences.
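A minimal sketch of encoding a transcription as CTC labels with the facebook/wav2vec2-base-960h tokenizer, whose vocabulary is uppercase and character-level with "|" as the word delimiter:
from transformers import Wav2Vec2CTCTokenizer
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")
encoding = tokenizer("HELLO WORLD")
labels = encoding.input_ids  # list of character ids, usable as CTC targets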
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
decode
(
token_ids: typing.Union[int, typing.List[int], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')]
skip_special_tokens: bool = False
clean_up_tokenization_spaces: bool = None
output_char_offsets: bool = False
output_word_offsets: bool = False
**kwargs
)
→
str or Wav2Vec2CTCTokenizerOutput
Parameters
token_ids (Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]) —
List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) —
Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) —
Whether or not to clean up the tokenization spaces.
output_char_offsets (bool, optional, defaults to False) —
Whether or not to output character offsets. Character offsets can be used in combination with the
sampling rate and model downsampling rate to compute the time-stamps of transcribed characters.
Please take a look at the example below to better understand how to make use of output_char_offsets.
output_word_offsets (bool, optional, defaults to False) —
Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate
and model downsampling rate to compute the time-stamps of transcribed words.
Please take a look at the example below to better understand how to make use of output_word_offsets.
kwargs (additional keyword arguments, optional) —
Will be passed to the underlying model specific decode method.
Returns
str or Wav2Vec2CTCTokenizerOutput
The list of decoded
sentences. Will be a Wav2Vec2CTCTokenizerOutput when
output_char_offsets == True or output_word_offsets == True.
Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special
tokens and clean up tokenization spaces.
Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).
Example:
# Let's see how to retrieve time steps for a model
from transformers import AutoTokenizer, AutoFeatureExtractor, AutoModelForCTC
from datasets import load_dataset
import datasets
import torch
# import model, feature extractor, tokenizer
model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")
tokenizer = AutoTokenizer.from_pretrained("facebook/wav2vec2-base-960h")
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
# load first sample of English common_voice
dataset = load_dataset("common_voice", "en", split="train", streaming=True)
dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000))
dataset_iter = iter(dataset)
sample = next(dataset_iter)
# forward sample through model to get greedily predicted transcription ids
input_values = feature_extractor(sample["audio"]["array"], return_tensors="pt").input_values
logits = model(input_values).logits[0]
pred_ids = torch.argmax(logits, axis=-1)
# retrieve word stamps (analogous commands for `output_char_offsets`)
outputs = tokenizer.decode(pred_ids, output_word_offsets=True)
# compute `time_offset` in seconds as product of downsampling ratio and sampling_rate
time_offset = model.config.inputs_to_logits_ratio / feature_extractor.sampling_rate
word_offsets = [
... {
... "word": d["word"],
... "start_time": round(d["start_offset"] * time_offset, 2),
... "end_time": round(d["end_offset"] * time_offset, 2),
... }
... for d in outputs.word_offsets
... ]
# compare word offsets with audio `common_voice_en_100038.mp3` online on the dataset viewer:
# https://huggingface.co/datasets/common_voice/viewer/en/train
word_offsets[:3]
[{'word': 'WHY', 'start_time': 1.42, 'end_time': 1.54}, {'word': 'DOES', 'start_time': 1.64, 'end_time': 1.9}, {'word': 'MILISANDRA', 'start_time': 2.26, 'end_time': 2.9}]
batch_decode
(
sequences: typing.Union[typing.List[int], typing.List[typing.List[int]], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')]
skip_special_tokens: bool = False
clean_up_tokenization_spaces: bool = None
output_char_offsets: bool = False
output_word_offsets: bool = False
**kwargs
)
→
List[str] or Wav2Vec2CTCTokenizerOutput
Parameters
sequences (Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]) —
List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) —
Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) —
Whether or not to clean up the tokenization spaces.
output_char_offsets (bool, optional, defaults to False) —
Whether or not to output character offsets. Character offsets can be used in combination with the
sampling rate and model downsampling rate to compute the time-stamps of transcribed characters.
Please take a look at the Example of decode() to better understand how to make
use of output_char_offsets. batch_decode() works the same way with batched
output.
output_word_offsets (bool, optional, defaults to False) —
Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate
and model downsampling rate to compute the time-stamps of transcribed words.
Please take a look at the Example of decode() to better understand how to make
use of output_word_offsets. batch_decode() works the same way with batched
output.
kwargs (additional keyword arguments, optional) —
Will be passed to the underlying model specific decode method.
Returns
List[str] or Wav2Vec2CTCTokenizerOutput
The list of decoded
sentences. Will be a Wav2Vec2CTCTokenizerOutput when
output_char_offsets == True or output_word_offsets == True.
Convert a list of lists of token ids into a list of strings by calling decode.
set_target_lang
(
target_lang: str
)
Set the target language of a nested multi-lingual dictionary.
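A minimal sketch, assuming the facebook/mms-1b-all tokenizer, whose nested vocabulary is keyed by ISO 639-3 language codes such as "eng" and "fra":
from transformers import Wav2Vec2CTCTokenizer
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/mms-1b-all", target_lang="eng")
tokenizer.set_target_lang("fra")  # switch the active vocabulary to French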
Wav2Vec2FeatureExtractor
class transformers.Wav2Vec2FeatureExtractor
(
feature_size = 1
sampling_rate = 16000
padding_value = 0.0
return_attention_mask = False
do_normalize = True
**kwargs
)
Parameters
feature_size (int, defaults to 1) —
The feature dimension of the extracted features.
sampling_rate (int, defaults to 16000) —
The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
padding_value (float, defaults to 0.0) —
The value that is used to fill the padding values.
do_normalize (bool, optional, defaults to True) —
Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly
improve the performance for some models, e.g.,
wav2vec2-lv60.
return_attention_mask (bool, optional, defaults to False) —
Whether or not call() should return attention_mask.
Wav2Vec2 models that have set config.feat_extract_norm == "group", such as
wav2vec2-base, have not been trained using
attention_mask. For such models, input_values should simply be padded with 0 and no attention_mask
should be passed.
For Wav2Vec2 models that have set config.feat_extract_norm == "layer", such as
wav2vec2-lv60, attention_mask should be
passed for batched inference.
Constructs a Wav2Vec2 feature extractor.
This feature extractor inherits from SequenceFeatureExtractor which contains
most of the main methods. Users should refer to this superclass for more information regarding those methods.
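A minimal sketch of extracting input features from a raw waveform (the zero array only stands in for real 16 kHz audio):
import numpy as np
from transformers import Wav2Vec2FeatureExtractor
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
speech = np.zeros(16_000, dtype=np.float32)  # stand-in for a 16 kHz mono waveform
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")
print(inputs.input_values.shape)  # (1, 16000)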
__call__
(
raw_speech: typing.Union[numpy.ndarray, typing.List[float], typing.List[numpy.ndarray], typing.List[typing.List[float]]]
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
max_length: typing.Optional[int] = None
truncation: bool = False
pad_to_multiple_of: typing.Optional[int] = None
return_attention_mask: typing.Optional[bool] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
sampling_rate: typing.Optional[int] = None
**kwargs
)
Parameters
raw_speech (np.ndarray, List[float], List[np.ndarray], List[List[float]]) —
The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float
values, a list of numpy arrays or a list of list of float values. Must be mono channel audio, not
stereo, i.e. single float per timestep.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Select a strategy to pad the returned sequences (according to the model’s padding side and padding
index) among:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
max_length (int, optional) —
Maximum length of the returned list and optionally padding length (see above).
truncation (bool) —
Activates truncation to cut input sequences longer than max_length to max_length.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
return_attention_mask (bool, optional) —
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific feature_extractor’s default.
What are attention masks?
Wav2Vec2 models that have set config.feat_extract_norm == "group", such as
wav2vec2-base, have not been trained using
attention_mask. For such models, input_values should simply be padded with 0 and no
attention_mask should be passed.
For Wav2Vec2 models that have set config.feat_extract_norm == "layer", such as
wav2vec2-lv60, attention_mask should
be passed for batched inference.
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
sampling_rate (int, optional) —
The sampling rate at which the raw_speech input was sampled. It is strongly recommended to pass
sampling_rate at the forward call to prevent silent errors.
padding_value (float, defaults to 0.0) —
The value that is used to fill the padding values.
Main method to featurize and prepare for the model one or several sequence(s).
Wav2Vec2Processor
class transformers.Wav2Vec2Processor
(
feature_extractor
tokenizer
)
Parameters
feature_extractor (Wav2Vec2FeatureExtractor) —
An instance of Wav2Vec2FeatureExtractor. The feature extractor is a required input.
tokenizer (PreTrainedTokenizer) —
An instance of PreTrainedTokenizer. The tokenizer is a required input.
Constructs a Wav2Vec2 processor which wraps a Wav2Vec2 feature extractor and a Wav2Vec2 CTC tokenizer into a single
processor.
Wav2Vec2Processor offers all the functionalities of Wav2Vec2FeatureExtractor and PreTrainedTokenizer.
See the docstring of call() and decode() for more information.
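A minimal sketch of the two common ways to obtain a processor:
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor, Wav2Vec2Processor
# either assemble it from its two components ...
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
# ... or load both parts at once from a single checkpoint
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")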
__call__
(
*args
**kwargs
)
When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor’s
call() and returns its output. If used in the context
as_target_processor() this method forwards all its arguments to PreTrainedTokenizer’s
call(). Please refer to the docstring of the above two methods for more information.
pad
(
*args
**kwargs
)
When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor’s
pad() and returns its output. If used in the context
as_target_processor() this method forwards all its arguments to PreTrainedTokenizer’s
pad(). Please refer to the docstring of the above two methods for more information.
from_pretrained
(
pretrained_model_name_or_path
**kwargs
)
save_pretrained
(
save_directory
push_to_hub: bool = False
**kwargs
)
Parameters
save_directory (str or os.PathLike) —
Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will
be created if it does not exist).
push_to_hub (bool, optional, defaults to False) —
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with repo_id (will default to the name of save_directory in your
namespace).
kwargs (Dict[str, Any], optional) —
Additional keyword arguments passed along to the push_to_hub() method.
Saves the attributes of this processor (feature extractor, tokenizer…) in the specified directory so that it
can be reloaded using the from_pretrained() method.
This method simply calls the feature extractor’s save_pretrained() and the tokenizer’s
save_pretrained(). Please refer to the docstrings of those
methods for more information.
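A minimal sketch of the save/reload round trip, where ./my_wav2vec2_processor is a hypothetical local directory:
from transformers import Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
processor.save_pretrained("./my_wav2vec2_processor")
reloaded = Wav2Vec2Processor.from_pretrained("./my_wav2vec2_processor")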
batch_decode
(
*args
**kwargs
)
This method forwards all its arguments to PreTrainedTokenizer’s batch_decode(). Please
refer to the docstring of this method for more information.
decode
(
*args
**kwargs
)
This method forwards all its arguments to PreTrainedTokenizer’s decode(). Please refer
to the docstring of this method for more information.
Wav2Vec2ProcessorWithLM
class transformers.Wav2Vec2ProcessorWithLM
(
feature_extractor: FeatureExtractionMixin
tokenizer: PreTrainedTokenizerBase
decoder: BeamSearchDecoderCTC
)
Parameters
feature_extractor (Wav2Vec2FeatureExtractor) —
An instance of Wav2Vec2FeatureExtractor. The feature extractor is a required input.
tokenizer (Wav2Vec2CTCTokenizer) —
An instance of Wav2Vec2CTCTokenizer. The tokenizer is a required input.
decoder (pyctcdecode.BeamSearchDecoderCTC) —
An instance of pyctcdecode.BeamSearchDecoderCTC. The decoder is a required input.
Constructs a Wav2Vec2 processor which wraps a Wav2Vec2 feature extractor, a Wav2Vec2 CTC tokenizer and a decoder
with language model support into a single processor for language model boosted speech recognition decoding.
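A minimal sketch of loading such a processor, assuming the pyctcdecode and kenlm packages are installed:
from transformers import Wav2Vec2ProcessorWithLM
# loads the feature extractor, the tokenizer and the pyctcdecode beam-search decoder from one repo
processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")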
__call__
(
*args
**kwargs
)
When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor’s
call() and returns its output. If used in the context
as_target_processor() this method forwards all its arguments to
Wav2Vec2CTCTokenizer’s call(). Please refer to the docstring of the above two
methods for more information.
pad
(
*args
**kwargs
)
When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor’s
pad() and returns its output. If used in the context
as_target_processor() this method forwards all its arguments to
Wav2Vec2CTCTokenizer’s pad(). Please refer to the docstring of the above two methods
for more information.
from_pretrained
(
pretrained_model_name_or_path
**kwargs
)
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
This can be either:
a string, the model id of a pretrained feature_extractor hosted inside a model repo on
huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or
namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
a path to a directory containing a feature extractor file saved using the
save_pretrained() method, e.g., ./my_model_directory/.
a path or url to a saved feature extractor JSON file, e.g.,
./my_model_directory/preprocessor_config.json.
**kwargs —
Additional keyword arguments passed along to both SequenceFeatureExtractor and
PreTrainedTokenizer
Instantiate a Wav2Vec2ProcessorWithLM from a pretrained Wav2Vec2 processor.
This class method is simply calling Wav2Vec2FeatureExtractor’s
from_pretrained(), Wav2Vec2CTCTokenizer’s
from_pretrained(), and
pyctcdecode.BeamSearchDecoderCTC.load_from_hf_hub.
Please refer to the docstrings of the methods above for more information.
save_pretrained
(
save_directory
)
batch_decode
(
logits: ndarray
pool: typing.Optional[multiprocessing.pool.Pool] = None
num_processes: typing.Optional[int] = None
beam_width: typing.Optional[int] = None
beam_prune_logp: typing.Optional[float] = None
token_min_logp: typing.Optional[float] = None
hotwords: typing.Optional[typing.Iterable[str]] = None
hotword_weight: typing.Optional[float] = None
alpha: typing.Optional[float] = None
beta: typing.Optional[float] = None
unk_score_offset: typing.Optional[float] = None
lm_score_boundary: typing.Optional[bool] = None
output_word_offsets: bool = False
n_best: int = 1
)
Parameters
logits (np.ndarray) —
The logits output vector of the model representing the log probabilities for each token.
pool (multiprocessing.Pool, optional) —
An optional user-managed pool. If not set, one will be automatically created and closed. The pool
should be instantiated after Wav2Vec2ProcessorWithLM. Otherwise, the LM won’t be available to the
pool’s sub-processes.
Currently, only pools created with a ‘fork’ context can be used. If a ‘spawn’ pool is passed, it will
be ignored and sequential decoding will be used instead.
num_processes (int, optional) —
If pool is not set, number of processes on which the function should be parallelized over. Defaults
to the number of available CPUs.
beam_width (int, optional) —
Maximum number of beams at each step in decoding. Defaults to pyctcdecode’s DEFAULT_BEAM_WIDTH.
beam_prune_logp (float, optional) —
Beams that are much worse than the best beam will be pruned. Defaults to pyctcdecode’s DEFAULT_PRUNE_LOGP.
token_min_logp (float, optional) —
Tokens below this logp are skipped unless they are the argmax of the frame. Defaults to pyctcdecode’s
DEFAULT_MIN_TOKEN_LOGP.
hotwords (List[str], optional) —
List of words with extra importance; can be OOV for the LM.
hotword_weight (float, optional) —
Weight factor for hotword importance. Defaults to pyctcdecode’s DEFAULT_HOTWORD_WEIGHT.
alpha (float, optional) —
Weight for the language model during shallow fusion.
beta (float, optional) —
Weight for the length score adjustment during scoring.
unk_score_offset (float, optional) —
Amount of log score offset for unknown tokens.
lm_score_boundary (bool, optional) —
Whether to have kenlm respect boundaries when scoring.
output_word_offsets (bool, optional, defaults to False) —
Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate
and model downsampling rate to compute the time-stamps of transcribed words.
n_best (int, optional, defaults to 1) —
Number of best hypotheses to return. If n_best is greater than 1, the returned text will be a list
of lists of strings, logit_score will be a list of lists of floats, and lm_score will be a list of
lists of floats, where the length of the outer list will correspond to the batch size and the length of
the inner list will correspond to the number of returned hypotheses. The value should be >= 1.
Please take a look at the Example of decode() to better understand how to
make use of output_word_offsets. batch_decode() works the same way with
batched output.
Batch decode output logits to audio transcription with language model support.
This function makes use of Python’s multiprocessing. Currently, multiprocessing is available only on Unix
systems (see this issue).
If you are decoding multiple batches, consider creating a Pool and passing it to batch_decode. Otherwise,
batch_decode will be very slow since it will create a fresh Pool for each call. See usage example below.
Example:
See Decoding multiple audios.
decode
(
logits: ndarray
beam_width: typing.Optional[int] = None
beam_prune_logp: typing.Optional[float] = None
token_min_logp: typing.Optional[float] = None
hotwords: typing.Optional[typing.Iterable[str]] = None
hotword_weight: typing.Optional[float] = None
alpha: typing.Optional[float] = None
beta: typing.Optional[float] = None
unk_score_offset: typing.Optional[float] = None
lm_score_boundary: typing.Optional[bool] = None
output_word_offsets: bool = False
n_best: int = 1
)
Parameters
logits (np.ndarray) —
The logits output vector of the model representing the log probabilities for each token.
beam_width (int, optional) —
Maximum number of beams at each step in decoding. Defaults to pyctcdecode’s DEFAULT_BEAM_WIDTH.
beam_prune_logp (float, optional) —
A threshold to prune beams with log-probs less than best_beam_logp + beam_prune_logp. The value should
be <= 0. Defaults to pyctcdecode’s DEFAULT_PRUNE_LOGP.
token_min_logp (float, optional) —
Tokens with log-probs below token_min_logp are skipped unless they have the maximum log-prob for an
utterance. Defaults to pyctcdecode’s DEFAULT_MIN_TOKEN_LOGP.
hotwords (List[str], optional) —
List of words with extra importance which can be missing from the LM’s vocabulary, e.g. ["huggingface"].
hotword_weight (float, optional) —
Weight multiplier that boosts hotword scores. Defaults to pyctcdecode’s DEFAULT_HOTWORD_WEIGHT.
alpha (float, optional) —
Weight for the language model during shallow fusion.
beta (float, optional) —
Weight for the length score adjustment during scoring.
unk_score_offset (float, optional) —
Amount of log score offset for unknown tokens.
lm_score_boundary (bool, optional) —
Whether to have kenlm respect boundaries when scoring.
output_word_offsets (bool, optional, defaults to False) —
Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate
and model downsampling rate to compute the time-stamps of transcribed words.
n_best (int, optional, defaults to 1) —
Number of best hypotheses to return. If n_best is greater than 1, the returned text will be a list
of strings, logit_score will be a list of floats, and lm_score will be a list of floats, where the
length of these lists will correspond to the number of returned hypotheses. The value should be >= 1.
Please take a look at the example below to better understand how to make use of output_word_offsets.
Decode output logits to audio transcription with language model support.
Example:
# Let's see how to retrieve time steps for a model
from transformers import AutoTokenizer, AutoProcessor, AutoModelForCTC
from datasets import load_dataset
import datasets
import torch
# import model, feature extractor, tokenizer
model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
processor = AutoProcessor.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
# load first sample of English common_voice
dataset = load_dataset("common_voice", "en", split="train", streaming=True)
dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000))
dataset_iter = iter(dataset)
sample = next(dataset_iter)
# forward sample through model to get greedily predicted transcription ids
input_values = processor(sample["audio"]["array"], return_tensors="pt").input_values
with torch.no_grad():
... logits = model(input_values).logits[0].cpu().numpy()
# retrieve word stamps (analogous commands for `output_char_offsets`)
outputs = processor.decode(logits, output_word_offsets=True)
# compute `time_offset` in seconds as product of downsampling ratio and sampling_rate
time_offset = model.config.inputs_to_logits_ratio / processor.feature_extractor.sampling_rate
word_offsets = [
... {
... "word": d["word"],
... "start_time": round(d["start_offset"] * time_offset, 2),
... "end_time": round(d["end_offset"] * time_offset, 2),
... }
... for d in outputs.word_offsets
... ]
# compare word offsets with audio `common_voice_en_100038.mp3` online on the dataset viewer:
# https://huggingface.co/datasets/common_voice/viewer/en/train
word_offsets[:4]
[{'word': 'WHY', 'start_time': 1.42, 'end_time': 1.54}, {'word': 'DOES', 'start_time': 1.66, 'end_time': 1.9}, {'word': 'MILISANDRA', 'start_time': 2.26, 'end_time': 2.9}, {'word': 'LOOK', 'start_time': 3.0, 'end_time': 3.16}]
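As a rough sketch of combining these arguments (reusing processor and logits from the example above; the hotword "huggingface" and the weight of 10.0 are purely illustrative), passing n_best=2 turns text, logit_score and lm_score into lists ordered from best to worst hypothesis:
# request the two best hypotheses and boost a domain-specific word that the LM may not know
outputs = processor.decode(logits, n_best=2, hotwords=["huggingface"], hotword_weight=10.0)
outputs.text       # list of 2 strings, best hypothesis first
outputs.lm_score   # list of 2 floats, one per hypothesis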
Decoding multiple audios
If you are planning to decode multiple batches of audios, you should consider using batch_decode() and passing an instantiated multiprocessing.Pool.
Otherwise, batch_decode() performance will be slower than calling decode() for each audio individually, as it internally instantiates a new Pool for every call. See the example below:
# Let's see how to use a user-managed pool for batch decoding multiple audios
from multiprocessing import get_context
from transformers import AutoTokenizer, AutoProcessor, AutoModelForCTC
from datasets import load_dataset
import datasets
import torch
# import model, feature extractor, tokenizer
model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm").to("cuda")
processor = AutoProcessor.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
# load example dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000))
def map_to_array(batch):
... batch["speech"] = batch["audio"]["array"]
... return batch
# prepare speech data for batch inference
dataset = dataset.map(map_to_array, remove_columns=["audio"])
def map_to_pred(batch, pool):
... inputs = processor(batch["speech"], sampling_rate=16_000, padding=True, return_tensors="pt")
... inputs = {k: v.to("cuda") for k, v in inputs.items()}
... with torch.no_grad():
... logits = model(**inputs).logits
... transcription = processor.batch_decode(logits.cpu().numpy(), pool).text
... batch["transcription"] = transcription
... return batch
# note: pool should be instantiated *after* `Wav2Vec2ProcessorWithLM`.
# otherwise, the LM won't be available to the pool's sub-processes
# select number of processes and batch_size based on number of CPU cores available and on dataset size
with get_context("fork").Pool(processes=2) as pool:
... result = dataset.map(
... map_to_pred, batched=True, batch_size=2, fn_kwargs={"pool": pool}, remove_columns=["speech"]
... )
result["transcription"][:2]
['MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL', "NOR IS MISTER COULTER'S MANNER LESS INTERESTING THAN HIS MATTER"]
Wav2Vec2 specific outputs
class transformers.models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput
(
text: typing.Union[typing.List[typing.List[str]], typing.List[str], str]
logit_score: typing.Union[typing.List[typing.List[float]], typing.List[float], float] = None
lm_score: typing.Union[typing.List[typing.List[float]], typing.List[float], float] = None
word_offsets: typing.Union[typing.List[typing.List[typing.List[typing.Dict[str, typing.Union[int, str]]]]], typing.List[typing.List[typing.Dict[str, typing.Union[int, str]]]], typing.List[typing.Dict[str, typing.Union[int, str]]]] = None
)
Parameters
text (list of str or str) —
Decoded logits in text form. Usually the speech transcription.
logit_score (list of float or float) —
Total logit score of the beams associated with produced text.
lm_score (list of float or float) —
Fused lm_score of the beams associated with produced text.
word_offsets (list of List[Dict[str, Union[int, str]]] or List[Dict[str, Union[int, str]]]) —
Offsets of the decoded words. In combination with sampling rate and model downsampling rate word offsets
can be used to compute time stamps for each word.
Output type of Wav2Vec2DecoderWithLM, with transcription.
class transformers.modeling_outputs.Wav2Vec2BaseModelOutput
(
last_hidden_state: FloatTensor = None
extract_features: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the model.
extract_features (torch.FloatTensor of shape (batch_size, sequence_length, conv_dim[-1])) —
Sequence of extracted feature vectors of the last convolutional layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Base class for models that have been trained with the Wav2Vec2 loss objective.
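A minimal sketch of how these fields are typically inspected, assuming model and inputs come from the Wav2Vec2Model example further below:
with torch.no_grad():
...     outputs = model(**inputs, output_hidden_states=True)
outputs.last_hidden_state.shape  # (batch_size, sequence_length, hidden_size)
outputs.extract_features.shape   # (batch_size, sequence_length, conv_dim[-1])
len(outputs.hidden_states)       # one entry per encoder layer plus the initial embedding output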
class transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput
(
loss: typing.Optional[torch.FloatTensor] = None
projected_states: FloatTensor = None
projected_quantized_states: FloatTensor = None
codevector_perplexity: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
contrastive_loss: typing.Optional[torch.FloatTensor] = None
diversity_loss: typing.Optional[torch.FloatTensor] = None
)
Parameters
loss (optional, returned when sample_negative_indices are passed, torch.FloatTensor of shape (1,)) —
Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official
paper.
projected_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) —
Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked
projected quantized states.
projected_quantized_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) —
Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive
target vectors for contrastive loss.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
contrastive_loss (optional, returned when sample_negative_indices are passed, torch.FloatTensor of shape (1,)) —
The contrastive loss (L_m) as stated in the official paper.
diversity_loss (optional, returned when sample_negative_indices are passed, torch.FloatTensor of shape (1,)) —
The diversity loss (L_d) as stated in the official paper.
Output type of Wav2Vec2ForPreTraining, with potential hidden states and attentions.
class transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput
(
last_hidden_state: Array = None
extract_features: Array = None
hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None
attentions: typing.Optional[typing.Tuple[jax.Array]] = None
)
Parameters
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the model.
extract_features (jnp.ndarray of shape (batch_size, sequence_length, last_conv_dim)) —
Sequence of extracted feature vectors of the last convolutional layer of the model with last_conv_dim
being the dimension of the last convolutional layer.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of FlaxWav2Vec2BaseModelOutput, with potential hidden states and attentions.
replace
(
**updates
)
Returns a new object replacing the specified fields with new values.
class transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput
(
projected_states: Array = None
projected_quantized_states: Array = None
codevector_perplexity: Array = None
hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None
attentions: typing.Optional[typing.Tuple[jax.Array]] = None
)
Parameters
loss (optional, returned when model is in train mode, jnp.ndarray of shape (1,)) —
Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official
paper.
projected_states (jnp.ndarray of shape (batch_size, sequence_length, config.proj_codevector_dim)) —
Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked
projected quantized states.
projected_quantized_states (jnp.ndarray of shape (batch_size, sequence_length, config.proj_codevector_dim)) —
Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive
target vectors for contrastive loss.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of FlaxWav2Vec2ForPreTrainingOutput, with potential hidden states and attentions.
replace
(
**updates
)
Returns a new object replacing the specified fields with new values.
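Because the Flax output classes are immutable dataclasses, replace returns a modified copy rather than mutating in place. A small illustrative sketch, assuming outputs is an existing FlaxWav2Vec2ForPreTrainingOutput:
# drop the optional fields while keeping the projected states untouched
slim_outputs = outputs.replace(hidden_states=None, attentions=None)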
Wav2Vec2Model
class transformers.Wav2Vec2Model
(
config: Wav2Vec2Config
)
Parameters
config (Wav2Vec2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Wav2Vec2 Model transformer outputting raw hidden-states without any specific head on top.
Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
mask_time_indices: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
wav2vec2-base, attention_mask should not be
passed to avoid degraded performance when doing batched inference. For such models input_values should
simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly
different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Wav2Vec2BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Wav2Vec2Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
extract_features (torch.FloatTensor of shape (batch_size, sequence_length, conv_dim[-1])) — Sequence of extracted feature vectors of the last convolutional layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Wav2Vec2Model forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, Wav2Vec2Model
import torch
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 292, 768]
Wav2Vec2ForCTC
class transformers.Wav2Vec2ForCTC
(
config
target_lang: typing.Optional[str] = None
)
Parameters
config (Wav2Vec2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Wav2Vec2 Model with a language modeling head on top for Connectionist Temporal Classification (CTC).
Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
wav2vec2-base, attention_mask should not be
passed to avoid degraded performance when doing batched inference. For such models input_values should
simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly
different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, target_length), optional) —
Labels for connectionist temporal classification. Note that target_length has to be smaller than or equal to
the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1].
All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Wav2Vec2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Wav2Vec2ForCTC forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe speech
transcription = processor.batch_decode(predicted_ids)
transcription[0]
'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'
inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids
# compute loss
loss = model(**inputs).loss
round(loss.item(), 2)
53.48
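When batching several target transcriptions, a common pattern (sketched below with the processor from the example above; the target strings are arbitrary) is to pad the label sequences and replace the padding token id with -100 so that padded positions are ignored by the CTC loss:
# tokenize and pad a batch of target transcriptions
labels = processor(text=["HELLO WORLD", "HI"], padding=True, return_tensors="pt").input_ids
# padded positions must not contribute to the loss, so set them to -100
labels = labels.masked_fill(labels == processor.tokenizer.pad_token_id, -100)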
load_adapter
(
target_lang: str
force_load = True
**kwargs
)
Parameters
target_lang (str) —
Has to be a language id of an existing adapter weight. Adapter weights are stored in the format
adapter.<lang>.safetensors or adapter.<lang>.bin.
force_load (bool, defaults to True) —
Whether the weights shall be loaded even if target_lang matches self.target_lang.
cache_dir (Union[str, os.PathLike], optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used.
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a
file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (i.e., do not try to download the model).
use_auth_token (str or bool, optional) —
The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use
the token generated when running huggingface-cli login (stored in ~/.huggingface).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision can be any
identifier allowed by git.
To test a pull request you made on the Hub, you can pass revision="refs/pr/<pr_number>".
mirror (str, optional) —
Mirror source to accelerate downloads in China. If you are from China and have an accessibility
problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety.
Please refer to the mirror site for more information.
Load a language adapter model from a pre-trained adapter model.
Activate the special “offline-mode” to
use this method in a firewalled environment.
Examples:
from transformers import Wav2Vec2ForCTC, AutoProcessor
ckpt = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(ckpt)
model = Wav2Vec2ForCTC.from_pretrained(ckpt, target_lang="eng")
# set specific language
processor.tokenizer.set_target_lang("spa")
model.load_adapter("spa")
Wav2Vec2ForSequenceClassification
class transformers.Wav2Vec2ForSequenceClassification
(
config
)
Parameters
config (Wav2Vec2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Wav2Vec2 Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like
SUPERB Keyword Spotting.
Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
wav2vec2-base, attention_mask should not be
passed to avoid degraded performance when doing batched inference. For such models input_values should
simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly
different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Wav2Vec2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Wav2Vec2ForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("superb/wav2vec2-base-superb-ks")
model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-base-superb-ks")
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.argmax(logits, dim=-1).item()
predicted_label = model.config.id2label[predicted_class_ids]
predicted_label
'_unknown_'
# compute loss - target_label is e.g. "down"
target_label = model.config.id2label[0]
inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
loss = model(**inputs).loss
round(loss.item(), 2)
6.54
Wav2Vec2ForAudioFrameClassification
class transformers.Wav2Vec2ForAudioFrameClassification
(
config
)
Parameters
config (Wav2Vec2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Wav2Vec2 Model with a frame classification head on top for tasks like Speaker Diarization.
Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
wav2vec2-base, attention_mask should not be
passed to avoid degraded performance when doing batched inference. For such models input_values should
simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly
different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Wav2Vec2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Wav2Vec2ForAudioFrameClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, Wav2Vec2ForAudioFrameClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("anton-l/wav2vec2-base-superb-sd")
model = Wav2Vec2ForAudioFrameClassification.from_pretrained("anton-l/wav2vec2-base-superb-sd")
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt", sampling_rate=sampling_rate)
with torch.no_grad():
... logits = model(**inputs).logits
probabilities = torch.sigmoid(logits[0])
# labels is a one-hot array of shape (num_frames, num_speakers)
labels = (probabilities > 0.5).long()
labels[0].tolist()
[0, 0]
Wav2Vec2ForXVector
class transformers.Wav2Vec2ForXVector
(
config
)
Parameters
config (Wav2Vec2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Wav2Vec2 Model with an XVector feature extraction head on top for tasks like Speaker Verification.
Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.XVectorOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
wav2vec2-base, attention_mask should not be
passed to avoid degraded performance when doing batched inference. For such models input_values should
simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly
different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.XVectorOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.XVectorOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Wav2Vec2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, config.xvector_output_dim)) — Classification hidden states before AMSoftmax.
embeddings (torch.FloatTensor of shape (batch_size, config.xvector_output_dim)) — Utterance embeddings used for vector similarity-based retrieval.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Wav2Vec2ForXVector forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, Wav2Vec2ForXVector
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("anton-l/wav2vec2-base-superb-sv")
model = Wav2Vec2ForXVector.from_pretrained("anton-l/wav2vec2-base-superb-sv")
# audio file is decoded on the fly
inputs = feature_extractor(
... [d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True
... )
with torch.no_grad():
... embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()
# the resulting embeddings can be used for cosine similarity-based retrieval
cosine_sim = torch.nn.CosineSimilarity(dim=-1)
similarity = cosine_sim(embeddings[0], embeddings[1])
threshold = 0.7 # the optimal threshold is dataset-dependent
if similarity < threshold:
... print("Speakers are not the same!")
round(similarity.item(), 2)
0.98
Wav2Vec2ForPreTraining
class transformers.Wav2Vec2ForPreTraining
(
config: Wav2Vec2Config
)
Parameters
config (Wav2Vec2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Wav2Vec2 Model with a quantizer and VQ head on top.
Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
mask_time_indices: typing.Optional[torch.BoolTensor] = None
sampled_negative_indices: typing.Optional[torch.BoolTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
wav2vec2-base, attention_mask should not be
passed to avoid degraded performance when doing batched inference. For such models input_values should
simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly
different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
mask_time_indices (torch.BoolTensor of shape (batch_size, sequence_length), optional) —
Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict
masked extracted features in config.proj_codevector_dim space.
sampled_negative_indices (torch.BoolTensor of shape (batch_size, sequence_length, num_negatives), optional) —
Indices indicating which quantized target vectors are used as negative sampled vectors in contrastive loss.
Required input for pre-training.
Returns
transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Wav2Vec2Config) and inputs.
loss (optional, returned when sample_negative_indices are passed, torch.FloatTensor of shape (1,)) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official
paper.
projected_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) — Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked
projected quantized states.
projected_quantized_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) — Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive
target vectors for contrastive loss.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
contrastive_loss (optional, returned when sample_negative_indices are passed, torch.FloatTensor of shape (1,)) — The contrastive loss (L_m) as stated in the official paper.
diversity_loss (optional, returned when sample_negative_indices are passed, torch.FloatTensor of shape (1,)) — The diversity loss (L_d) as stated in the official paper.
The Wav2Vec2ForPreTraining forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForPreTraining
from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices, _sample_negative_indices
from datasets import load_dataset
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-base")
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1
# compute masked indices
batch_size, raw_sequence_length = input_values.shape
sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item()
mask_time_indices = _compute_mask_indices(
... shape=(batch_size, sequence_length), mask_prob=0.2, mask_length=2
... )
sampled_negative_indices = _sample_negative_indices(
... features_shape=(batch_size, sequence_length),
... num_negatives=model.config.num_negatives,
... mask_time_indices=mask_time_indices,
... )
mask_time_indices = torch.tensor(data=mask_time_indices, device=input_values.device, dtype=torch.long)
sampled_negative_indices = torch.tensor(
... data=sampled_negative_indices, device=input_values.device, dtype=torch.long
... )
with torch.no_grad():
... outputs = model(input_values, mask_time_indices=mask_time_indices)
# compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)
cosine_sim = torch.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states, dim=-1)
# show that cosine similarity is much higher than random
cosine_sim[mask_time_indices.to(torch.bool)].mean() > 0.5
tensor(True)
# for contrastive loss training model should be put into train mode
model = model.train()
loss = model(
... input_values, mask_time_indices=mask_time_indices, sampled_negative_indices=sampled_negative_indices
... ).loss
TFWav2Vec2Model
class transformers.TFWav2Vec2Model
(
*args
**kwargs
)
Parameters
config (Wav2Vec2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare TFWav2Vec2 Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_values only and nothing else: model(input_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_values, attention_mask]) or model([input_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_values": input_values, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
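For instance, the following calls are equivalent (a sketch assuming model is a TFWav2Vec2Model and input_values / attention_mask are tf.Tensors of shape (batch_size, sequence_length)):
# keyword arguments, as with PyTorch models
outputs = model(input_values=input_values, attention_mask=attention_mask)
# a list with the tensors in the order given in the docstring
outputs = model([input_values, attention_mask])
# a dictionary keyed by input names
outputs = model({"input_values": input_values, "attention_mask": attention_mask})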
call
(
input_values: tf.Tensor
attention_mask: tf.Tensor | None = None
token_type_ids: tf.Tensor | None = None
position_ids: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
Parameters
input_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_values you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert input_values indices into associated vectors
than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (Wav2Vec2Config) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFWav2Vec2Model forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, TFWav2Vec2Model
from datasets import load_dataset
import soundfile as sf
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
model = TFWav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")
def map_to_array(batch):
... speech, _ = sf.read(batch["file"])
... batch["speech"] = speech
... return batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
input_values = processor(ds["speech"][0], return_tensors="tf").input_values # Batch size 1
hidden_states = model(input_values).last_hidden_state
TFWav2Vec2ForSequenceClassification
class transformers.TFWav2Vec2ForSequenceClassification
( *args, **kwargs )
call
(
input_values: tf.Tensor
attention_mask: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: tf.Tensor | None = None
training: bool = False
)
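A hedged usage sketch for audio classification with this head. The checkpoint name superb/wav2vec2-base-superb-ks and the use of from_pt=True (to convert PyTorch weights when no TensorFlow weights are published) are assumptions, not part of the original documentation:
import tensorflow as tf
import soundfile as sf
from transformers import AutoFeatureExtractor, TFWav2Vec2ForSequenceClassification
from datasets import load_dataset
# hypothetical keyword-spotting checkpoint; from_pt=True converts PyTorch weights if needed
feature_extractor = AutoFeatureExtractor.from_pretrained("superb/wav2vec2-base-superb-ks")
model = TFWav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-base-superb-ks", from_pt=True)
def map_to_array(batch):
    batch["speech"], _ = sf.read(batch["file"])
    return batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
input_values = feature_extractor(ds["speech"][0], sampling_rate=16_000, return_tensors="tf").input_values
logits = model(input_values).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted_class_id])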
TFWav2Vec2ForCTC
class transformers.TFWav2Vec2ForCTC
( *args, **kwargs )
Parameters
config (Wav2Vec2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
TFWav2Vec2 Model with a language modeling head on top for Connectionist Temporal Classification (CTC).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_values only and nothing else: model(input_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_values, attention_mask]) or model([input_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_values": input_values, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_values: tf.Tensor
attention_mask: tf.Tensor | None = None
token_type_ids: tf.Tensor | None = None
position_ids: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
labels: tf.Tensor | None = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFCausalLMOutput or tuple(tf.Tensor)
Parameters
input_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], each example of shape (batch_size, sequence_length)) —
Float values of the input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor. See Wav2Vec2Processor.call() for details.
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_values you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert input_values indices into associated vectors
than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the connectionist temporal classification (CTC) loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_values docstring). Tokens with indices set to -100 are ignored (masked);
the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFCausalLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFCausalLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (Wav2Vec2Config) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFWav2Vec2ForCTC forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import tensorflow as tf
from transformers import AutoProcessor, TFWav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
def map_to_array(batch):
... speech, _ = sf.read(batch["file"])
... batch["speech"] = speech
... return batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
input_values = processor(ds["speech"][0], return_tensors="tf").input_values # Batch size 1
logits = model(input_values).logits
predicted_ids = tf.argmax(logits, axis=-1)
transcription = processor.decode(predicted_ids[0])
# compute loss
target_transcription = "A MAN SAID TO THE UNIVERSE SIR I EXIST"
# Pass the target transcription as `text` to encode the labels
labels = processor(text=target_transcription, return_tensors="tf").input_ids
loss = model(input_values, labels=labels).loss
FlaxWav2Vec2Model
class transformers.FlaxWav2Vec2Model
(
config: Wav2Vec2Config
input_shape: typing.Tuple = (1, 1024)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (Wav2Vec2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16(); a short sketch follows after this class description.
The bare Wav2Vec2 Model transformer outputting raw hidden-states without any specific head on top.
Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
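As noted in the dtype parameter above, dtype only controls the computation precision; here is a minimal sketch of combining it with to_bf16() to also cast the parameters (the checkpoint is the one used in the example further below):
import jax.numpy as jnp
from transformers import FlaxWav2Vec2Model
# run the computation in bfloat16; parameters remain float32 unless converted explicitly
model = FlaxWav2Vec2Model.from_pretrained("facebook/wav2vec2-large-lv60", dtype=jnp.bfloat16)
# optionally cast the parameters as well (to_bf16 returns a new parameter tree)
model.params = model.to_bf16(model.params)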
__call__
(
input_values
attention_mask = None
mask_time_indices = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
freeze_feature_encoder: bool = False
return_dict: typing.Optional[bool] = None
)
→
transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput or tuple(jnp.ndarray)
Parameters
input_values (jnp.ndarray of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type jnp.ndarray. See Wav2Vec2Processor.call() for details.
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
Warning: attention_mask should only be passed if the corresponding processor has
config.return_attention_mask == True. For all models whose processor has
config.return_attention_mask == False, such as
wav2vec2-base, attention_mask should not be
passed to avoid degraded performance when doing batched inference. For such models, input_values should
simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly
different results depending on whether input_values is padded or not (see the sketch after the example below).
mask_time_indices (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict
masked extracted features in config.proj_codevector_dim space.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput or tuple(jnp.ndarray)
A transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Wav2Vec2Config) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
extract_features (jnp.ndarray of shape (batch_size, sequence_length, last_conv_dim)) — Sequence of extracted feature vectors of the last convolutional layer of the model with last_conv_dim
being the dimension of the last convolutional layer.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxWav2Vec2PreTrainedModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, FlaxWav2Vec2Model
from datasets import load_dataset
import soundfile as sf
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-large-lv60")
model = FlaxWav2Vec2Model.from_pretrained("facebook/wav2vec2-large-lv60")
def map_to_array(batch):
... speech, _ = sf.read(batch["file"])
... batch["speech"] = speech
... return batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
input_values = processor(
... ds["speech"][0], sampling_rate=16_000, return_tensors="np"
... ).input_values # Batch size 1
hidden_states = model(input_values).last_hidden_state
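Following the attention_mask warning above, a hedged sketch of batched inference that only passes attention_mask when the processor actually returns one (padding to the longest example is an assumption; .get yields None otherwise, which matches the call's default):
# pad a small batch of examples to the same length
inputs = processor(ds["speech"][:2], sampling_rate=16_000, return_tensors="np", padding=True)
hidden_states = model(
    inputs["input_values"],
    attention_mask=inputs.get("attention_mask"),  # None when the processor is configured not to return it
).last_hidden_state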
FlaxWav2Vec2ForCTC
class transformers.FlaxWav2Vec2ForCTC
(
config: Wav2Vec2Config
input_shape: typing.Tuple = (1, 1024)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (Wav2Vec2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Wav2Vec2 Model with a language modeling head on top for Connectionist Temporal Classification (CTC).
Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_values
attention_mask = None
mask_time_indices = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
freeze_feature_encoder: bool = False
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
Parameters
input_values (jnp.ndarray of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type jnp.ndarray. See Wav2Vec2Processor.call() for details.
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
Warning: attention_mask should only be passed if the corresponding processor has
config.return_attention_mask == True. For all models whose processor has
config.return_attention_mask == False, such as
wav2vec2-base, attention_mask should not be
passed to avoid degraded performance when doing batched inference. For such models, input_values should
simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly
different results depending on whether input_values is padded or not.
mask_time_indices (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict
masked extracted features in config.proj_codevector_dim space.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Wav2Vec2Config) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxWav2Vec2PreTrainedModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import jax.numpy as jnp
from transformers import AutoProcessor, FlaxWav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-large-960h-lv60")
model = FlaxWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60")
def map_to_array(batch):
... speech, _ = sf.read(batch["file"])
... batch["speech"] = speech
... return batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
input_values = processor(
... ds["speech"][0], sampling_rate=16_000, return_tensors="np"
... ).input_values # Batch size 1
logits = model(input_values).logits
predicted_ids = jnp.argmax(logits, axis=-1)
transcription = processor.decode(predicted_ids[0])
# should give: "A MAN SAID TO THE UNIVERSE SIR I EXIST"
FlaxWav2Vec2ForPreTraining
class transformers.FlaxWav2Vec2ForPreTraining
(
config: Wav2Vec2Config
input_shape: typing.Tuple = (1, 1024)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (Wav2Vec2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Wav2Vec2 Model with a quantizer and VQ head on top.
Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_values
attention_mask = None
mask_time_indices = None
gumbel_temperature: int = 1
params: dict = None
dropout_rng: PRNGKey = None
gumbel_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
freeze_feature_encoder: bool = False
return_dict: typing.Optional[bool] = None
)
→
transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput or tuple(jnp.ndarray)
Parameters
input_values (jnp.ndarray of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type jnp.ndarray. See Wav2Vec2Processor.call() for details.
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
Warning: attention_mask should only be passed if the corresponding processor has
config.return_attention_mask == True. For all models whose processor has
config.return_attention_mask == False, such as
wav2vec2-base, attention_mask should not be
passed to avoid degraded performance when doing batched inference. For such models, input_values should
simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly
different results depending on whether input_values is padded or not.
mask_time_indices (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict
masked extracted features in config.proj_codevector_dim space.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput or tuple(jnp.ndarray)
A transformers.models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Wav2Vec2Config) and inputs.
loss (optional, returned when the model is in train mode, jnp.ndarray of shape (1,)) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official
paper.
projected_states (jnp.ndarray of shape (batch_size, sequence_length, config.proj_codevector_dim)) — Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked
projected quantized states.
projected_quantized_states (jnp.ndarray of shape (batch_size, sequence_length, config.proj_codevector_dim)) — Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive
target vectors for contrastive loss.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxWav2Vec2ForPreTraining forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import optax
import numpy as np
import jax.numpy as jnp
from transformers import AutoFeatureExtractor, FlaxWav2Vec2ForPreTraining
from transformers.models.wav2vec2.modeling_flax_wav2vec2 import _compute_mask_indices
from datasets import load_dataset
import soundfile as sf
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-large-lv60")
model = FlaxWav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-large-lv60")
def map_to_array(batch):
... speech, _ = sf.read(batch["file"])
... batch["speech"] = speech
... return batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
input_values = feature_extractor(ds["speech"][0], return_tensors="np").input_values # Batch size 1
# compute masked indices
batch_size, raw_sequence_length = input_values.shape
sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length)
mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.2, mask_length=2)
outputs = model(input_values, mask_time_indices=mask_time_indices)
# compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)
cosine_sim = optax.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states)
# show that cosine similarity is much higher than random
assert np.asarray(cosine_sim)[mask_time_indices].mean() > 0.5
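To run the same forward pass in train mode, PRNG keys for dropout and Gumbel-softmax sampling must be supplied; a minimal sketch using the parameter names from the signature above (the loss itself would still need to be computed from the projected states, e.g. in a training script):
import jax
# train=True enables dropout and Gumbel quantization, so both RNG keys are required
dropout_rng, gumbel_rng = jax.random.split(jax.random.PRNGKey(0))
outputs = model(
    input_values,
    mask_time_indices=mask_time_indices,
    train=True,
    dropout_rng=dropout_rng,
    gumbel_rng=gumbel_rng,
)
# outputs.projected_states / outputs.projected_quantized_states feed the contrastive loss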
BigBirdPegasus
Overview
The BigBird model was proposed in Big Bird: Transformers for Longer Sequences by
Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon,
Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird is a sparse-attention-based
transformer which extends Transformer-based models, such as BERT, to much longer sequences. In addition to sparse
attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it
has been shown that applying sparse, global, and random attention approximates full attention, while being
computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context,
BigBird has shown improved performance on various long document NLP tasks, such as question answering and
summarization, compared to BERT or RoBERTa.
The abstract from the paper is the following:
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP.
Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence
length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that
reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and
is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our
theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire
sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to
8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context,
BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also
propose novel applications to genomics data.
Tips:
For a detailed explanation of how BigBird’s attention works, see this blog post.
BigBird comes with 2 implementations: original_full & block_sparse. For sequence lengths < 1024, using
original_full is advised as there is no benefit in using block_sparse attention (see the sketch below).
The code currently uses a window size of 3 blocks and 2 global blocks.
Sequence length must be divisible by block size.
Current implementation supports only ITC.
Current implementation doesn’t support num_random_blocks = 0.
BigBirdPegasus uses the PegasusTokenizer.
BigBird is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than
the left.
The original code can be found here.
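A minimal sketch of switching between the two attention implementations mentioned in the tips, by overriding the attention_type config value at load time (passing config attributes as from_pretrained keyword arguments is a standard Transformers mechanism; the exact choice of checkpoint is up to you):
from transformers import BigBirdPegasusForConditionalGeneration
# default: block sparse attention with the configured block_size / num_random_blocks
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-arxiv")
# for inputs shorter than 1024 tokens, full attention can be used instead
model = BigBirdPegasusForConditionalGeneration.from_pretrained(
    "google/bigbird-pegasus-large-arxiv", attention_type="original_full"
)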
Documentation resources
Text classification task guide
Question answering task guide
Causal language modeling task guide
Translation task guide
Summarization task guide
BigBirdPegasusConfig
class transformers.BigBirdPegasusConfig
(
vocab_size = 96103
max_position_embeddings = 4096
encoder_layers = 16
encoder_ffn_dim = 4096
encoder_attention_heads = 16
decoder_layers = 16
decoder_ffn_dim = 4096
decoder_attention_heads = 16
encoder_layerdrop = 0.0
decoder_layerdrop = 0.0
use_cache = True
is_encoder_decoder = True
activation_function = 'gelu_new'
d_model = 1024
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
decoder_start_token_id = 2
classifier_dropout = 0.0
scale_embedding = True
pad_token_id = 0
bos_token_id = 2
eos_token_id = 1
attention_type = 'block_sparse'
block_size = 64
num_random_blocks = 3
use_bias = False
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 96103) —
Vocabulary size of the BigBirdPegasus model. Defines the number of different tokens that can be represented
by the inputs_ids passed when calling BigBirdPegasusModel.
d_model (int, optional, defaults to 1024) —
Dimension of the layers and the pooler layer.
encoder_layers (int, optional, defaults to 16) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 16) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 4096) —
Dimension of the “intermediate” (often named feed-forward) layer in decoder.
encoder_ffn_dim (int, optional, defaults to 4096) —
Dimension of the “intermediate” (often named feed-forward) layer in encoder.
activation_function (str or function, optional, defaults to "gelu_new") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
classifier_dropout (float, optional, defaults to 0.0) —
The dropout ratio for classifier.
max_position_embeddings (int, optional, defaults to 4096) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 1024 or 2048 or 4096).
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
attention_type (str, optional, defaults to "block_sparse") —
Whether to use block sparse attention (with n complexity) as introduced in the paper, or the original attention
layer (with n^2 complexity) in the encoder. Possible values are "original_full" and "block_sparse".
use_bias (bool, optional, defaults to False) —
Whether to use bias in query, key, value.
block_size (int, optional, defaults to 64) —
Size of each block. Useful only when attention_type == "block_sparse".
num_random_blocks (int, optional, defaults to 3) —
Each query is going to attend these many number of random blocks. Useful only when attention_type == "block_sparse".
scale_embedding (bool, optional, defaults to True) —
Whether to rescale embeddings with (hidden_size ** 0.5).
This is the configuration class to store the configuration of a BigBirdPegasusModel. It is used to instantiate
a BigBirdPegasus model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the BigBirdPegasus
google/bigbird-pegasus-large-arxiv architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import BigBirdPegasusConfig, BigBirdPegasusModel
# Initializing a BigBirdPegasus bigbird-pegasus-base style configuration
configuration = BigBirdPegasusConfig()
# Initializing a model (with random weights) from the bigbird-pegasus-base style configuration
model = BigBirdPegasusModel(configuration)
# Accessing the model configuration
configuration = model.config
BigBirdPegasusModel
class transformers.BigBirdPegasusModel
( config: BigBirdPegasusConfig )
Parameters
config (BigBirdPegasusConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare BigBirdPegasus Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Provide for translation and summarization training. By default, the model will create this tensor by
shifting the input_ids to the right, following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read
modeling_bigbird_pegasus._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in
the paper for more information on the default strategy.
decoder_head_mask (torch.Tensor of shape (num_layers, num_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdPegasusConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The BigBirdPegasusModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BigBirdPegasusModel
import torch
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
model = BigBirdPegasusModel.from_pretrained("google/bigbird-pegasus-large-arxiv")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
BigBirdPegasusForConditionalGeneration
class transformers.BigBirdPegasusForConditionalGeneration
( config: BigBirdPegasusConfig )
Parameters
config (BigBirdPegasusConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The BigBirdPegasus Model with a language modeling head. Can be used for summarization.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Provide for translation and summarization training. By default, the model will create this tensor by
shifting the input_ids to the right, following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read
modeling_bigbird_pegasus._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in
the paper for more information on the default strategy.
decoder_head_mask (torch.Tensor of shape (num_layers, num_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdPegasusConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The BigBirdPegasusForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Summarization example:
from transformers import AutoTokenizer, BigBirdPegasusForConditionalGeneration
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-arxiv")
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
ARTICLE_TO_SUMMARIZE = (
... "The dominant sequence transduction models are based on complex recurrent or convolutional neural "
... "networks in an encoder-decoder configuration. The best performing models also connect the encoder "
... "and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, "
... "based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. "
... "Experiments on two machine translation tasks show these models to be superior in quality "
... "while being more parallelizable and requiring significantly less time to train."
... )
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=4096, return_tensors="pt", truncation=True)
# Generate Summary
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=15)
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
'dominant sequence models are based on recurrent or convolutional neural networks .'
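The labels argument can also be used to compute the language modeling loss directly, for example when fine-tuning on a summarization dataset. Below is a minimal, hedged sketch; the target summary is purely illustrative, and tokenizer(text_target=...) assumes a recent enough tokenizer API:
from transformers import AutoTokenizer, BigBirdPegasusForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-arxiv")
article = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks."
summary = "Sequence transduction models are based on recurrent or convolutional networks."  # illustrative target
inputs = tokenizer(article, return_tensors="pt")
labels = tokenizer(text_target=summary, return_tensors="pt").input_ids
# decoder_input_ids are created automatically by shifting the labels to the right
outputs = model(input_ids=inputs.input_ids, labels=labels)
loss = outputs.loss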
BigBirdPegasusForSequenceClassification
class transformers.BigBirdPegasusForSequenceClassification
( config: BigBirdPegasusConfig, **kwargs )
Parameters
config (BigBirdPegasusConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
BigBirdPegasus model with a sequence classification head on top (a linear layer on top of the pooled output), e.g.
for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Provide for translation and summarization training. By default, the model will create this tensor by
shifting the input_ids to the right, following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read
modeling_bigbird_pegasus._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in
the paper for more information on the default strategy.
decoder_head_mask (torch.Tensor of shape (num_layers, num_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdPegasusConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The BigBirdPegasusForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, BigBirdPegasusForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
model = BigBirdPegasusForSequenceClassification.from_pretrained("google/bigbird-pegasus-large-arxiv")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = BigBirdPegasusForSequenceClassification.from_pretrained("google/bigbird-pegasus-large-arxiv", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, BigBirdPegasusForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
model = BigBirdPegasusForSequenceClassification.from_pretrained("google/bigbird-pegasus-large-arxiv", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = BigBirdPegasusForSequenceClassification.from_pretrained(
... "google/bigbird-pegasus-large-arxiv", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
BigBirdPegasusForQuestionAnswering
class transformers.BigBirdPegasusForQuestionAnswering
( config )
Parameters
config (BigBirdPegasusConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
BigBirdPegasus Model with a span classification head on top for extractive question-answering tasks like SQuAD (a
linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: Tensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Provide for translation and summarization training. By default, the model will create this tensor by
shifting the input_ids to the right, following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read
modeling_bigbird_pegasus._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in
the paper for more information on the default strategy.
decoder_head_mask (torch.Tensor of shape (num_layers, num_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for the position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for the position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdPegasusConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The BigBirdPegasusForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BigBirdPegasusForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
model = BigBirdPegasusForQuestionAnswering.from_pretrained("google/bigbird-pegasus-large-arxiv")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
BigBirdPegasusForCausalLM
class transformers.BigBirdPegasusForCausalLM
( config )
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of
shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of
shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional
tensors are only required when the model is used as a decoder in a Sequence to Sequence model.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those
that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of
all decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding
(see past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdPegasusConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
from transformers import AutoTokenizer, BigBirdPegasusForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
model = BigBirdPegasusForCausalLM.from_pretrained(
... "google/bigbird-pegasus-large-arxiv", add_cross_attention=False
... )
assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
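For open-ended generation with this head, one can also call generate(), which reuses the cached past_key_values (use_cache=True is the default) so that each decoding step only feeds the newest token. A minimal sketch reusing the decoder-only setup above; since the checkpoint was trained as an encoder-decoder, the generated text is only meant to illustrate the API:
from transformers import AutoTokenizer, BigBirdPegasusForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
model = BigBirdPegasusForCausalLM.from_pretrained("google/bigbird-pegasus-large-arxiv", add_cross_attention=False)
inputs = tokenizer("The paper proposes", return_tensors="pt")
# each decoding step reuses the cached key/value states instead of re-encoding the whole prefix
generated_ids = model.generate(**inputs, max_new_tokens=20, use_cache=True)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])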
DialoGPT
Overview
DialoGPT was proposed in DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao,
Jianfeng Gao, Jingjing Liu, Bill Dolan. It’s a GPT2 Model trained on 147M conversation-like exchanges extracted from
Reddit.
The abstract from the paper is the following:
We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained
transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains over a period spanning
from 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain a performance close to human
both in terms of automatic and human evaluation in single-turn dialogue settings. We show that conversational systems
that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline
systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response
generation and the development of more intelligent open-domain dialogue systems.
Tips:
DialoGPT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather
than the left.
DialoGPT was trained with a causal language modeling (CLM) objective on conversational data and is therefore powerful
at response generation in open-domain dialogue systems.
DialoGPT enables the user to create a chat bot in just 10 lines of code as shown on DialoGPT’s model card.
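As a hedged illustration of the last tip (following the pattern shown on DialoGPT’s model card and assuming the microsoft/DialoGPT-medium checkpoint), a single chat turn looks like this: the user input is terminated with the end-of-text token, the model generates the continuation, and only the newly generated tokens are decoded as the reply.
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
# encode the user input, appending the end-of-text token that separates dialogue turns
input_ids = tokenizer.encode("Does money buy happiness?" + tokenizer.eos_token, return_tensors="pt")
# generate a reply; pad_token_id is set explicitly because GPT-2 has no padding token
chat_history_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
# decode only the newly generated tokens (everything after the user input)
print(tokenizer.decode(chat_history_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))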
Training:
In order to train or fine-tune DialoGPT, one can use causal language modeling training. To cite the official paper: We
follow the OpenAI GPT-2 to model a multiturn dialogue session as a long text and frame the generation task as language
modeling. We first concatenate all dialog turns within a dialogue session into a long text x_1,…, x_N (N is the
sequence length), ended by the end-of-text token. For more information, please refer to the original paper.
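A minimal sketch of that preprocessing step (one possible implementation, not the official training script): all turns of a session are concatenated into one long text, each turn ended by the tokenizer’s end-of-text token, and the result is trained with a standard causal language modeling loss where the labels are simply the input ids.
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
# a hypothetical dialogue session with three turns
session = ["Hi, how are you?", "I'm fine, thanks. And you?", "Doing great!"]
# concatenate the turns into a single long text, each turn ended by the end-of-text token
text = "".join(turn + tokenizer.eos_token for turn in session)
inputs = tokenizer(text, return_tensors="pt")
# causal language modeling: the labels are the input ids themselves
loss = model(**inputs, labels=inputs["input_ids"]).loss
loss.backward()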
DialoGPT’s architecture is based on the GPT2 model, so one can refer to GPT2’s documentation page.
The original code can be found here.
GIT
Overview
The GIT model was proposed in GIT: A Generative Image-to-text Transformer for Vision and Language by
Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang. GIT is a decoder-only Transformer
that leverages CLIP’s vision encoder to condition the model on vision inputs besides text. The model obtains state-of-the-art results on
image captioning and visual question answering benchmarks.
The abstract from the paper is the following:
In this paper, we design and train a Generative Image-to-text Transformer, GIT, to unify vision-language tasks such as image/video captioning and question answering. While generative models provide a consistent network architecture between pre-training and fine-tuning, existing work typically contains complex structures (uni/multi-modal encoder/decoder) and depends on external modules such as object detectors/taggers and optical character recognition (OCR). In GIT, we simplify the architecture as one image encoder and one text decoder under a single language modeling task. We also scale up the pre-training data and the model size to boost the model performance. Without bells and whistles, our GIT establishes new state of the arts on 12 challenging benchmarks with a large margin. For instance, our model surpasses the human performance for the first time on TextCaps (138.2 vs. 125.5 in CIDEr). Furthermore, we present a new scheme of generation-based image classification and scene text recognition, achieving decent performance on standard benchmarks.
Tips:
GIT is implemented in a very similar way to GPT-2, the only difference being that the model is also conditioned on pixel_values.
One can use GitProcessor to prepare images for the model, and the generate method for autoregressive generation (see the captioning sketch below).
GIT architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
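As a hedged, end-to-end illustration of the tips above, the following sketch performs image captioning with the generate method. It assumes the microsoft/git-base-coco checkpoint, which was fine-tuned for captioning; other GIT captioning checkpoints work the same way.
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# the processor turns the image into pixel_values; generation is conditioned on them
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])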
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GIT.
Demo notebooks regarding inference + fine-tuning GIT on custom data can be found here.
See also: Causal language modeling task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
GitVisionConfig
class transformers.GitVisionConfig
(
hidden_size = 768
intermediate_size = 3072
num_hidden_layers = 12
num_attention_heads = 12
num_channels = 3
image_size = 224
patch_size = 16
hidden_act = 'quick_gelu'
layer_norm_eps = 1e-05
attention_dropout = 0.0
initializer_range = 0.02
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
hidden_act (str or function, optional, defaults to "quick_gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu", "gelu_new" and "quick_gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
This is the configuration class to store the configuration of a GitVisionModel. It is used to instantiate a GIT
vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the vision encoder of the GIT
microsoft/git-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import GitVisionConfig, GitVisionModel
# Initializing a GitVisionConfig with microsoft/git-base style configuration
configuration = GitVisionConfig()
# Initializing a GitVisionModel (with random weights) from the microsoft/git-base style configuration
model = GitVisionModel(configuration)
# Accessing the model configuration
configuration = model.config
GitVisionModel
class transformers.GitVisionModel
( config: GitVisionConfig )
Parameters
config (GitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The vision model from CLIP, used in GIT, without any head or projection on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GitVisionConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GitVisionModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, GitVisionModel
processor = AutoProcessor.from_pretrained("microsoft/git-base")
model = GitVisionModel.from_pretrained("microsoft/git-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
GitConfig
class transformers.GitConfig
(
vision_config = None
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 6
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 1024
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
position_embedding_type = 'absolute'
use_cache = True
tie_word_embeddings = False
bos_token_id = 101
eos_token_id = 102
num_image_with_embedding = None
**kwargs
)
Parameters
vision_config (dict, optional) —
Dictionary of configuration options used to initialize GitVisionConfig.
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the GIT model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling GitModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 6) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
num_image_with_embedding (int, optional) —
The number of temporal embeddings to add, in case the model is used for video captioning/VQA.
This is the configuration class to store the configuration of a GitModel. It is used to instantiate a GIT model
according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the GIT
microsoft/git-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import GitConfig, GitModel
# Initializing a GIT microsoft/git-base style configuration
configuration = GitConfig()
# Initializing a model (with random weights) from the microsoft/git-base style configuration
model = GitModel(configuration)
# Accessing the model configuration
configuration = model.config
to_dict
( )
Serializes this instance to a Python dictionary. Overrides the default to_dict(). Returns:
Dict[str, any]: Dictionary of all the attributes that make up this configuration instance.
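A small, hedged sketch of what the serialized dictionary looks like; the exact keys depend on the library version, but the nested vision configuration is expected to be serialized as a plain dictionary as well (an assumption based on the composite-config pattern):
from transformers import GitConfig
configuration = GitConfig()
config_dict = configuration.to_dict()
print(config_dict["model_type"])  # 'git'
# the nested vision configuration is serialized as a plain dict (assumption, see above)
print(type(config_dict["vision_config"]))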
GitProcessor
class transformers.GitProcessor
( image_processor, tokenizer )
Parameters
image_processor (AutoImageProcessor) —
The image processor is a required input.
tokenizer (AutoTokenizer) —
The tokenizer is a required input.
Constructs a GIT processor which wraps a CLIP image processor and a BERT tokenizer into a single processor.
GitProcessor offers all the functionalities of CLIPImageProcessor and BertTokenizerFast. See the
__call__() and decode() methods for more information.
__call__
(
text = None
images = None
return_tensors = None
**kwargs
)
→
BatchEncoding
Parameters
text (str, List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
is_split_into_words=True (to lift the ambiguity with a batch of sequences).
images (PIL.Image.Image, np.ndarray, torch.Tensor, List[PIL.Image.Image], List[np.ndarray], List[torch.Tensor]) —
The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
number of channels, H and W are image height and width.
return_tensors (str or TensorType, optional) —
If set, will return tensors of a particular framework. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return NumPy np.ndarray objects.
'jax': Return JAX jnp.ndarray objects.
Returns
BatchEncoding
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model. Returned when text is not None.
attention_mask — List of indices specifying which tokens should be attended to by the model (when
return_attention_mask=True or if “attention_mask” is in self.model_input_names and if text is not
None).
pixel_values — Pixel values to be fed to a model. Returned when images is not None.
Main method to prepare one or several sequence(s) and image(s) for the model. This method forwards the text
and kwargs arguments to BertTokenizerFast’s __call__() if text is not None to encode
the text. To prepare the image(s), this method forwards the images and kwargs arguments to
CLIPImageProcessor’s __call__() if images is not None. Please refer to the docstring
of the above two methods for more information.
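A minimal usage sketch (the caption string is only a placeholder; the checkpoint is the microsoft/git-base one referenced above):
from transformers import AutoProcessor
from PIL import Image
import requests

processor = AutoProcessor.from_pretrained("microsoft/git-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# The text goes through the wrapped tokenizer, the image through the wrapped image processor
inputs = processor(text="two cats lying on a couch", images=image, return_tensors="pt")
print(list(inputs.keys()))  # expect input_ids, attention_mask and pixel_values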
GitModel
class transformers.GitModel(config)
Parameters
config (GitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare GIT Model transformer consisting of a CLIP image encoder and text decoder outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward(
    input_ids: typing.Optional[torch.Tensor] = None,
    attention_mask: typing.Optional[torch.Tensor] = None,
    position_ids: typing.Optional[torch.Tensor] = None,
    pixel_values: typing.Optional[torch.Tensor] = None,
    head_mask: typing.Optional[torch.Tensor] = None,
    inputs_embeds: typing.Optional[torch.Tensor] = None,
    past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None,
    use_cache: typing.Optional[bool] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
CLIPImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GitConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GitModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoProcessor, AutoModel
import requests
from PIL import Image
processor = AutoProcessor.from_pretrained("microsoft/git-base")
model = AutoModel.from_pretrained("microsoft/git-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "this is an image of two cats"
inputs = processor(text, images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
GitForCausalLM
class transformers.GitForCausalLM(config)
Parameters
config (GitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
GIT Model with a language modeling head on top for autoregressive language modeling.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward(
    input_ids: typing.Optional[torch.Tensor] = None,
    attention_mask: typing.Optional[torch.Tensor] = None,
    position_ids: typing.Optional[torch.Tensor] = None,
    pixel_values: typing.Optional[torch.Tensor] = None,
    head_mask: typing.Optional[torch.Tensor] = None,
    inputs_embeds: typing.Optional[torch.Tensor] = None,
    labels: typing.Optional[torch.Tensor] = None,
    past_key_values: typing.Optional[typing.List[torch.Tensor]] = None,
    use_cache: typing.Optional[bool] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
CLIPImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are
ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GitConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head))
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GitForCausalLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Image captioning example:
from transformers import AutoProcessor, AutoModelForCausalLM
import requests
from PIL import Image
processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_caption)
two cats sleeping on a pink blanket next to remotes.
Visual question answering (VQA) example:
from transformers import AutoProcessor, AutoModelForCausalLM
from huggingface_hub import hf_hub_download
from PIL import Image
import torch
processor = AutoProcessor.from_pretrained("microsoft/git-base-textvqa")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-textvqa")
file_path = hf_hub_download(repo_id="nielsr/textvqa-sample", filename="bus.png", repo_type="dataset")
image = Image.open(file_path).convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
question = "what does the front of the bus say at the top?"
input_ids = processor(text=question, add_special_tokens=False).input_ids
input_ids = [processor.tokenizer.cls_token_id] + input_ids
input_ids = torch.tensor(input_ids).unsqueeze(0)
generated_ids = model.generate(pixel_values=pixel_values, input_ids=input_ids, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
['what does the front of the bus say at the top? special']
Video captioning example:
import av
import numpy as np
from PIL import Image
from huggingface_hub import hf_hub_download
from transformers import AutoProcessor, AutoModelForCausalLM
processor = AutoProcessor.from_pretrained("microsoft/git-base-vatex")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-vatex")
# set seed for reproducibility
np.random.seed(45)
def read_video_pyav(container, indices):
    '''
    Decode the video with PyAV decoder.
    Args:
        container (`av.container.input.InputContainer`): PyAV container.
        indices (`List[int]`): List of frame indices to decode.
    Returns:
        result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
    '''
    frames = []
    container.seek(0)
    start_index = indices[0]
    end_index = indices[-1]
    for i, frame in enumerate(container.decode(video=0)):
        if i > end_index:
            break
        if i >= start_index and i in indices:
            frames.append(frame)
    return np.stack([x.to_ndarray(format="rgb24") for x in frames])

def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    converted_len = int(clip_len * frame_sample_rate)
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
    return indices

# load video
file_path = hf_hub_download(
    repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
)
container = av.open(file_path)

# sample frames
num_frames = model.config.num_image_with_embedding
indices = sample_frame_indices(
    clip_len=num_frames, frame_sample_rate=4, seg_len=container.streams.video[0].frames
)
frames = read_video_pyav(container, indices)
pixel_values = processor(images=list(frames), return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print("Generated caption:", processor.batch_decode(generated_ids, skip_special_tokens=True))
Generated caption: ['a woman is sitting at a table and she is talking about the food she is holding.']
BERTweet
Overview
The BERTweet model was proposed in BERTweet: A pre-trained language model for English Tweets by Dat Quoc Nguyen, Thanh Vu, Anh Tuan Nguyen.
The abstract from the paper is the following:
We present BERTweet, the first public large-scale pre-trained language model for English Tweets. Our BERTweet, having
the same architecture as BERT-base (Devlin et al., 2019), is trained using the RoBERTa pre-training procedure (Liu et
al., 2019). Experiments show that BERTweet outperforms strong baselines RoBERTa-base and XLM-R-base (Conneau et al.,
2020), producing better performance results than the previous state-of-the-art models on three Tweet NLP tasks:
Part-of-speech tagging, Named-entity recognition and text classification.
Example of use:
import torch
from transformers import AutoModel, AutoTokenizer
bertweet = AutoModel.from_pretrained("vinai/bertweet-base")
# For transformers v4.x+:
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False)
# For transformers v3.x:
# tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
# INPUT TWEET IS ALREADY NORMALIZED!
line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"
input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
    features = bertweet(input_ids)  # Model outputs are now tuples
# With TensorFlow 2.0+:
# from transformers import TFAutoModel
# bertweet = TFAutoModel.from_pretrained("vinai/bertweet-base")
This model was contributed by dqnguyen. The original code can be found here.
BertweetTokenizer
class transformers.BertweetTokenizer(vocab_file, merges_file, normalization=False, bos_token='<s>', eos_token='</s>', sep_token='</s>', cls_token='<s>', unk_token='<unk>', pad_token='<pad>', mask_token='<mask>', **kwargs)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
normalization (bool, optional, defaults to False) —
Whether or not to apply a normalization preprocess.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
Constructs a BERTweet tokenizer, using Byte-Pair-Encoding.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
add_from_file(f)
Loads a pre-existing dictionary from a text file and adds its symbols to this instance.
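A hedged sketch of how this might be called; extra_vocab.txt is a hypothetical file, assumed to hold one "<token> <count>" entry per line in the fairseq-style dictionary format:
from transformers import BertweetTokenizer

tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base")
# "extra_vocab.txt" is a hypothetical dictionary file; its tokens are appended to the vocabulary
tokenizer.add_from_file("extra_vocab.txt")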
build_inputs_with_special_tokens(token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A BERTweet sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
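A small sketch of how these formats come about (the actual ids depend on the BERTweet vocabulary):
from transformers import BertweetTokenizer

tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("how are you"))
single = tokenizer.build_inputs_with_special_tokens(ids_a)        # <s> A </s>
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)   # <s> A </s></s> B </s>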
convert_tokens_to_string(tokens)
Converts a sequence of tokens (string) into a single string.
create_token_type_ids_from_sequences(token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. BERTweet does
not make use of token type ids, therefore a list of zeros is returned.
get_special_tokens_mask(token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None, already_has_special_tokens: bool = False) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
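For illustration, a hedged sketch of what the mask looks like for an already-encoded sequence:
from transformers import BertweetTokenizer

tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base")
ids = tokenizer.encode("hello world")  # already wrapped in <s> ... </s>
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
# mask has a 1 at the <s> and </s> positions and 0 for the regular tokens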
normalizeToken(token)
Normalize tokens in a Tweet
normalizeTweet(tweet)
Normalize a raw Tweet
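A hedged sketch of the normalization helpers (they require the nltk and emoji packages; the tweet below is made up):
from transformers import BertweetTokenizer

tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)
# User mentions become @USER and URLs become HTTPURL, matching the pretraining data
print(tokenizer.normalizeTweet("Great news from @nasa today, see https://example.com for details"))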
XLM-RoBERTa
Overview
The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume
Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook’s
RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl
data.
The abstract from the paper is the following:
This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a
wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred
languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly
outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on
XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on
low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We
also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the
trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource
languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing
per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We
will make XLM-R code, data, and models publicly available.
Tips:
XLM-RoBERTa is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does
not require lang tensors to understand which language is used, and should be able to determine the correct
language from the input ids alone (see the short sketch after these tips).
Uses RoBERTa tricks on the XLM approach, but does not use the translation language modeling objective. It only uses masked language modeling on sentences coming from one language.
This implementation is the same as RoBERTa. Refer to the documentation of RoBERTa for usage examples
as well as the information relative to the inputs and outputs.
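As referenced in the first tip above, a quick, hedged sketch of this language-agnostic behaviour: the same checkpoint can be queried in several languages without passing any language id.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="xlm-roberta-base")
fill_mask("Hello, I am a <mask> model.")         # English
fill_mask("Bonjour, je suis un modèle <mask>.")  # French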
This model was contributed by stefan-it. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with XLM-RoBERTa. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Text Classification
A blog post on how to finetune XLM RoBERTa for multiclass classification with Habana Gaudi on AWS
XLMRobertaForSequenceClassification is supported by this example script and notebook.
TFXLMRobertaForSequenceClassification is supported by this example script and notebook.
FlaxXLMRobertaForSequenceClassification is supported by this example script and notebook.
Text classification chapter of the 🤗 Hugging Face Task Guides.
Text classification task guide
Token Classification
XLMRobertaForTokenClassification is supported by this example script and notebook.
TFXLMRobertaForTokenClassification is supported by this example script and notebook.
FlaxXLMRobertaForTokenClassification is supported by this example script.
Token classification chapter of the 🤗 Hugging Face Course.
Token classification task guide
Text Generation
XLMRobertaForCausalLM is supported by this example script and notebook.
Causal language modeling chapter of the 🤗 Hugging Face Task Guides.
Causal language modeling task guide
Fill-Mask
XLMRobertaForMaskedLM is supported by this example script and notebook.
TFXLMRobertaForMaskedLM is supported by this example script and notebook.
FlaxXLMRobertaForMaskedLM is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling
Question Answering
XLMRobertaForQuestionAnswering is supported by this example script and notebook.
TFXLMRobertaForQuestionAnswering is supported by this example script and notebook.
FlaxXLMRobertaForQuestionAnswering is supported by this example script.
Question answering chapter of the 🤗 Hugging Face Course.
Question answering task guide
Multiple choice
XLMRobertaForMultipleChoice is supported by this example script and notebook.
TFXLMRobertaForMultipleChoice is supported by this example script and notebook.
Multiple choice task guide
🚀 Deploy
A blog post on how to Deploy Serverless XLM RoBERTa on AWS Lambda.
XLMRobertaConfig
class transformers.XLMRobertaConfig(
    vocab_size=30522,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    hidden_act='gelu',
    hidden_dropout_prob=0.1,
    attention_probs_dropout_prob=0.1,
    max_position_embeddings=512,
    type_vocab_size=2,
    initializer_range=0.02,
    layer_norm_eps=1e-12,
    pad_token_id=1,
    bos_token_id=0,
    eos_token_id=2,
    position_embedding_type='absolute',
    use_cache=True,
    classifier_dropout=None,
    **kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the XLM-RoBERTa model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling XLMRobertaModel or TFXLMRobertaModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling XLMRobertaModel or
TFXLMRobertaModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
This is the configuration class to store the configuration of an XLMRobertaModel or a TFXLMRobertaModel. It
is used to instantiate an XLM-RoBERTa model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the XLM-RoBERTa
xlm-roberta-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import XLMRobertaConfig, XLMRobertaModel
# Initializing a XLM-RoBERTa xlm-roberta-base style configuration
configuration = XLMRobertaConfig()
# Initializing a model (with random weights) from the xlm-roberta-base style configuration
model = XLMRobertaModel(configuration)
# Accessing the model configuration
configuration = model.config
XLMRobertaTokenizer
class transformers.XLMRobertaTokenizer(
    vocab_file,
    bos_token='<s>',
    eos_token='</s>',
    sep_token='</s>',
    cls_token='<s>',
    unk_token='<unk>',
    pad_token='<pad>',
    mask_token='<mask>',
    sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None,
    **kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) —
Additional special tokens used by the tokenizer.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method (a short usage sketch follows the class
description below). The Python wrapper for
SentencePiece can be used, among other things, to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
sp_model (SentencePieceProcessor) —
The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Adapted from RobertaTokenizer and XLNetTokenizer. Based on
SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
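A hedged sketch of passing the sp_model_kwargs parameter described above to enable subword regularization; the sampling values are arbitrary examples:
from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained(
    "xlm-roberta-base",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)
# With sampling enabled, repeated calls may segment the same text differently
print(tokenizer.tokenize("transformers are great"))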
build_inputs_with_special_tokens(token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. An XLM-RoBERTa sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
get_special_tokens_mask(token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None, already_has_special_tokens: bool = False) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences(token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does
not make use of token type ids, therefore a list of zeros is returned.
save_vocabulary(save_directory: str, filename_prefix: typing.Optional[str] = None)
XLMRobertaTokenizerFast
class transformers.XLMRobertaTokenizerFast(
    vocab_file=None,
    tokenizer_file=None,
    bos_token='<s>',
    eos_token='</s>',
    sep_token='</s>',
    cls_token='<s>',
    unk_token='<unk>',
    pad_token='<pad>',
    mask_token='<mask>',
    **kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) —
Additional special tokens used by the tokenizer.
Construct a “fast” XLM-RoBERTa tokenizer (backed by HuggingFace’s tokenizers library). Adapted from
RobertaTokenizer and XLNetTokenizer. Based on
BPE.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens(token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. An XLM-RoBERTa sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
create_token_type_ids_from_sequences(token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does
not make use of token type ids, therefore a list of zeros is returned.
XLMRobertaModel
class transformers.XLMRobertaModel(config, add_pooling_layer=True)
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare XLM-RoBERTa Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention Is
All You Need (https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both is_decoder and
add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass.
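A minimal sketch of that setup (the cross-attention weights are newly initialized, since the pretrained checkpoint does not contain them):
from transformers import XLMRobertaConfig, XLMRobertaModel

config = XLMRobertaConfig.from_pretrained("xlm-roberta-base")
config.is_decoder = True
config.add_cross_attention = True
decoder = XLMRobertaModel.from_pretrained("xlm-roberta-base", config=config)
# forward() now accepts encoder_hidden_states / encoder_attention_mask for cross-attention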
forward(
    input_ids: typing.Optional[torch.Tensor] = None,
    attention_mask: typing.Optional[torch.Tensor] = None,
    token_type_ids: typing.Optional[torch.Tensor] = None,
    position_ids: typing.Optional[torch.Tensor] = None,
    head_mask: typing.Optional[torch.Tensor] = None,
    inputs_embeds: typing.Optional[torch.Tensor] = None,
    encoder_hidden_states: typing.Optional[torch.Tensor] = None,
    encoder_attention_mask: typing.Optional[torch.Tensor] = None,
    past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None,
    use_cache: typing.Optional[bool] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The XLMRobertaModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMRobertaModel
import torch
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaModel.from_pretrained("xlm-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
XLMRobertaForCausalLM
class transformers.XLMRobertaForCausalLM(config)
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM-RoBERTa Model with a language modeling head on top for CLM fine-tuning.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward(
    input_ids: typing.Optional[torch.LongTensor] = None,
    attention_mask: typing.Optional[torch.FloatTensor] = None,
    token_type_ids: typing.Optional[torch.LongTensor] = None,
    position_ids: typing.Optional[torch.LongTensor] = None,
    head_mask: typing.Optional[torch.FloatTensor] = None,
    inputs_embeds: typing.Optional[torch.FloatTensor] = None,
    encoder_hidden_states: typing.Optional[torch.FloatTensor] = None,
    encoder_attention_mask: typing.Optional[torch.FloatTensor] = None,
    labels: typing.Optional[torch.LongTensor] = None,
    past_key_values: typing.Tuple[typing.Tuple[torch.FloatTensor]] = None,
    use_cache: typing.Optional[bool] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are
ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attention weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key
and value states of the self-attention and the cross-attention layers if the model is used in an encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The XLMRobertaForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMRobertaForCausalLM, AutoConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
config = AutoConfig.from_pretrained("roberta-base")
config.is_decoder = True
model = XLMRobertaForCausalLM.from_pretrained("roberta-base", config=config)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
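The past_key_values and use_cache arguments documented above enable incremental decoding. The following is a minimal sketch, not part of the original example, that reuses the model and tokenizer created above and shows how the cache returned by one forward pass can be fed back so that only the newly predicted token has to be passed on the next step; in practice, generate() handles this for you.
import torch

# Minimal sketch: reuse the key/value cache so only the newest token is passed on the next step.
inputs = tokenizer("Hello, my dog is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, use_cache=True)
next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy choice of the next token

# Extend the attention mask by one position and pass only the new token plus the cache.
attention_mask = torch.cat([inputs.attention_mask, torch.ones_like(next_token)], dim=-1)
with torch.no_grad():
    out = model(
        input_ids=next_token,
        attention_mask=attention_mask,
        past_key_values=out.past_key_values,
        use_cache=True,
    )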
XLMRobertaForMaskedLM
class transformers.XLMRobertaForMaskedLM
( config )
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM-RoBERTa Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
kwargs (Dict[str, any], optional, defaults to {}) —
Used to hide legacy arguments that have been deprecated.
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMRobertaForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMRobertaForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaForMaskedLM.from_pretrained("xlm-roberta-base")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
tokenizer.decode(predicted_token_id)
' Paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-<mask> tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(outputs.loss.item(), 2)
0.1
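As noted in the inputs_embeds parameter description above, the embedding lookup can be bypassed by passing embeddings directly. A minimal sketch, assuming the model and tokenizer from the example above:
import torch

# Minimal sketch: build inputs_embeds from the model's own embedding matrix and pass them
# instead of input_ids. `model` and `tokenizer` are the objects from the example above.
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
inputs_embeds = model.get_input_embeddings()(inputs.input_ids)

with torch.no_grad():
    logits = model(inputs_embeds=inputs_embeds, attention_mask=inputs.attention_mask).logits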
XLMRobertaForSequenceClassification
class transformers.XLMRobertaForSequenceClassification
( config )
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM-RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (mean-square loss); if
config.num_labels > 1, a classification loss is computed (cross-entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMRobertaForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, XLMRobertaForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
model = XLMRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
'optimism'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = XLMRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
0.08
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, XLMRobertaForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
model = XLMRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = XLMRobertaForSequenceClassification.from_pretrained(
... "cardiffnlp/twitter-roberta-base-emotion", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
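When fine-tuning from a base checkpoint on your own label set, you typically pass num_labels (and optionally the label maps) at load time, as the examples above hint with num_labels=num_labels. A minimal sketch under the assumption of a hypothetical three-class task; the label names below are placeholders, not taken from the original documentation:
from transformers import XLMRobertaForSequenceClassification

# Hypothetical three-class setup; the classification head is newly initialized and must be trained.
id2label = {0: "negative", 1: "neutral", 2: "positive"}
label2id = {label: idx for idx, label in id2label.items()}

model = XLMRobertaForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=3,
    id2label=id2label,
    label2id=label2id,
)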
XLMRobertaForMultipleChoice
class transformers.XLMRobertaForMultipleChoice
( config )
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM-RoBERTa Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMRobertaForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMRobertaForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaForMultipleChoice.from_pretrained("xlm-roberta-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
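The inputs above all carry an extra num_choices dimension, and the example handles a single question by calling unsqueeze(0). Below is a minimal sketch, assuming the same model and tokenizer, of batching two questions with two choices each by tokenizing the flattened pairs and reshaping; the question and choice strings are placeholders:
import torch

# Minimal sketch: tokenize flattened (question, choice) pairs, then reshape every tensor to
# (batch_size, num_choices, sequence_length) as the model expects.
questions = ["First placeholder question?", "Second placeholder question?"]
choices = [["choice A", "choice B"], ["choice A", "choice B"]]

first_sentences = [q for q, cs in zip(questions, choices) for _ in cs]
second_sentences = [c for cs in choices for c in cs]
encoding = tokenizer(first_sentences, second_sentences, return_tensors="pt", padding=True)

batch = {k: v.view(len(questions), 2, -1) for k, v in encoding.items()}
with torch.no_grad():
    logits = model(**batch).logits  # shape (batch_size, num_choices)
predicted_choices = logits.argmax(dim=-1)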
XLMRobertaForTokenClassification
class transformers.XLMRobertaForTokenClassification
( config )
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM-RoBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMRobertaForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMRobertaForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
model = XLMRobertaForTokenClassification.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
predicted_tokens_classes
['O', 'ORG', 'ORG', 'O', 'O', 'O', 'O', 'O', 'LOC', 'O', 'LOC', 'LOC']
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
0.01
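Because predictions are made per token rather than per word (see the comment in the example), a common post-processing step is to keep only the label of each word's first sub-token. A minimal sketch, assuming the model and tokenizer from the example above and that the tokenizer is a fast tokenizer (word_ids() is only available on fast tokenizers):
# Minimal sketch: collapse sub-token predictions to one label per word by keeping the
# prediction of each word's first sub-token.
encoding = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)
word_ids = encoding.word_ids()  # token position -> word index (or None for special tokens)

word_labels = []
previous_word_id = None
for position, word_id in enumerate(word_ids):
    if word_id is None or word_id == previous_word_id:
        continue
    word_labels.append(model.config.id2label[predicted_token_class_ids[0, position].item()])
    previous_word_id = word_id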
XLMRobertaForQuestionAnswering
class transformers.XLMRobertaForQuestionAnswering
( config )
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM-RoBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD
(linear layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMRobertaForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMRobertaForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
model = XLMRobertaForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
' puppet'
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
round(loss.item(), 2)
0.86
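Taking independent argmaxes of the start and end logits, as the example does, can occasionally yield an end index that precedes the start index. A minimal sketch, assuming the inputs and outputs from the example above, that scores start/end pairs jointly and keeps the best valid span:
import torch

# Minimal sketch: pick the best valid (start <= end) span by summing start and end logits
# instead of taking two independent argmaxes.
start_logits = outputs.start_logits[0]
end_logits = outputs.end_logits[0]

pair_scores = start_logits[:, None] + end_logits[None, :]               # score of every (start, end) pair
valid = torch.triu(torch.ones_like(pair_scores, dtype=torch.bool))      # keep only spans with start <= end
pair_scores = pair_scores.masked_fill(~valid, float("-inf"))

start_index, end_index = divmod(pair_scores.argmax().item(), pair_scores.shape[1])
answer = tokenizer.decode(inputs.input_ids[0, start_index : end_index + 1], skip_special_tokens=True)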
TFXLMRobertaModel
class transformers.TFXLMRobertaModel
( *args, **kwargs )
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare XLM RoBERTa Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don't need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
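A minimal sketch of the three equivalent input formats described above, assuming a TFXLMRobertaModel loaded from the xlm-roberta-base checkpoint:
from transformers import AutoTokenizer, TFXLMRobertaModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = TFXLMRobertaModel.from_pretrained("xlm-roberta-base")
enc = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. a single tensor with input_ids only
out1 = model(enc["input_ids"])
# 2. a list of tensors, in the order given in the docstring
out2 = model([enc["input_ids"], enc["attention_mask"]])
# 3. a dictionary mapping input names to tensors
out3 = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})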
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.__call__() and PreTrainedTokenizer.encode() for details. What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can only be used in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLMRobertaConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you're often better off
averaging or pooling the sequence of hidden-states for the whole input sequence (see the pooling sketch after the example below).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The TFXLMRobertaModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLMRobertaModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = TFXLMRobertaModel.from_pretrained("xlm-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
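As noted in the pooler_output description above, a mask-aware mean over last_hidden_state is often a better sentence representation than the pooled output. A minimal sketch, assuming the inputs and outputs from the example above:
import tensorflow as tf

# Minimal sketch: mask-aware mean pooling over the token embeddings.
hidden = outputs.last_hidden_state                                        # (batch, seq_len, hidden)
mask = tf.cast(inputs["attention_mask"], hidden.dtype)[:, :, tf.newaxis]  # (batch, seq_len, 1)
sentence_embeddings = tf.reduce_sum(hidden * mask, axis=1) / tf.reduce_sum(mask, axis=1)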
TFXLMRobertaForCausalLM
class transformers.TFXLMRobertaForCausalLM
( *args, **kwargs )
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM-RoBERTa Model with a language modeling head on top for CLM fine-tuning.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don't need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.__call__() and PreTrainedTokenizer.encode() for details. What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can only be used in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLMRobertaConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The TFXLMRobertaForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLMRobertaForCausalLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = TFXLMRobertaForCausalLM.from_pretrained("xlm-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
logits = outputs.logits
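Since use_cache defaults to True, the returned past_key_values let decoding reuse previously computed keys and values. A minimal sketch continuing the example above, assuming the generate() method provided by TFGenerationMixin and that max_new_tokens is available in your version:
# Hedged sketch: greedy generation that reuses cached key/values (use_cache=True by default).
generated_ids = model.generate(inputs["input_ids"], max_new_tokens=10)
tokenizer.decode(generated_ids[0], skip_special_tokens=True)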
TFXLMRobertaForMaskedLM
class transformers.TFXLMRobertaForMaskedLM
( *args, **kwargs )
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM RoBERTa Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
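For illustration, a minimal sketch of the three equivalent input formats described above (the checkpoint and sentence are only placeholders):
import tensorflow as tf
from transformers import AutoTokenizer, TFXLMRobertaForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = TFXLMRobertaForMaskedLM.from_pretrained("xlm-roberta-base")
enc = tokenizer("Hello world", return_tensors="tf")
input_ids, attention_mask = enc["input_ids"], enc["attention_mask"]
# The three calls below are equivalent ways of packing the inputs:
outputs = model(input_ids)                                                    # a single tensor
outputs = model([input_ids, attention_mask])                                  # a list, in docstring order
outputs = model({"input_ids": input_ids, "attention_mask": attention_mask})   # a dict keyed by input names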
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input
IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLMRobertaConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLMRobertaForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLMRobertaForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = TFXLMRobertaForMaskedLM.from_pretrained("xlm-roberta-base")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
tokenizer.decode(predicted_token_id)
' Paris'
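Beyond the single best fill, the same selected_logits can be ranked to inspect several candidates; a small sketch continuing the example above (k=5 is arbitrary):
# Hedged sketch: top-5 candidate tokens for the masked position.
top_k = tf.math.top_k(selected_logits, k=5)
[tokenizer.decode([token_id]) for token_id in top_k.indices[0].numpy().tolist()]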
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-<mask> tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(float(outputs.loss), 2)
0.1
TFXLMRobertaForSequenceClassification
class transformers.TFXLMRobertaForSequenceClassification
( *args, **kwargs )
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input
IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLMRobertaConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLMRobertaForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLMRobertaForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
model = TFXLMRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
model.config.id2label[predicted_class_id]
'optimism'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFXLMRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
round(float(loss), 2)
0.08
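The snippet above computes a single loss eagerly; because the class is a tf.keras.Model, it can also be trained with compile() and fit(). A minimal sketch under the assumption that compiling without an explicit loss falls back to the model's built-in loss, with a tiny in-memory dataset as placeholder data:
import tensorflow as tf
from transformers import AutoTokenizer, TFXLMRobertaForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = TFXLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=2)
texts = ["I love this movie.", "I did not enjoy this at all."]
labels = [1, 0]
encodings = tokenizer(texts, padding=True, return_tensors="np")
dataset = tf.data.Dataset.from_tensor_slices((dict(encodings), labels)).batch(2)
# Compiling without a loss lets the model fall back to its internal loss when labels are provided.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5))
model.fit(dataset, epochs=1)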
TFXLMRobertaForMultipleChoice
class transformers.TFXLMRobertaForMultipleChoice
( *args, **kwargs )
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM RoBERTa Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
a softmax), e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input
IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1]
where num_choices is the size of the second dimension of the input tensors (see input_ids above).
Returns
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLMRobertaConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLMRobertaForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLMRobertaForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = TFXLMRobertaForMultipleChoice.from_pretrained("xlm-roberta-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
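To train that classifier head, the index of the correct choice can be passed as labels in the same call; a hedged sketch continuing the example above, treating choice0 as the correct answer:
# Hedged sketch: loss for the (still untrained) multiple-choice head.
labels = tf.constant([0])  # index of the correct choice along the num_choices dimension
outputs = model(inputs, labels=labels)
loss = float(tf.math.reduce_mean(outputs.loss))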
TFXLMRobertaForTokenClassification
class transformers.TFXLMRobertaForTokenClassification
( *args, **kwargs )
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM RoBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input
IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLMRobertaConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLMRobertaForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLMRobertaForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("ydshieh/roberta-large-ner-english")
model = TFXLMRobertaForTokenClassification.from_pretrained("ydshieh/roberta-large-ner-english")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word.
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
predicted_tokens_classes
['O', 'ORG', 'ORG', 'O', 'O', 'O', 'O', 'O', 'LOC', 'O', 'LOC', 'LOC']
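Because predictions are made per sub-token rather than per word, they can be regrouped with the fast tokenizer's word_ids(); a small sketch continuing the example above, keeping the first sub-token's label for each word (one common convention, not the only one):
# Hedged sketch: collapse sub-token predictions to one label per word.
word_ids = inputs.word_ids(batch_index=0)
word_labels = {}
for word_id, label in zip(word_ids, predicted_tokens_classes):
    if word_id is not None and word_id not in word_labels:
        word_labels[word_id] = label  # keep the first sub-token's label for each word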
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
round(float(loss), 2)
0.01
TFXLMRobertaForQuestionAnswering
class transformers.TFXLMRobertaForQuestionAnswering
( *args, **kwargs )
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM RoBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
start_positions: np.ndarray | tf.Tensor | None = None
end_positions: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input
IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLMRobertaConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLMRobertaForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLMRobertaForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("ydshieh/roberta-base-squad2")
model = TFXLMRobertaForQuestionAnswering.from_pretrained("ydshieh/roberta-base-squad2")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
' puppet'
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
round(float(loss), 2)
0.86
FlaxXLMRobertaModel
class transformers.FlaxXLMRobertaModel
(
config: XLMRobertaConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare XLM RoBERTa Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxXLMRobertaPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxXLMRobertaModel
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = FlaxXLMRobertaModel.from_pretrained("xlm-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
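Because the parameters can be passed explicitly through the params argument, the forward pass combines naturally with the JAX features listed above, such as jax.jit. A minimal sketch continuing the example:
import jax
# Hedged sketch: a jitted forward pass with explicitly passed parameters.
@jax.jit
def encode(params, input_ids, attention_mask):
    return model(input_ids, attention_mask=attention_mask, params=params).last_hidden_state
last_hidden_states = encode(model.params, inputs["input_ids"], inputs["attention_mask"])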
FlaxXLMRobertaForCausalLM
class transformers.FlaxXLMRobertaForCausalLM
(
config: XLMRobertaConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM RoBERTa Model with a language modeling head on top (a linear layer on top of the hidden-states output), e.g. for
autoregressive tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
) → transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key and value
states of the self-attention and the cross-attention layers if the model is used in an encoder-decoder setting.
Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The FlaxXLMRobertaPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxXLMRobertaForCausalLM
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = FlaxXLMRobertaForCausalLM.from_pretrained("xlm-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
# retrieve logits for the next token
next_token_logits = outputs.logits[:, -1]
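A greedy next token can then be chosen from those logits; a small sketch continuing the example above (proper decoding strategies would go through FlaxGenerationMixin instead):
import jax.numpy as jnp
# Hedged sketch: pick the most likely next token from the logits above.
next_token_id = int(jnp.argmax(next_token_logits, axis=-1)[0])
tokenizer.decode([next_token_id])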
FlaxXLMRobertaForMaskedLM
class transformers.FlaxXLMRobertaForMaskedLM
(
config: XLMRobertaConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM RoBERTa Model with a language modeling head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxXLMRobertaPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, FlaxXLMRobertaForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = FlaxXLMRobertaForMaskedLM.from_pretrained("xlm-roberta-base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
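Continuing the example, the prediction for the masked position can be decoded back to text. This is a minimal sketch that assumes a single mask token in the input:
import jax.numpy as jnp
# Find the position of the mask token and take the highest-scoring vocabulary entry there.
mask_index = int(jnp.argwhere(inputs["input_ids"][0] == tokenizer.mask_token_id)[0, 0])
predicted_id = int(jnp.argmax(logits[0, mask_index]))
print(tokenizer.decode([predicted_id]))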
FlaxXLMRobertaForSequenceClassification
class transformers.FlaxXLMRobertaForSequenceClassification
<
source
>
(
config: XLMRobertaConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM Roberta Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxXLMRobertaPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, FlaxXLMRobertaForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = FlaxXLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
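As a follow-up sketch, the logits map to a label id via an argmax. Note that loading the base xlm-roberta-base checkpoint into a sequence classification head leaves that head randomly initialized, so the labels below are placeholders (LABEL_0, LABEL_1) until the model is fine-tuned:
import jax.numpy as jnp
predicted_class_id = int(jnp.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted_class_id])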
FlaxXLMRobertaForMultipleChoice
class transformers.FlaxXLMRobertaForMultipleChoice
<
source
>
(
config: XLMRobertaConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM Roberta Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxXLMRobertaPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, FlaxXLMRobertaForMultipleChoice
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = FlaxXLMRobertaForMultipleChoice.from_pretrained("xlm-roberta-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="jax", padding=True)
outputs = model(**{k: v[None, :] for k, v in encoding.items()})
logits = outputs.logits
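Continuing the example, the choice with the highest score can be selected as follows (a sketch; the multiple-choice head of the base checkpoint is randomly initialized, so the selection is only meaningful after fine-tuning):
import jax.numpy as jnp
best_choice = int(jnp.argmax(logits, axis=-1)[0])
print([choice0, choice1][best_choice])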
FlaxXLMRobertaForTokenClassification
class transformers.FlaxXLMRobertaForTokenClassification
<
source
>
(
config: XLMRobertaConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM Roberta Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxXLMRobertaPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, FlaxXLMRobertaForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = FlaxXLMRobertaForTokenClassification.from_pretrained("xlm-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
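A short follow-up sketch that converts the per-token logits into label names (the token classification head of the base checkpoint is randomly initialized, so the labels are placeholders until fine-tuning):
import jax.numpy as jnp
predicted_token_class_ids = jnp.argmax(logits, axis=-1)[0]
predicted_labels = [model.config.id2label[int(i)] for i in predicted_token_class_ids]
print(predicted_labels)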
FlaxXLMRobertaForQuestionAnswering
class transformers.FlaxXLMRobertaForQuestionAnswering
<
source
>
(
config: XLMRobertaConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (XLMRobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM-RoBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a
linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaConfig) and inputs.
start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxXLMRobertaPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, FlaxXLMRobertaForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = FlaxXLMRobertaForQuestionAnswering.from_pretrained("xlm-roberta-base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="jax")
outputs = model(**inputs)
start_scores = outputs.start_logits
end_scores = outputs.end_logits
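Continuing the example, the most likely answer span can be decoded from the start and end scores. This is a sketch; with the randomly initialized question-answering head of the base checkpoint, the span is not meaningful until the model is fine-tuned:
import jax.numpy as jnp
start_index = int(jnp.argmax(start_scores, axis=-1)[0])
end_index = int(jnp.argmax(end_scores, axis=-1)[0])
answer_ids = inputs["input_ids"][0, start_index : end_index + 1]
print(tokenizer.decode(answer_ids.tolist()))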
UL2
Overview
The UL2 model was presented in Unifying Language Learning Paradigms by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler.
The abstract from the paper is the following:
Existing pre-trained models are generally geared towards a particular class of problems. To date, there seems to be still no consensus on what the right architecture and pre-training setup should be. This paper presents a unified framework for pre-training models that are universally effective across datasets and setups. We begin by disentangling architectural archetypes with pre-training objectives — two concepts that are commonly conflated. Next, we present a generalized and unified perspective for self-supervision in NLP and show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective. We then propose Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. We furthermore introduce a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes. We conduct extensive ablative experiments to compare multiple pre-training objectives and find that our method pushes the Pareto-frontier by outperforming T5 and/or GPT-like models across multiple diverse setups. Finally, by scaling our model up to 20B parameters, we achieve SOTA performance on 50 well-established supervised NLP tasks ranging from language generation (with automated and human evaluation), language understanding, text classification, question answering, commonsense reasoning, long text reasoning, structured knowledge grounding and information retrieval. Our model also achieve strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization.
Tips:
UL2 is an encoder-decoder model pre-trained on a mixture of denoising functions as well as fine-tuned on an array of downstream tasks.
UL2 has the same architecture as T5v1.1 but uses the Gated-SiLU activation function instead of Gated-GELU.
The authors released checkpoints of a single architecture, which can be seen here.
The original code can be found here.
This model was contributed by DanielHesslow.
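Since UL2 reuses the T5 architecture, it loads through the standard seq2seq auto classes. The sketch below assumes the publicly released google/ul2 checkpoint and enough memory (or Accelerate offloading via device_map="auto") for the 20B parameters; the prompt and output are purely illustrative:
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/ul2")
# 20B parameters: let Accelerate place/offload the weights across the available devices.
model = AutoModelForSeq2SeqLM.from_pretrained("google/ul2", device_map="auto")
inputs = tokenizer("Summarize: UL2 combines several denoising objectives in one pre-training mixture.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))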
ALBERT
Overview
The ALBERT model was proposed in ALBERT: A Lite BERT for Self-supervised Learning of Language Representations by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma,
Radu Soricut. It presents two parameter-reduction techniques to lower memory consumption and increase the training
speed of BERT:
Splitting the embedding matrix into two smaller matrices.
Using repeating layers split among groups.
The abstract from the paper is the following:
Increasing model size when pretraining natural language representations often results in improved performance on
downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations,
longer training times, and unexpected model degradation. To address these problems, we present two parameter-reduction
techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows
that our proposed methods lead to models that scale much better compared to the original BERT. We also use a
self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks
with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and
SQuAD benchmarks while having fewer parameters compared to BERT-large.
Tips:
ALBERT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather
than the left.
ALBERT uses repeating layers, which results in a small memory footprint; however, the computational cost remains
similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same
number of (repeating) layers.
The embedding size E is kept different from the hidden size H because the embeddings are context independent (one embedding vector represents one token), whereas hidden states are context dependent (one hidden state represents a sequence of tokens), so it is more logical to have H >> E. Also, the embedding matrix is large since it is V x E (V being the vocab size). If E < H, it has fewer parameters.
Layers are split in groups that share parameters (to save memory).
Next sentence prediction is replaced by a sentence ordering prediction: in the inputs, we have two sentences A and B (that are consecutive) and we either feed A followed by B or B followed by A. The model must predict if they have been swapped or not.
This model was contributed by lysandre. The JAX version of this model was contributed by
kamalkraj. The original code can be found here.
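Both parameter-reduction tricks are easy to see on a loaded checkpoint. The sketch below assumes the albert-base-v2 checkpoint, whose configuration uses a 128-dimensional embedding space, a 768-dimensional hidden space, and a single shared group for its 12 layers:
from transformers import AlbertModel
model = AlbertModel.from_pretrained("albert-base-v2")
# Factorized embeddings: the embedding size is much smaller than the hidden size.
print(model.config.embedding_size, model.config.hidden_size)
# Cross-layer parameter sharing: all hidden layers reuse the weights of a single group.
print(model.config.num_hidden_layers, model.config.num_hidden_groups)
print(round(sum(p.numel() for p in model.parameters()) / 1e6, 1), "M parameters")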
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
AlbertConfig
class transformers.AlbertConfig
<
source
>
(
vocab_size = 30000
embedding_size = 128
hidden_size = 4096
num_hidden_layers = 12
num_hidden_groups = 1
num_attention_heads = 64
intermediate_size = 16384
inner_group_num = 1
hidden_act = 'gelu_new'
hidden_dropout_prob = 0
attention_probs_dropout_prob = 0
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
classifier_dropout_prob = 0.1
position_embedding_type = 'absolute'
pad_token_id = 0
bos_token_id = 2
eos_token_id = 3
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30000) —
Vocabulary size of the ALBERT model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling AlbertModel or TFAlbertModel.
embedding_size (int, optional, defaults to 128) —
Dimensionality of vocabulary embeddings.
hidden_size (int, optional, defaults to 4096) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_hidden_groups (int, optional, defaults to 1) —
Number of groups for the hidden layers, parameters in the same group are shared.
num_attention_heads (int, optional, defaults to 64) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 16384) —
The dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
inner_group_num (int, optional, defaults to 1) —
The number of inner repetitions of the attention and feed-forward blocks.
hidden_act (str or Callable, optional, defaults to "gelu_new") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
(e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling AlbertModel or TFAlbertModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
classifier_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for attached classifiers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
This is the configuration class to store the configuration of an AlbertModel or a TFAlbertModel. It is used
to instantiate an ALBERT model according to the specified arguments, defining the model architecture. Instantiating
a configuration with the defaults will yield a similar configuration to that of the ALBERT
albert-xxlarge-v2 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
Copied
from transformers import AlbertConfig, AlbertModel
# Initializing an ALBERT-xxlarge style configuration
albert_xxlarge_configuration = AlbertConfig()
# Initializing an ALBERT-base style configuration
albert_base_configuration = AlbertConfig(
... hidden_size=768,
... num_attention_heads=12,
... intermediate_size=3072,
... )
# Initializing a model (with random weights) from the ALBERT-base style configuration
model = AlbertModel(albert_base_configuration)
# Accessing the model configuration
configuration = model.config
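Configurations can also be loaded from a pretrained checkpoint and selectively overridden. A minimal sketch (the save path is illustrative):
from transformers import AlbertConfig
# Load the albert-base-v2 configuration and override a single field.
config = AlbertConfig.from_pretrained("albert-base-v2", classifier_dropout_prob=0.2)
# Persist it as config.json for later reuse.
config.save_pretrained("./my-albert-config")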
AlbertTokenizer
class transformers.AlbertTokenizer
<
source
>
(
vocab_file
do_lower_case = True
remove_space = True
keep_accents = False
bos_token = '[CLS]'
eos_token = '[SEP]'
unk_token = '<unk>'
sep_token = '[SEP]'
pad_token = '<pad>'
cls_token = '[CLS]'
mask_token = '[MASK]'
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
remove_space (bool, optional, defaults to True) —
Whether or not to strip the text when tokenizing (removing excess spaces before and after the string).
keep_accents (bool, optional, defaults to False) —
Whether or not to keep accents when tokenizing.
bos_token (str, optional, defaults to "[CLS]") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "[SEP]") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
sp_model (SentencePieceProcessor) —
The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Construct an ALBERT tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
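A minimal usage sketch, including the sp_model_kwargs hook described above for SentencePiece subword regularization (sampling is normally only useful during training, and it requires this slow, SentencePiece-backed tokenizer):
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
print(tokenizer.tokenize("Hello world!"))
# Enable subword sampling (BPE-dropout-style regularization) via the underlying SentencePiece processor.
sampling_tokenizer = AlbertTokenizer.from_pretrained(
    "albert-base-v2", sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1}
)
print(sampling_tokenizer.tokenize("Hello world!"))  # may differ from call to call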
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An ALBERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
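For example, on raw token ids the two layouts look like this (a minimal sketch using the albert-base-v2 vocabulary):
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
ids_b = tokenizer.encode("How are you?", add_special_tokens=False)
print(tokenizer.build_inputs_with_special_tokens(ids_a))         # [CLS] X [SEP]
print(tokenizer.build_inputs_with_special_tokens(ids_a, ids_b))  # [CLS] A [SEP] B [SEP]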
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An ALBERT
sequence pair mask has the following format:
Copied
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
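The mask above can be reproduced directly (a sketch reusing the albert-base-v2 tokenizer):
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
ids_a = tokenizer.encode("First sentence", add_special_tokens=False)
ids_b = tokenizer.encode("Second sentence", add_special_tokens=False)
# 0s cover [CLS] A [SEP]; 1s cover B [SEP].
print(tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b))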
save_vocabulary
<
source
>
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
AlbertTokenizerFast
class transformers.AlbertTokenizerFast
<
source
>
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
remove_space = True
keep_accents = False
bos_token = '[CLS]'
eos_token = '[SEP]'
unk_token = '<unk>'
sep_token = '[SEP]'
pad_token = '<pad>'
cls_token = '[CLS]'
mask_token = '[MASK]'
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
remove_space (bool, optional, defaults to True) —
Whether or not to strip the text when tokenizing (removing excess spaces before and after the string).
keep_accents (bool, optional, defaults to False) —
Whether or not to keep accents when tokenizing.
bos_token (str, optional, defaults to "[CLS]") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "[SEP]") —
The end of sequence token. Note: when building a sequence using special tokens, this is not the token
that is used for the end of sequence. The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
Construct a “fast” ALBERT tokenizer (backed by HuggingFace’s tokenizers library). Based on
Unigram. This
tokenizer inherits from PreTrainedTokenizerFast, which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
list of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An ALBERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of ids.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An ALBERT
sequence pair mask has the following format:
Copied
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
Albert specific outputs
class transformers.models.albert.modeling_albert.AlbertForPreTrainingOutput
<
source
>
(
loss: typing.Optional[torch.FloatTensor] = None
prediction_logits: FloatTensor = None
sop_logits: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) —
Total loss as the sum of the masked language modeling loss and the sentence order prediction
(classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
sop_logits (torch.FloatTensor of shape (batch_size, 2)) —
Prediction scores of the sentence order prediction (classification) head (scores of original vs. swapped
order before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of AlbertForPreTraining.
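A minimal sketch of how this output type is produced and consumed (assumes the albert-base-v2 checkpoint; depending on the checkpoint, parts of the pretraining heads may be freshly initialized):
import torch
from transformers import AutoTokenizer, AlbertForPreTraining
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AlbertForPreTraining.from_pretrained("albert-base-v2")
inputs = tokenizer("The cat sat on the mat.", "It looked very comfortable.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.prediction_logits.shape)  # (batch_size, sequence_length, vocab_size), the MLM head
print(outputs.sop_logits.shape)         # (batch_size, 2), the sentence-order head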
class transformers.models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput
<
source
>
(
loss: tf.Tensor = None
prediction_logits: tf.Tensor = None
sop_logits: tf.Tensor = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
)
Parameters
prediction_logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
sop_logits (tf.Tensor of shape (batch_size, 2)) —
Prediction scores of the sentence order prediction (classification) head (scores of original vs. swapped
order before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of TFAlbertForPreTraining.
AlbertModel
class transformers.AlbertModel
<
source
>
(
config: AlbertConfig
add_pooling_layer: bool = True
)
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ALBERT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (AlbertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The AlbertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, AlbertModel
import torch
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AlbertModel.from_pretrained("albert-base-v2")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
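As a short follow-up to the example, the two tensors documented in the return section can be inspected directly:
print(last_hidden_states.shape)    # (batch_size, sequence_length, hidden_size); hidden_size is 768 for albert-base-v2
print(outputs.pooler_output.shape) # (batch_size, hidden_size)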
AlbertForPreTraining
class transformers.AlbertForPreTraining
<
source
>
(
config: AlbertConfig
)
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with two heads on top as done during the pretraining: a masked language modeling head and a
sentence order prediction (classification) head.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
sentence_order_label: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.albert.modeling_albert.AlbertForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
sentence_order_label (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sentence order prediction (classification) loss. Input should be a sequence pair
(see input_ids docstring). Indices should be in [0, 1]: 0 indicates the original order (sequence A, then
sequence B), 1 indicates a switched order (sequence B, then sequence A).
Returns
transformers.models.albert.modeling_albert.AlbertForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.albert.modeling_albert.AlbertForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (AlbertConfig) and inputs.
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the sentence order prediction
(classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
sop_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the sentence order prediction (classification) head (scores of original/switched order
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The AlbertForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, AlbertForPreTraining
import torch
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AlbertForPreTraining.from_pretrained("albert-base-v2")
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)
# Batch size 1
outputs = model(input_ids)
prediction_logits = outputs.prediction_logits
sop_logits = outputs.sop_logits
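To obtain the combined pretraining loss described in the parameters above, pass labels and sentence_order_label together. This is a minimal sketch rather than an official recipe; the dummy targets below (the unmasked input ids as MLM labels and a sentence_order_label of 0) are only illustrative:
import torch
from transformers import AutoTokenizer, AlbertForPreTraining
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AlbertForPreTraining.from_pretrained("albert-base-v2")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
# Dummy targets: reuse the input ids as MLM labels and assume the original sentence order (0).
labels = inputs.input_ids.clone()
sentence_order_label = torch.tensor([0])
outputs = model(**inputs, labels=labels, sentence_order_label=sentence_order_label)
loss = outputs.loss  # sum of the masked language modeling loss and the sentence order prediction loss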
AlbertForMaskedLM
class transformers.AlbertForMaskedLM
( config )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (AlbertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The AlbertForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, AlbertForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AlbertForMaskedLM.from_pretrained("albert-base-v2")
# add mask_token
inputs = tokenizer("The capital of [MASK] is Paris.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
tokenizer.decode(predicted_token_id)
'france'
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(outputs.loss.item(), 2)
0.81
AlbertForSequenceClassification
class transformers.AlbertForSequenceClassification
( config: AlbertConfig )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Albert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (AlbertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The AlbertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, AlbertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("textattack/albert-base-v2-imdb")
model = AlbertForSequenceClassification.from_pretrained("textattack/albert-base-v2-imdb")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
'LABEL_1'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = AlbertForSequenceClassification.from_pretrained("textattack/albert-base-v2-imdb", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
0.12
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, AlbertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("textattack/albert-base-v2-imdb")
model = AlbertForSequenceClassification.from_pretrained("textattack/albert-base-v2-imdb", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = AlbertForSequenceClassification.from_pretrained(
    "textattack/albert-base-v2-imdb", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
AlbertForMultipleChoice
class transformers.AlbertForMultipleChoice
( config: AlbertConfig )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1], where num_choices is the size of the second dimension of the input tensors (see
input_ids above).
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (AlbertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The AlbertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, AlbertForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AlbertForMultipleChoice.from_pretrained("albert-base-v2")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
AlbertForTokenClassification
class transformers.AlbertForTokenClassification
( config: AlbertConfig )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (AlbertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The AlbertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, AlbertForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AlbertForTokenClassification.from_pretrained("albert-base-v2")
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word.
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
AlbertForQuestionAnswering
class transformers.AlbertForQuestionAnswering
( config: AlbertConfig )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (AlbertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The AlbertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, AlbertForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("twmkn9/albert-base-v2-squad2")
model = AlbertForQuestionAnswering.from_pretrained("twmkn9/albert-base-v2-squad2")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
'a nice puppet'
# target is "nice puppet"
target_start_index = torch.tensor([12])
target_end_index = torch.tensor([13])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
round(loss.item(), 2)
7.36
TFAlbertModel
class transformers.TFAlbertModel
( *args, **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Albert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument (each of these is illustrated in the short sketch after the note below):
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated with the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
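As a concrete illustration of the three input formats above, here is a short sketch (assuming the albert-base-v2 checkpoint; it is not part of the original documentation) showing the equivalent calls:
from transformers import AutoTokenizer, TFAlbertModel
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = TFAlbertModel.from_pretrained("albert-base-v2")
encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")
# 1. a single tensor with input_ids only
outputs = model(encoding["input_ids"])
# 2. a list of tensors, in the order given in the docstring
outputs = model([encoding["input_ids"], encoding["attention_mask"]])
# 3. a dictionary mapping input names to tensors
outputs = model({"input_ids": encoding["input_ids"], "token_type_ids": encoding["token_type_ids"]})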
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (AlbertConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you’re often better off
averaging or pooling the sequence of hidden-states for the whole input sequence.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFAlbertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFAlbertModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = TFAlbertModel.from_pretrained("albert-base-v2")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFAlbertForPreTraining
class transformers.TFAlbertForPreTraining
( *args, **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with two heads on top for pretraining: a masked language modeling head and a sentence order prediction (classification) head.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated with the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
sentence_order_label: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput or tuple(tf.Tensor)
A transformers.models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (AlbertConfig) and inputs.
prediction_logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
sop_logits (tf.Tensor of shape (batch_size, 2)) — Prediction scores of the sentence order prediction (classification) head (scores of original/switched order
before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFAlbertForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import tensorflow as tf
from transformers import AutoTokenizer, TFAlbertForPreTraining
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = TFAlbertForPreTraining.from_pretrained("albert-base-v2")
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :]
# Batch size 1
outputs = model(input_ids)
prediction_logits = outputs.prediction_logits
sop_logits = outputs.sop_logits
TFAlbertForMaskedLM
class transformers.TFAlbertForMaskedLM
( *args, **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated with the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
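As an illustration, here is a minimal sketch (not taken from the library's documented examples; it assumes the albert-base-v2 checkpoint) showing the same encoded input passed in each of the three formats described above:
from transformers import AutoTokenizer, TFAlbertForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = TFAlbertForMaskedLM.from_pretrained("albert-base-v2")
encoding = tokenizer("The capital of [MASK] is Paris.", return_tensors="tf")
# Format 1: keyword arguments, as with the PyTorch models
out_kwargs = model(input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"])
# Format 2: a list with the tensors IN THE ORDER given in the docstring
out_list = model([encoding["input_ids"], encoding["attention_mask"]])
# Format 3: a dict keyed by the input names given in the docstring
out_dict = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})
All three calls return equivalent TFMaskedLMOutput objects.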
call(
    input_ids: TFModelInputType | None = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    position_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size - 1] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (AlbertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFAlbertForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
import tensorflow as tf
from transformers import AutoTokenizer, TFAlbertForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = TFAlbertForMaskedLM.from_pretrained("albert-base-v2")
# add mask_token
inputs = tokenizer("The capital of [MASK] is Paris.", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = tf.where(inputs.input_ids == tokenizer.mask_token_id)[0][1]
predicted_token_id = tf.math.argmax(logits[0, mask_token_index], axis=-1)
tokenizer.decode(predicted_token_id)
'france'
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(float(outputs.loss), 2)
0.81
TFAlbertForSequenceClassification
class transformers.TFAlbertForSequenceClassification( *args, **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Albert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
    input_ids: TFModelInputType | None = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    position_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (mean-square loss); if config.num_labels > 1, a classification loss is computed (cross-entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (AlbertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFAlbertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFAlbertForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("vumichien/albert-base-v2-imdb")
model = TFAlbertForSequenceClassification.from_pretrained("vumichien/albert-base-v2-imdb")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
model.config.id2label[predicted_class_id]
'LABEL_1'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFAlbertForSequenceClassification.from_pretrained("vumichien/albert-base-v2-imdb", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
round(float(loss), 2)
0.12
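The labels argument described above also covers the regression case: when config.num_labels == 1 the head produces a single score and the loss switches to mean-squared error. A hedged sketch of that case (the regression head here is freshly initialized from the base albert-base-v2 weights, so the values are untrained):
import tensorflow as tf
from transformers import AutoTokenizer, TFAlbertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
regressor = TFAlbertForSequenceClassification.from_pretrained("albert-base-v2", num_labels=1)
inputs = tokenizer("This movie was surprisingly good.", return_tensors="tf")
labels = tf.constant([3.5])  # one continuous target for the single example in the batch
outputs = regressor(**inputs, labels=labels)
loss = outputs.loss  # mean-squared-error loss because config.num_labels == 1
logits = outputs.logits  # shape (1, 1): a single regression score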
TFAlbertForMultipleChoice
class transformers.TFAlbertForMultipleChoice( *args, **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
    input_ids: TFModelInputType | None = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    position_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1], where num_choices is the size of the second dimension of the input tensors. (See input_ids above.)
Returns
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (AlbertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFAlbertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFAlbertForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = TFAlbertForMultipleChoice.from_pretrained("albert-base-v2")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
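The example above stops at the logits. A hedged follow-up sketch of passing labels (here choice 0, the fork-and-knife answer) to obtain the multiple-choice classification loss as well:
labels = tf.constant([0])  # shape (batch_size,), values in [0, num_choices - 1]
outputs = model(inputs, labels=labels)
loss = outputs.loss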
TFAlbertForTokenClassification
class transformers.TFAlbertForTokenClassification( *args, **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
    input_ids: TFModelInputType | None = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    position_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (AlbertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFAlbertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFAlbertForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = TFAlbertForTokenClassification.from_pretrained("albert-base-v2")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
TFAlbertForQuestionAnswering
class transformers.TFAlbertForQuestionAnswering( *args, **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Albert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
    input_ids: TFModelInputType | None = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    position_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    start_positions: np.ndarray | tf.Tensor | None = None,
    end_positions: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (AlbertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span-extraction loss: the sum of a cross-entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFAlbertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFAlbertForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("vumichien/albert-base-v2-squad2")
model = TFAlbertForQuestionAnswering.from_pretrained("vumichien/albert-base-v2-squad2")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
'a nice puppet'
# target is "nice puppet"
target_start_index = tf.constant([12])
target_end_index = tf.constant([13])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
round(float(loss), 2)
7.36
FlaxAlbertModel
class transformers.FlaxAlbertModel( config: AlbertConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The bare Albert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
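For example, the whole forward pass can be wrapped in jax.jit so that repeated calls reuse a compiled computation. A minimal sketch (assuming the albert-base-v2 checkpoint; not part of the documented example further below):
import jax
from transformers import AutoTokenizer, FlaxAlbertModel
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = FlaxAlbertModel.from_pretrained("albert-base-v2")
@jax.jit
def encode(input_ids, attention_mask):
    # the model call traces into pure JAX ops, so it can be jit-compiled
    return model(input_ids, attention_mask=attention_mask).last_hidden_state
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
hidden_states = encode(inputs["input_ids"], inputs["attention_mask"])  # compiled on first call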
__call__(
    input_ids,
    attention_mask = None,
    token_type_ids = None,
    position_ids = None,
    params: dict = None,
    dropout_rng: PRNGKey = None,
    train: bool = False,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None
) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxAlbertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxAlbertModel
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = FlaxAlbertModel.from_pretrained("albert-base-v2")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
FlaxAlbertForPreTraining
class transformers.FlaxAlbertForPreTraining( config: AlbertConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Albert Model with two heads on top as done during the pretraining: a masked language modeling head and a
sentence order prediction (classification) head.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
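As a concrete illustration of the dtype parameter described above, here is a hedged sketch of half-precision inference on a bfloat16-capable backend: dtype controls the computation dtype, while to_bf16() casts the stored parameters themselves.
import jax.numpy as jnp
from transformers import FlaxAlbertForPreTraining
# run the computation in bfloat16 ...
model = FlaxAlbertForPreTraining.from_pretrained("albert-base-v2", dtype=jnp.bfloat16)
# ... and, optionally, cast the stored parameters to bfloat16 as well
model.params = model.to_bf16(model.params)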
__call__(
    input_ids,
    attention_mask = None,
    token_type_ids = None,
    position_ids = None,
    params: dict = None,
    dropout_rng: PRNGKey = None,
    train: bool = False,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None
) → transformers.models.albert.modeling_flax_albert.FlaxAlbertForPreTrainingOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.albert.modeling_flax_albert.FlaxAlbertForPreTrainingOutput or tuple(jnp.ndarray)
A transformers.models.albert.modeling_flax_albert.FlaxAlbertForPreTrainingOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
prediction_logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
sop_logits (jnp.ndarray of shape (batch_size, 2)) — Prediction scores of the sentence order prediction (classification) head (scores of correct/swapped segment order before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxAlbertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxAlbertForPreTraining
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = FlaxAlbertForPreTraining.from_pretrained("albert-base-v2")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
prediction_logits = outputs.prediction_logits
seq_relationship_logits = outputs.sop_logits
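As a small follow-up (a sketch, not part of the documented example), the two SOP logits can be turned into probabilities with a softmax:
import jax
sop_probs = jax.nn.softmax(seq_relationship_logits, axis=-1)  # shape (batch_size, 2)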
FlaxAlbertForMaskedLM
class transformers.FlaxAlbertForMaskedLM( config: AlbertConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Albert Model with a language modeling head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__(
    input_ids,
    attention_mask = None,
    token_type_ids = None,
    position_ids = None,
    params: dict = None,
    dropout_rng: PRNGKey = None,
    train: bool = False,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None
) → transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (AlbertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxAlbertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxAlbertForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = FlaxAlbertForMaskedLM.from_pretrained("albert-base-v2")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
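A hedged follow-up sketch of recovering the predicted token at the [MASK] position from these logits (the intermediate names are ad hoc, not part of the documented example):
mask_index = int((inputs["input_ids"][0] == tokenizer.mask_token_id).argmax())
predicted_id = int(logits[0, mask_index].argmax(axis=-1))
tokenizer.decode([predicted_id])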
FlaxAlbertForSequenceClassification
class transformers.FlaxAlbertForSequenceClassification( config: AlbertConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, **kwargs )
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs; a short
sketch follows the class description below. If specified, all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Albert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
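The dtype argument and the to_bf16()/to_fp16() utilities described above can be combined for half-precision inference. A minimal sketch, assuming a bfloat16-capable accelerator:
import jax.numpy as jnp
from transformers import FlaxAlbertForSequenceClassification
# Run the computation in bfloat16; the parameters themselves stay in float32
model = FlaxAlbertForSequenceClassification.from_pretrained("albert-base-v2", dtype=jnp.bfloat16)
# Optionally also cast the parameters to bfloat16
model.params = model.to_bf16(model.params)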
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (AlbertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxAlbertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxAlbertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = FlaxAlbertForSequenceClassification.from_pretrained("albert-base-v2")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
FlaxAlbertForMultipleChoice
class transformers.FlaxAlbertForMultipleChoice
(
config: AlbertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified, all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Albert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (AlbertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, num_choices)) — Classification scores (before SoftMax). num_choices is the second dimension of the input tensors (see input_ids above).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxAlbertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxAlbertForMultipleChoice
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = FlaxAlbertForMultipleChoice.from_pretrained("albert-base-v2")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="jax", padding=True)
outputs = model(**{k: v[None, :] for k, v in encoding.items()})
logits = outputs.logits
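The v[None, :] in the call above adds the batch dimension expected by the model, giving inputs of shape (1, num_choices, sequence_length). As a hedged follow-up (not part of the original example), the highest-scoring choice can be read out like this:
predicted_choice = int(logits.argmax(-1)[0])  # 0 -> choice0, 1 -> choice1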
FlaxAlbertForTokenClassification
class transformers.FlaxAlbertForTokenClassification
(
config: AlbertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified, all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Albert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (AlbertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxAlbertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxAlbertForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = FlaxAlbertForTokenClassification.from_pretrained("albert-base-v2")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
FlaxAlbertForQuestionAnswering
class transformers.FlaxAlbertForQuestionAnswering
(
config: AlbertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (AlbertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified, all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Albert Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (AlbertConfig) and inputs.
start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxAlbertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxAlbertForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = FlaxAlbertForQuestionAnswering.from_pretrained("albert-base-v2")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="jax")
outputs = model(**inputs)
start_scores = outputs.start_logits
end_scores = outputs.end_logits
Neighborhood Attention Transformer
Overview
NAT was proposed in Neighborhood Attention Transformer
by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi.
It is a hierarchical vision transformer based on Neighborhood Attention, a sliding-window self attention pattern.
The abstract from the paper is the following:
We present Neighborhood Attention (NA), the first efficient and scalable sliding-window attention mechanism for vision.
NA is a pixel-wise operation, localizing self attention (SA) to the nearest neighboring pixels, and therefore enjoys a
linear time and space complexity compared to the quadratic complexity of SA. The sliding-window pattern allows NA’s
receptive field to grow without needing extra pixel shifts, and preserves translational equivariance, unlike
Swin Transformer’s Window Self Attention (WSA). We develop NATTEN (Neighborhood Attention Extension), a Python package
with efficient C++ and CUDA kernels, which allows NA to run up to 40% faster than Swin’s WSA while using up to 25% less
memory. We further present Neighborhood Attention Transformer (NAT), a new hierarchical transformer design based on NA
that boosts image classification and downstream vision performance. Experimental results on NAT are competitive;
NAT-Tiny reaches 83.2% top-1 accuracy on ImageNet, 51.4% mAP on MS-COCO and 48.4% mIoU on ADE20K, which is 1.9%
ImageNet accuracy, 1.0% COCO mAP, and 2.6% ADE20K mIoU improvement over a Swin model with similar size.
Tips:
One can use the AutoImageProcessor API to prepare images for the model.
NAT can be used as a backbone. When output_hidden_states = True,
it will output both hidden_states and reshaped_hidden_states. The reshaped_hidden_states have a shape of
(batch_size, num_channels, height, width) rather than
(batch_size, height, width, num_channels); see the sketch after the notes below.
Notes:
NAT depends on NATTEN’s implementation of Neighborhood Attention.
You can install it with pre-built wheels for Linux by referring to shi-labs.com/natten,
or build on your system by running pip install natten.
Note that the latter will likely take time to compile. NATTEN does not support Windows devices yet.
Only a patch size of 4 is supported at the moment.
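As referenced in the tips above, here is a minimal sketch of reading the per-stage feature maps, assuming NATTEN is installed as described in the notes (the random pixel_values tensor simply stands in for a preprocessed image):
import torch
from transformers import NatModel
model = NatModel.from_pretrained("shi-labs/nat-mini-in1k-224")
pixel_values = torch.randn(1, 3, 224, 224)  # stand-in for AutoImageProcessor output
with torch.no_grad():
    outputs = model(pixel_values, output_hidden_states=True)
# hidden_states are channels-last; reshaped_hidden_states are channels-first per stage
print([tuple(h.shape) for h in outputs.reshaped_hidden_states])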
Neighborhood Attention compared to other attention patterns.
Taken from the original paper.
This model was contributed by Ali Hassani.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with NAT.
Image Classification
NatForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
NatConfig
class transformers.NatConfig
(
patch_size = 4
num_channels = 3
embed_dim = 64
depths = [3, 4, 6, 5]
num_heads = [2, 4, 8, 16]
kernel_size = 7
mlp_ratio = 3.0
qkv_bias = True
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
drop_path_rate = 0.1
hidden_act = 'gelu'
initializer_range = 0.02
layer_norm_eps = 1e-05
layer_scale_init_value = 0.0
out_features = None
out_indices = None
**kwargs
)
Parameters
patch_size (int, optional, defaults to 4) —
The size (resolution) of each patch. NOTE: Only patch size of 4 is supported at the moment.
num_channels (int, optional, defaults to 3) —
The number of input channels.
embed_dim (int, optional, defaults to 64) —
Dimensionality of patch embedding.
depths (List[int], optional, defaults to [3, 4, 6, 5]) —
Number of layers in each level of the encoder.
num_heads (List[int], optional, defaults to [2, 4, 8, 16]) —
Number of attention heads in each layer of the Transformer encoder.
kernel_size (int, optional, defaults to 7) —
Neighborhood Attention kernel size.
mlp_ratio (float, optional, defaults to 3.0) —
Ratio of MLP hidden dimensionality to embedding dimensionality.
qkv_bias (bool, optional, defaults to True) —
Whether or not a learnable bias should be added to the queries, keys and values.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings and encoder.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
drop_path_rate (float, optional, defaults to 0.1) —
Stochastic depth rate.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder. If string, "gelu", "relu",
"selu" and "gelu_new" are supported.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-05) —
The epsilon used by the layer normalization layers.
layer_scale_init_value (float, optional, defaults to 0.0) —
The initial value for the layer scale. Disabled if <=0.
out_features (List[str], optional) —
If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc.
(depending on how many stages the model has). If unset and out_indices is set, will default to the
corresponding stages. If unset and out_indices is unset, will default to the last stage.
out_indices (List[int], optional) —
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
many stages the model has). If unset and out_features is set, will default to the corresponding stages.
If unset and out_features is unset, will default to the last stage.
This is the configuration class to store the configuration of a NatModel. It is used to instantiate a Nat model
according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Nat
shi-labs/nat-mini-in1k-224 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import NatConfig, NatModel
# Initializing a Nat shi-labs/nat-mini-in1k-224 style configuration
configuration = NatConfig()
# Initializing a model (with random weights) from the shi-labs/nat-mini-in1k-224 style configuration
model = NatModel(configuration)
# Accessing the model configuration
configuration = model.config
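As a hedged follow-up, the out_features argument documented above is what backbone-style wrappers read when NAT serves as a feature extractor; the stage names below are the ones valid for the default four-stage configuration:
# Expose intermediate stages for a hypothetical downstream (e.g. detection) head
backbone_config = NatConfig(out_features=["stage1", "stage2", "stage3", "stage4"])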
NatModel
class transformers.NatModel
(
config
add_pooling_layer = True
)
Parameters
config (NatConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Nat Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.nat.modeling_nat.NatModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.__call__()
for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.nat.modeling_nat.NatModelOutput or tuple(torch.FloatTensor)
A transformers.models.nat.modeling_nat.NatModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NatConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size), optional, returned when add_pooling_layer=True is passed) — Average pooling of the last layer hidden-state.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The NatModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, NatModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("shi-labs/nat-mini-in1k-224")
model = NatModel.from_pretrained("shi-labs/nat-mini-in1k-224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 7, 7, 512]
NatForImageClassification
class transformers.NatForImageClassification
(
config
)
Parameters
config (NatConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Nat Model transformer with an image classification head on top (a linear layer on top of the pooled final hidden
state) e.g. for ImageNet.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.nat.modeling_nat.NatImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.__call__()
for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.models.nat.modeling_nat.NatImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.models.nat.modeling_nat.NatImageClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NatConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The NatForImageClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, NatForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("shi-labs/nat-mini-in1k-224")
model = NatForImageClassification.from_pretrained("shi-labs/nat-mini-in1k-224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tiger cat
ESM
Overview
This page provides code and pre-trained weights for Transformer protein language models from Meta AI's Fundamental
AI Research Team, including the state-of-the-art ESMFold and ESM-2, and the previously released ESM-1b and ESM-1v.
Transformer protein language models were introduced in the paper [Biological structure and function emerge from scaling
unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by
Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott,
C. Lawrence Zitnick, Jerry Ma, and Rob Fergus.
The first version of this paper was [preprinted in 2019](https://www.biorxiv.org/content/10.1101/622803v1?versioned=true).
ESM-2 outperforms all tested single-sequence protein language models across a range of structure prediction tasks,
and enables atomic resolution structure prediction.
It was released with the paper Language models of protein sequences at the scale of evolution enable accurate
structure prediction by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie,
Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido and Alexander Rives.
Also introduced in this paper was ESMFold. It uses an ESM-2 stem with a head that can predict folded protein
structures with state-of-the-art accuracy. Unlike AlphaFold2,
it relies on the token embeddings from the large pre-trained protein language model stem and does not perform a multiple
sequence alignment (MSA) step at inference time, which means that ESMFold checkpoints are fully “standalone” -
they do not require a database of known protein sequences and structures with associated external query tools
to make predictions, and are much faster as a result.
The abstract from
“Biological structure and function emerge from scaling unsupervised learning to 250
million protein sequences” is
In the field of artificial intelligence, a combination of scale in data and model capacity enabled by unsupervised
learning has led to major advances in representation learning and statistical generation. In the life sciences, the
anticipated growth of sequencing promises unprecedented data on natural sequence diversity. Protein language modeling
at the scale of evolution is a logical step toward predictive and generative artificial intelligence for biology. To
this end, we use unsupervised learning to train a deep contextual language model on 86 billion amino acids across 250
million protein sequences spanning evolutionary diversity. The resulting model contains information about biological
properties in its representations. The representations are learned from sequence data alone. The learned representation
space has a multiscale organization reflecting structure from the level of biochemical properties of amino acids to
remote homology of proteins. Information about secondary and tertiary structure is encoded in the representations and
can be identified by linear projections. Representation learning produces features that generalize across a range of
applications, enabling state-of-the-art supervised prediction of mutational effect and secondary structure and
improving state-of-the-art features for long-range contact prediction.
The abstract from
“Language models of protein sequences at the scale of evolution enable accurate structure prediction” is
Large language models have recently been shown to develop emergent capabilities with scale, going beyond
simple pattern matching to perform higher level reasoning and generate lifelike images and text. While
language models trained on protein sequences have been studied at a smaller scale, little is known about
what they learn about biology as they are scaled up. In this work we train models up to 15 billion parameters,
the largest language models of proteins to be evaluated to date. We find that as models are scaled they learn
information enabling the prediction of the three-dimensional structure of a protein at the resolution of
individual atoms. We present ESMFold for high accuracy end-to-end atomic level structure prediction directly
from the individual sequence of a protein. ESMFold has similar accuracy to AlphaFold2 and RoseTTAFold for
sequences with low perplexity that are well understood by the language model. ESMFold inference is an
order of magnitude faster than AlphaFold2, enabling exploration of the structural space of metagenomic
proteins in practical timescales.
Tips:
ESM models are trained with a masked language modeling (MLM) objective (see the sketch after these tips).
The original code can be found here and was
developed by the Fundamental AI Research team at Meta AI.
ESM-1b, ESM-1v and ESM-2 were contributed to HuggingFace by jasonliu
and Matt.
ESMFold was contributed to HuggingFace by Matt and
Sylvain, with a big thank you to Nikita Smetanin, Roshan Rao and Tom Sercu for their
help throughout the process!
The HuggingFace port of ESMFold uses portions of the openfold library.
The openfold library is licensed under the Apache License 2.0.
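To make the MLM objective mentioned in the tips concrete, here is a minimal, hedged sketch of masked-residue prediction with the small ESM-2 checkpoint used elsewhere on this page (the protein sequence is illustrative):
from transformers import AutoTokenizer, EsmForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = EsmForMaskedLM.from_pretrained("facebook/esm2_t6_8M_UR50D")
# Mask one residue of a short (made-up) protein sequence and predict it
inputs = tokenizer("MKTV<mask>QERLKSIVRILE", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
mask_index = int((inputs.input_ids[0] == tokenizer.mask_token_id).nonzero())
print(tokenizer.decode([int(logits[0, mask_index].argmax(-1))]))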
Documentation resources
Text classification task guide
Token classification task guide
Masked language modeling task guide
EsmConfig
class transformers.EsmConfig
(
vocab_size = None
mask_token_id = None
pad_token_id = None
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 1026
initializer_range = 0.02
layer_norm_eps = 1e-12
position_embedding_type = 'absolute'
use_cache = True
emb_layer_norm_before = None
token_dropout = False
is_folding_model = False
esmfold_config = None
vocab_list = None
**kwargs
)
Parameters
vocab_size (int, optional) —
Vocabulary size of the ESM model. Defines the number of different tokens that can be represented by the
input_ids passed when calling EsmModel.
mask_token_id (int, optional) —
The index of the mask token in the vocabulary. This must be included in the config because of the
“mask-dropout” scaling trick, which will scale the inputs depending on the number of masked tokens.
pad_token_id (int, optional) —
The index of the padding token in the vocabulary. This must be included in the config because certain parts
of the ESM code use this instead of the attention mask.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 1026) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query", "rotary".
For positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
emb_layer_norm_before (bool, optional) —
Whether to apply layer normalization after embeddings but before the main stem of the network.
token_dropout (bool, defaults to False) —
When this is enabled, masked tokens are treated as if they had been dropped out by input dropout.
This is the configuration class to store the configuration of an EsmModel. It is used to instantiate an ESM model
according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the ESM
facebook/esm-1b architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import EsmModel, EsmConfig
# Initializing an ESM facebook/esm-1b style configuration
# (vocab_size, mask_token_id and pad_token_id have no defaults and must be supplied;
#  the values below match the standard 33-token ESM alphabet)
configuration = EsmConfig(vocab_size=33, mask_token_id=32, pad_token_id=1)
# Initializing a model (with random weights) from the configuration
model = EsmModel(configuration)
# Accessing the model configuration
configuration = model.config
to_dict
(
)
→
Dict[str, Any]
Returns
Dict[str, Any]
Dictionary of all the attributes that make up this configuration instance.
Serializes this instance to a Python dictionary, overriding the default to_dict().
EsmTokenizer
class transformers.EsmTokenizer
(
vocab_file
unk_token = '<unk>'
cls_token = '<cls>'
pad_token = '<pad>'
mask_token = '<mask>'
eos_token = '<eos>'
**kwargs
)
Constructs an ESM tokenizer.
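A minimal, hedged usage sketch (the checkpoint name follows the EsmModel example further down this page; the protein sequence is illustrative):
from transformers import EsmTokenizer
tokenizer = EsmTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
# ESM tokenizers split protein sequences into per-residue tokens
encoding = tokenizer("MKTVRQERLKSIVRILE", return_tensors="pt")
print(encoding.input_ids.shape)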
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
get_special_tokens_mask
(
token_ids_0: typing.List
token_ids_1: typing.Optional[typing.List] = None
already_has_special_tokens: bool = False
)
→
A list of integers in the range [0, 1]
Parameters
token_ids_0 (List[int]) —
List of ids of the first sequence.
token_ids_1 (List[int], optional) —
List of ids of the second sequence.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
A list of integers in the range [0, 1]
1 for a special token, 0 for a sequence token.
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model or encode_plus methods.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) — The first tokenized sequence.
token_ids_1 (List[int], optional) — The second tokenized sequence.
Returns
List[int]
The token type ids.
Create the token type IDs corresponding to the sequences passed. What are token type
IDs?
Should be overridden in a subclass if the model has a special way of building those.
save_vocabulary
(
save_directory
filename_prefix
)
EsmModel
class transformers.EsmModel
(
config
add_pooling_layer = True
)
Parameters
config (EsmConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ESM Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and
add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass.
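A minimal, hedged sketch of such a decoder-style setup (the sizes and inputs are illustrative and not a trained checkpoint):
import torch
from transformers import EsmConfig, EsmModel
config = EsmConfig(vocab_size=33, mask_token_id=32, pad_token_id=1,
                   hidden_size=64, num_hidden_layers=2, num_attention_heads=4,
                   is_decoder=True, add_cross_attention=True)
model = EsmModel(config)
input_ids = torch.tensor([[0, 5, 6, 7, 2]])      # illustrative token ids
encoder_hidden_states = torch.randn(1, 10, 64)   # pretend encoder output (same hidden size)
outputs = model(input_ids=input_ids, encoder_hidden_states=encoder_hidden_states)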
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (EsmConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The EsmModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, EsmModel
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = EsmModel.from_pretrained("facebook/esm2_t6_8M_UR50D")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
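For protein-level tasks it is often convenient to reduce the per-token hidden states to a single embedding per sequence. The following sketch is not part of the official example; it simply continues the snippet above and mean-pools the last hidden states over non-padding positions, reusing the attention mask returned by the tokenizer.
# Mean-pool token embeddings, ignoring padding positions (illustrative sketch).
mask = inputs["attention_mask"].unsqueeze(-1).type_as(last_hidden_states)
sequence_embedding = (last_hidden_states * mask).sum(dim=1) / mask.sum(dim=1)
print(sequence_embedding.shape)  # (batch_size, hidden_size)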
EsmForMaskedLM
class transformers.EsmForMaskedLM
(
config
)
Parameters
config (EsmConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ESM Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
kwargs (Dict[str, any], optional, defaults to {}) —
Used to hide legacy arguments that have been deprecated.
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (EsmConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The EsmForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, EsmForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = EsmForMaskedLM.from_pretrained("facebook/esm2_t6_8M_UR50D")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-<mask> tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
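As a quick sanity check (not part of the official snippet), the predicted id at the masked position can be decoded back to a token and the masked-LM loss read from the returned output:
# Decode the prediction at the <mask> position and inspect the loss (illustrative sketch).
print(tokenizer.decode(predicted_token_id))
print(round(outputs.loss.item(), 2))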
EsmForSequenceClassification
class transformers.EsmForSequenceClassification
(
config
)
Parameters
config (EsmConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ESM Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (EsmConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The EsmForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, EsmForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = EsmForSequenceClassification.from_pretrained("facebook/esm2_t6_8M_UR50D")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = EsmForSequenceClassification.from_pretrained("facebook/esm2_t6_8M_UR50D", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
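The predicted id can be mapped back to a label name through the model configuration. Note that for a base checkpoint such as this one the classification head is freshly initialized, so the names are just the generic placeholders stored in the config; the lookup below is an illustrative addition rather than part of the official example.
# Look up the (possibly placeholder) label name for the predicted class id.
print(model.config.id2label[predicted_class_id])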
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, EsmForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = EsmForSequenceClassification.from_pretrained("facebook/esm2_t6_8M_UR50D", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = EsmForSequenceClassification.from_pretrained(
... "facebook/esm2_t6_8M_UR50D", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
EsmForTokenClassification
class transformers.EsmForTokenClassification
(
config
)
Parameters
config (EsmConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ESM Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (EsmConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The EsmForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, EsmForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = EsmForTokenClassification.from_pretrained("facebook/esm2_t6_8M_UR50D")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
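To see which class was assigned to which token, the input ids can be converted back to token strings and paired with the predicted classes. This pairing step is an illustrative addition, not part of the official snippet.
# Pair each input token with its predicted class label (illustrative sketch).
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label in zip(tokens, predicted_tokens_classes):
...     print(token, label)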
EsmForProteinFolding
class transformers.EsmForProteinFolding
(
config
)
Parameters
config (EsmConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ESMForProteinFolding is the HuggingFace port of the original ESMFold model. It consists of an ESM-2 “stem” followed
by a protein folding “head”, although unlike most other output heads, this “head” is similar in size and runtime to
the rest of the model combined! It outputs a dictionary containing predicted structural information about the input
protein(s).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: Tensor
attention_mask: Tensor = None
position_ids: typing.Optional[torch.Tensor] = None
masking_pattern: typing.Optional[torch.Tensor] = None
num_recycles: typing.Optional[int] = None
)
→
transformers.models.esm.modeling_esmfold.EsmForProteinFoldingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
masking_pattern (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Locations of tokens to mask during training as a form of regularization. Mask values selected in [0, 1].
num_recycles (int, optional, defaults to None) —
Number of times to recycle the input sequence. If None, defaults to config.num_recycles. “Recycling”
consists of passing the output of the folding trunk back in as input to the trunk. During training, the
number of recycles should vary with each batch, to ensure that the model learns to output valid predictions
after each recycle. During inference, num_recycles should be set to the highest value that the model was
trained with for maximum accuracy. Accordingly, when this value is set to None, config.max_recycles is
used.
Returns
transformers.models.esm.modeling_esmfold.EsmForProteinFoldingOutput or tuple(torch.FloatTensor)
A transformers.models.esm.modeling_esmfold.EsmForProteinFoldingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (EsmConfig) and inputs.
frames (torch.FloatTensor) — Output frames.
sidechain_frames (torch.FloatTensor) — Output sidechain frames.
unnormalized_angles (torch.FloatTensor) — Predicted unnormalized backbone and side chain torsion angles.
angles (torch.FloatTensor) — Predicted backbone and side chain torsion angles.
positions (torch.FloatTensor) — Predicted positions of the backbone and side chain atoms.
states (torch.FloatTensor) — Hidden states from the protein folding trunk.
s_s (torch.FloatTensor) — Per-residue embeddings derived by concatenating the hidden states of each layer of the ESM-2 LM stem.
s_z (torch.FloatTensor) — Pairwise residue embeddings.
distogram_logits (torch.FloatTensor) — Input logits to the distogram used to compute residue distances.
lm_logits (torch.FloatTensor) — Logits output by the ESM-2 protein language model stem.
aatype (torch.FloatTensor) — Input amino acids (AlphaFold2 indices).
atom14_atom_exists (torch.FloatTensor) — Whether each atom exists in the atom14 representation.
residx_atom14_to_atom37 (torch.FloatTensor) — Mapping between atoms in the atom14 and atom37 representations.
residx_atom37_to_atom14 (torch.FloatTensor) — Mapping between atoms in the atom37 and atom14 representations.
atom37_atom_exists (torch.FloatTensor) — Whether each atom exists in the atom37 representation.
residue_index (torch.FloatTensor) — The index of each residue in the protein chain. Unless internal padding tokens are used, this will just be
a sequence of integers from 0 to sequence_length.
lddt_head (torch.FloatTensor) — Raw outputs from the lddt head used to compute plddt.
plddt (torch.FloatTensor) — Per-residue confidence scores. Regions of low confidence may indicate areas where the model’s prediction is
uncertain, or where the protein structure is disordered.
ptm_logits (torch.FloatTensor) — Raw logits used for computing ptm.
ptm (torch.FloatTensor) — TM-score output representing the model’s high-level confidence in the overall structure.
aligned_confidence_probs (torch.FloatTensor) — Per-residue confidence scores for the aligned structure.
predicted_aligned_error (torch.FloatTensor) — Predicted error between the model’s prediction and the ground truth.
max_predicted_aligned_error (torch.FloatTensor) — Per-sample maximum predicted error.
The EsmForProteinFolding forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, EsmForProteinFolding
model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1")
tokenizer = AutoTokenizer.from_pretrained("facebook/esmfold_v1")
inputs = tokenizer(["MLKNVQVQLV"], return_tensors="pt", add_special_tokens=False) # A tiny random peptide
outputs = model(**inputs)
folded_positions = outputs.positions
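The returned EsmForProteinFoldingOutput is a large structure. A quick way to get a feel for it (a sketch, not part of the official docs) is to look at the predicted atom positions and the per-residue confidence scores, optionally capping the number of recycling iterations via the documented num_recycles argument:
import torch
# Continue the snippet above: re-run without gradients, capping recycling at a single
# pass (a quick smoke test, not a setting recommended for best accuracy).
with torch.no_grad():
...     outputs = model(**inputs, num_recycles=1)
print(outputs.positions.shape)  # predicted backbone and side chain atom positions
print(outputs.plddt.mean().item())  # average per-residue confidence score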
TFEsmModel
class transformers.TFEsmModel
(
*args
**kwargs
)
Parameters
config (EsmConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ESM Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a Keras Model subclass. Use it as a
regular Keras model and refer to the TF/Keras documentation for all matters related to general usage and behavior.
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and True during generation.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (EsmConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you are often better off
averaging or pooling the sequence of hidden-states for the whole input sequence.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The TFEsmModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFEsmModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = TFEsmModel.from_pretrained("facebook/esm2_t6_8M_UR50D")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFEsmForMaskedLM
class transformers.TFEsmForMaskedLM
(
*args
**kwargs
)
Parameters
config (EsmConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ESM Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a Keras Model subclass. Use it as a
regular Keras model and refer to the TF/Keras documentation for all matters related to general usage and behavior.
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
labels: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
kwargs (Dict[str, any], optional, defaults to {}) —
Used to hide legacy arguments that have been deprecated.
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (EsmConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFEsmForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFEsmForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = TFEsmForMaskedLM.from_pretrained("facebook/esm2_t6_8M_UR50D")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-<mask> tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
TFEsmForSequenceClassification
class transformers.TFEsmForSequenceClassification
(
*args
**kwargs
)
Parameters
config (EsmConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ESM Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a Keras Model subclass. Use it as a
regular Keras model and refer to the TF/Keras documentation for all matters related to general usage and behavior.
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
labels: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (EsmConfig) and inputs.
loss (tf.Tensor of shape (batch_size,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFEsmForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFEsmForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = TFEsmForSequenceClassification.from_pretrained("facebook/esm2_t6_8M_UR50D")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFEsmForSequenceClassification.from_pretrained("facebook/esm2_t6_8M_UR50D", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
TFEsmForTokenClassification
class transformers.TFEsmForTokenClassification
(
*args
**kwargs
)
Parameters
config (EsmConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ESM Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a Keras Model subclass. Use it as a
regular Keras model and refer to the TF/Keras documentation for all matters related to general usage and behavior.
call
<
source
>
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
labels: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (EsmConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFEsmForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFEsmForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = TFEsmForTokenClassification.from_pretrained("facebook/esm2_t6_8M_UR50D")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
GroupViT
The quickest way to get started with GroupViT is by checking the example notebooks (which showcase zero-shot
segmentation inference). One can also check out the HuggingFace Spaces demo to play with GroupViT.
GroupViTConfig
class transformers.GroupViTConfig
(
text_config = None
vision_config = None
projection_dim = 256
projection_intermediate_dim = 4096
logit_scale_init_value = 2.6592
**kwargs
)
Parameters
text_config (dict, optional) —
Dictionary of configuration options used to initialize GroupViTTextConfig.
vision_config (dict, optional) —
Dictionary of configuration options used to initialize GroupViTVisionConfig.
projection_dim (int, optional, defaults to 256) —
Dimensionality of the text and vision projection layers.
projection_intermediate_dim (int, optional, defaults to 4096) —
Dimensionality of the intermediate layer of the text and vision projection layers.
logit_scale_init_value (float, optional, defaults to 2.6592) —
The initial value of the logit_scale parameter. The default is used as per the original GroupViT
implementation.
kwargs (optional) —
Dictionary of keyword arguments.
GroupViTConfig is the configuration class to store the configuration of a GroupViTModel. It is used to
instantiate a GroupViT model according to the specified arguments, defining the text model and vision model
configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the GroupViT
nvidia/groupvit-gcc-yfcc architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
from_text_vision_configs
(
text_config: GroupViTTextConfig
vision_config: GroupViTVisionConfig
**kwargs
)
→
GroupViTConfig
Returns
GroupViTConfig
An instance of a configuration object
Instantiate a GroupViTConfig (or a derived class) from groupvit text model configuration and groupvit
vision model configuration.
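For illustration, here is a minimal sketch (not part of the official docs) showing how the documented from_text_vision_configs helper combines the two sub-configurations, assuming default values for both:
from transformers import GroupViTConfig, GroupViTTextConfig, GroupViTVisionConfig
# Build a combined GroupViT configuration from default text and vision sub-configs.
text_config = GroupViTTextConfig()
vision_config = GroupViTVisionConfig()
config = GroupViTConfig.from_text_vision_configs(text_config, vision_config)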
GroupViTTextConfig
class transformers.GroupViTTextConfig
(
vocab_size = 49408
hidden_size = 256
intermediate_size = 1024
num_hidden_layers = 12
num_attention_heads = 4
max_position_embeddings = 77
hidden_act = 'quick_gelu'
layer_norm_eps = 1e-05
dropout = 0.0
attention_dropout = 0.0
initializer_range = 0.02
initializer_factor = 1.0
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 49408) —
Vocabulary size of the GroupViT text model. Defines the number of different tokens that can be represented
by the inputs_ids passed when calling GroupViTModel.
hidden_size (int, optional, defaults to 256) —
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 1024) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 4) —
Number of attention heads for each attention layer in the Transformer encoder.
max_position_embeddings (int, optional, defaults to 77) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
hidden_act (str or function, optional, defaults to "quick_gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu", "gelu_new" and "quick_gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
dropout (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (float, optional, defaults to 1.0) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
This is the configuration class to store the configuration of a GroupViTTextModel. It is used to instantiate a
GroupViT model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the GroupViT
nvidia/groupvit-gcc-yfcc architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import GroupViTTextConfig, GroupViTTextModel
# Initializing a GroupViTTextModel with nvidia/groupvit-gcc-yfcc style configuration
configuration = GroupViTTextConfig()
model = GroupViTTextModel(configuration)
# Accessing the model configuration
configuration = model.config
GroupViTVisionConfig
class transformers.GroupViTVisionConfig
<
source
>
(
hidden_size = 384
intermediate_size = 1536
depths = [6, 3, 3]
num_hidden_layers = 12
num_group_tokens = [64, 8, 0]
num_output_groups = [64, 8, 8]
num_attention_heads = 6
image_size = 224
patch_size = 16
num_channels = 3
hidden_act = 'gelu'
layer_norm_eps = 1e-05
dropout = 0.0
attention_dropout = 0.0
initializer_range = 0.02
initializer_factor = 1.0
assign_eps = 1.0
assign_mlp_ratio = [0.5, 4]
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 384) —
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 1536) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
depths (List[int], optional, defaults to [6, 3, 3]) —
The number of layers in each encoder block.
num_group_tokens (List[int], optional, defaults to [64, 8, 0]) —
The number of group tokens for each stage.
num_output_groups (List[int], optional, defaults to [64, 8, 8]) —
The number of output groups for each stage; 0 means no group.
num_attention_heads (int, optional, defaults to 6) —
Number of attention heads for each attention layer in the Transformer encoder.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" `"quick_gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
dropout (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (float, optional, defaults to 1.0) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
This is the configuration class to store the configuration of a GroupViTVisionModel. It is used to instantiate
a GroupViT model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the GroupViT
nvidia/groupvit-gcc-yfcc architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import GroupViTVisionConfig, GroupViTVisionModel
# Initializing a GroupViTVisionModel with nvidia/groupvit-gcc-yfcc style configuration
configuration = GroupViTVisionConfig()
model = GroupViTVisionModel(configuration)
# Accessing the model configuration
configuration = model.config
GroupViTModel
class transformers.GroupViTModel
<
source
>
(
config: GroupViTConfig
)
Parameters
config (GroupViTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
return_loss: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
output_segmentation: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.groupvit.modeling_groupvit.GroupViTModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
CLIPImageProcessor.call() for details.
return_loss (bool, optional) —
Whether or not to return the contrastive loss.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.groupvit.modeling_groupvit.GroupViTModelOutput or tuple(torch.FloatTensor)
A transformers.models.groupvit.modeling_groupvit.GroupViTModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.groupvit.configuration_groupvit.GroupViTConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text
similarity scores.
logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image
similarity scores.
segmentation_logits (torch.FloatTensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel.
The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is
to avoid doing two interpolations and losing some quality when a user needs to resize the logits to the
original image size as post-processing. You should always check your logits shape and resize as needed (see
the resizing sketch after the example below).
text_embeds (torch.FloatTensor of shape (batch_size, output_dim) — The text embeddings obtained by applying the projection layer to the pooled output of
GroupViTTextModel.
image_embeds (torch.FloatTensor of shape (batch_size, output_dim) — The image embeddings obtained by applying the projection layer to the pooled output of
GroupViTVisionModel.
text_model_output (BaseModelOutputWithPooling) — The output of the GroupViTTextModel.
vision_model_output (BaseModelOutputWithPooling) — The output of the GroupViTVisionModel.
The GroupViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, GroupViTModel
model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
... )
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
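Passing output_segmentation=True additionally returns segmentation_logits, which, as noted above, usually need to be resized to the input image before taking an argmax over the text prompts. A minimal sketch, assuming the model, image, and inputs from the example above and plain torch.nn.functional.interpolate for the upsampling:
import torch
outputs = model(**inputs, output_segmentation=True)
segmentation_logits = outputs.segmentation_logits  # (batch_size, num_prompts, logits_height, logits_width)
# PIL's image.size is (width, height); interpolate expects (height, width)
upsampled_logits = torch.nn.functional.interpolate(
    segmentation_logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
predicted_mask = upsampled_logits.argmax(dim=1)  # index of the best-matching text prompt per pixel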
get_text_features
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
text_features (torch.FloatTensor of shape (batch_size, output_dim)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
text_features (torch.FloatTensor of shape (batch_size, output_dim)
The text embeddings obtained by
applying the projection layer to the pooled output of GroupViTTextModel.
The GroupViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import CLIPTokenizer, GroupViTModel
model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
tokenizer = CLIPTokenizer.from_pretrained("nvidia/groupvit-gcc-yfcc")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
get_image_features
<
source
>
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
image_features (torch.FloatTensor of shape (batch_size, output_dim)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
image_features (torch.FloatTensor of shape (batch_size, output_dim)
The image embeddings obtained by
applying the projection layer to the pooled output of GroupViTVisionModel.
The GroupViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, GroupViTModel
model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
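The embeddings returned by get_text_features and get_image_features are not normalized. A rough sketch of how they relate to logits_per_image from the full forward pass, assuming the text_features and image_features computed in the two examples above and that the model exposes its learned temperature as model.logit_scale (as in the CLIP-style implementation):
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
# Scaled cosine similarity; analogous to outputs.logits_per_image from the forward pass
logit_scale = model.logit_scale.exp()  # assumption: learned scale parameter, exponentiated as in CLIP
logits_per_image = logit_scale * image_features @ text_features.t()
probs = logits_per_image.softmax(dim=1)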
GroupViTTextModel
class transformers.GroupViTTextModel
<
source
>
(
config: GroupViTTextConfig
)
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.groupvit.configuration_groupvit.GroupViTTextConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GroupViTTextModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import CLIPTokenizer, GroupViTTextModel
tokenizer = CLIPTokenizer.from_pretrained("nvidia/groupvit-gcc-yfcc")
model = GroupViTTextModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled (EOS token) states
GroupViTVisionModel
class transformers.GroupViTVisionModel
<
source
>
(
config: GroupViTVisionConfig
)
forward
<
source
>
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.groupvit.configuration_groupvit.GroupViTVisionConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GroupViTVisionModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, GroupViTVisionModel
processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
model = GroupViTVisionModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled CLS states
TFGroupViTModel
class transformers.TFGroupViTModel
<
source
>
(
*args
**kwargs
)
Parameters
config (GroupViTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TF 2.0 models accept two formats as inputs:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
This second option is useful when using the tf.keras.Model.fit method, which currently requires having all the
tensors in the first argument of the model call function: model(inputs).
If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
call
<
source
>
(
input_ids: TFModelInputType | None = None
pixel_values: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
return_loss: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
output_segmentation: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.models.groupvit.modeling_tf_groupvit.TFGroupViTModelOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
CLIPImageProcessor.call() for details.
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
return_loss (bool, optional) —
Whether or not to return the contrastive loss.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.groupvit.modeling_tf_groupvit.TFGroupViTModelOutput or tuple(tf.Tensor)
A transformers.models.groupvit.modeling_tf_groupvit.TFGroupViTModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (<class 'transformers.models.groupvit.configuration_groupvit.GroupViTConfig'>) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
logits_per_image (tf.Tensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text
similarity scores.
logits_per_text (tf.Tensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image
similarity scores.
segmentation_logits (tf.Tensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel.
The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is
to avoid doing two interpolations and losing some quality when a user needs to resize the logits to the
original image size as post-processing. You should always check your logits shape and resize as needed.
text_embeds (tf.Tensor of shape (batch_size, output_dim) — The text embeddings obtained by applying the projection layer to the pooled output of
TFGroupViTTextModel.
image_embeds (tf.Tensor of shape (batch_size, output_dim) — The image embeddings obtained by applying the projection layer to the pooled output of
TFGroupViTVisionModel.
text_model_output (TFBaseModelOutputWithPooling) — The output of the TFGroupViTTextModel.
vision_model_output (TFBaseModelOutputWithPooling) — The output of the TFGroupViTVisionModel.
The TFGroupViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, TFGroupViTModel
import tensorflow as tf
model = TFGroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="tf", padding=True
... )
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = tf.math.softmax(logits_per_image, axis=1) # we can take the softmax to get the label probabilities
get_text_features
<
source
>
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
text_features (tf.Tensor of shape (batch_size, output_dim)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
text_features (tf.Tensor of shape (batch_size, output_dim)
The text embeddings obtained by applying
the projection layer to the pooled output of TFGroupViTTextModel.
The TFGroupViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import CLIPTokenizer, TFGroupViTModel
model = TFGroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
tokenizer = CLIPTokenizer.from_pretrained("nvidia/groupvit-gcc-yfcc")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf")
text_features = model.get_text_features(**inputs)
get_image_features
<
source
>
(
pixel_values: TFModelInputType | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
image_features (tf.Tensor of shape (batch_size, output_dim)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
CLIPImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
image_features (tf.Tensor of shape (batch_size, output_dim)
The image embeddings obtained by applying
the projection layer to the pooled output of TFGroupViTVisionModel.
The TFGroupViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, TFGroupViTModel
model = TFGroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="tf")
image_features = model.get_image_features(**inputs)
TFGroupViTTextModel
class transformers.TFGroupViTTextModel
<
source
>
(
*args
**kwargs
)
call
<
source
>
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (<class 'transformers.models.groupvit.configuration_groupvit.GroupViTTextConfig'>) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input, you’re often better with
averaging or pooling the sequence of hidden-states for the whole input sequence.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFGroupViTTextModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import CLIPTokenizer, TFGroupViTTextModel
tokenizer = CLIPTokenizer.from_pretrained("nvidia/groupvit-gcc-yfcc")
model = TFGroupViTTextModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled (EOS token) states
TFGroupViTVisionModel
class transformers.TFGroupViTVisionModel
<
source
>
(
*args
**kwargs
)
call
<
source
>
(
pixel_values: TFModelInputType | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
CLIPImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (<class 'transformers.models.groupvit.configuration_groupvit.GroupViTVisionConfig'>) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input, you’re often better with
averaging or pooling the sequence of hidden-states for the whole input sequence.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFGroupViTVisionModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, TFGroupViTVisionModel
processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
model = TFGroupViTVisionModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="tf")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled CLS states
YOSO
Overview
The YOSO model was proposed in You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling
by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. YOSO approximates standard softmax self-attention
via a Bernoulli sampling scheme based on Locality Sensitive Hashing (LSH). In principle, all the Bernoulli random variables can be sampled with
a single hash.
The abstract from the paper is the following:
Transformer-based models are widely used in natural language processing (NLP). Central to the transformer model is
the self-attention mechanism, which captures the interactions of token pairs in the input sequences and depends quadratically
on the sequence length. Training such models on longer sequences is expensive. In this paper, we show that a Bernoulli sampling
attention mechanism based on Locality Sensitive Hashing (LSH), decreases the quadratic complexity of such models to linear.
We bypass the quadratic cost by considering self-attention as a sum of individual tokens associated with Bernoulli random
variables that can, in principle, be sampled at once by a single hash (although in practice, this number may be a small constant).
This leads to an efficient sampling scheme to estimate self-attention which relies on specific modifications of
LSH (to enable deployment on GPU architectures). We evaluate our algorithm on the GLUE benchmark with standard 512 sequence
length where we see favorable performance relative to a standard pretrained Transformer. On the Long Range Arena (LRA) benchmark,
for evaluating performance on long sequences, our method achieves results consistent with softmax self-attention but with sizable
speed-ups and memory savings and often outperforms other efficient self-attention methods. Our code is available at this https URL
Tips:
The YOSO attention algorithm is implemented through custom CUDA kernels, functions written in CUDA C++ that can be executed multiple times
in parallel on a GPU.
The kernels provide a fast_hash function, which approximates the random projections of the queries and keys using the Fast Hadamard Transform. Using these
hash codes, the lsh_cumulation function approximates self-attention via LSH-based Bernoulli sampling.
To use the custom kernels, the user should set config.use_expectation = False. To ensure that the kernels are compiled successfully,
the user must install the correct version of PyTorch and cudatoolkit. By default, config.use_expectation = True, which uses YOSO-E and
does not require compiling CUDA kernels.
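As a minimal sketch of that last tip, the sampled LSH attention (with kernel compilation) versus the default YOSO-E expectation can be selected purely through the configuration; the parameter names follow the YosoConfig documentation below:
from transformers import YosoConfig, YosoModel
# Default configuration: YOSO-E (expectation-based), no custom CUDA kernels required
config = YosoConfig()
# Opt into sampled LSH attention backed by the custom CUDA kernels (requires a matching PyTorch/cudatoolkit setup)
config = YosoConfig(use_expectation=False, use_fast_hash=True)
model = YosoModel(config)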
YOSO Attention Algorithm. Taken from the original paper.
This model was contributed by novice03. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
YosoConfig
class transformers.YosoConfig
<
source
>
(
vocab_size = 50265
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 4096
type_vocab_size = 1
initializer_range = 0.02
layer_norm_eps = 1e-12
position_embedding_type = 'absolute'
use_expectation = True
hash_code_len = 9
num_hash = 64
conv_window = None
use_fast_hash = True
lsh_backward = True
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50265) —
Vocabulary size of the YOSO model. Defines the number of different tokens that can be represented by the
input_ids passed when calling YosoModel.
hidden_size (int, optional, defaults to 768) —
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 4096) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 1) —
The vocabulary size of the token_type_ids passed when calling YosoModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query".
use_expectation (bool, optional, defaults to True) —
Whether or not to use YOSO Expectation. Overrides any effect of num_hash.
hash_code_len (int, optional, defaults to 9) —
The length of hashes generated by the hash functions.
num_hash (int, optional, defaults to 64) —
Number of hash functions used in YosoSelfAttention.
conv_window (int, optional) —
Kernel size of depth-wise convolution.
use_fast_hash (bool, optional, defaults to True) —
Whether or not to use custom CUDA kernels which perform fast random projection via the Hadamard transform.
lsh_backward (bool, optional, defaults to True) —
Whether or not to perform backpropagation using Locality Sensitive Hashing.
This is the configuration class to store the configuration of a YosoModel. It is used to instantiate a YOSO
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the YOSO
uw-madison/yoso-4096 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import YosoConfig, YosoModel
# Initializing a YOSO uw-madison/yoso-4096 style configuration
configuration = YosoConfig()
# Initializing a model (with random weights) from the uw-madison/yoso-4096 style configuration
model = YosoModel(configuration)
# Accessing the model configuration
configuration = model.config
YosoModel
class transformers.YosoModel
<
source
>
(
config
)
Parameters
config (YosoConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare YOSO Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (YosoConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The YosoModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, YosoModel
import torch
tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
model = YosoModel.from_pretrained("uw-madison/yoso-4096")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
YosoForMaskedLM
class transformers.YosoForMaskedLM
(
config
)
Parameters
config (YosoConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
YOSO Model with a language modeling head on top.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (YosoConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The YosoForMaskedLM forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, YosoForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
model = YosoForMaskedLM.from_pretrained("uw-madison/yoso-4096")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
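# Decode the predicted token back to text (illustrative addition, not in the original snippet)
print(tokenizer.decode(predicted_token_id))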
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
YosoForSequenceClassification
class transformers.YosoForSequenceClassification
(
config
)
Parameters
config (YosoConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
YOSO Model transformer with a sequence classification/regression head on top (a linear layer on top of
the pooled output) e.g. for GLUE tasks.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (YosoConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The YosoForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, YosoForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
model = YosoForSequenceClassification.from_pretrained("uw-madison/yoso-4096")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = YosoForSequenceClassification.from_pretrained("uw-madison/yoso-4096", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, YosoForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
model = YosoForSequenceClassification.from_pretrained("uw-madison/yoso-4096", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = YosoForSequenceClassification.from_pretrained(
... "uw-madison/yoso-4096", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
YosoForMultipleChoice
class transformers.YosoForMultipleChoice
(
config
)
Parameters
config (YosoConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
YOSO Model with a multiple choice classification head on top (a linear layer on top of
the pooled output and a softmax) e.g. for RocStories/SWAG tasks.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (YosoConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The YosoForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, YosoForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
model = YosoForMultipleChoice.from_pretrained("uw-madison/yoso-4096")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
YosoForTokenClassification
class transformers.YosoForTokenClassification
(
config
)
Parameters
config (YosoConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
YOSO Model with a token classification head on top (a linear layer on top of
the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (YosoConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The YosoForTokenClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, YosoForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
model = YosoForTokenClassification.from_pretrained("uw-madison/yoso-4096")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
YosoForQuestionAnswering
class transformers.YosoForQuestionAnswering
(
config
)
Parameters
config (YosoConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
YOSO Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (YosoConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The YosoForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, YosoForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
model = YosoForQuestionAnswering.from_pretrained("uw-madison/yoso-4096")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
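# Decode the predicted answer span back to text (illustrative addition, not in the original snippet)
print(tokenizer.decode(predict_answer_tokens))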
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
BioGPT
Overview
The BioGPT model was proposed in BioGPT: generative pre-trained transformer for biomedical text generation and mining
by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone, and is pre-trained on 15M PubMed abstracts from scratch.
The abstract from the paper is the following:
Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.
Tips:
BioGPT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.
BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows BioGPT to generate syntactically coherent text, as can be observed in the run_generation.py example script.
The model can take past_key_values as input, which are the previously computed key/value attention pairs. Passing them prevents the model from re-computing attention values it has already computed during text generation. See the past_key_values argument of the BioGptForCausalLM.forward() method for more information on its usage; a minimal generation sketch follows below.
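As an illustration of the caching behaviour described above, here is a minimal, hedged sketch of text generation with BioGptForCausalLM (the prompt string and generation settings are arbitrary examples, not taken from the original documentation); generate() reuses past_key_values internally when use_cache=True, which is the default:
import torch
from transformers import AutoTokenizer, BioGptForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

# Encode a biomedical prompt and greedily generate a continuation.
# At each step, generate() feeds only the newest token together with the
# cached key/value pairs instead of re-encoding the whole prefix.
inputs = tokenizer("COVID-19 is", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))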
This model was contributed by kamalkraj. The original code can be found here.
Documentation resources
Causal language modeling task guide
BioGptConfig
class transformers.BioGptConfig
(
vocab_size = 42384
hidden_size = 1024
num_hidden_layers = 24
num_attention_heads = 16
intermediate_size = 4096
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 1024
initializer_range = 0.02
layer_norm_eps = 1e-12
scale_embedding = True
use_cache = True
layerdrop = 0.0
activation_dropout = 0.0
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 42384) —
Vocabulary size of the BioGPT model. Defines the number of different tokens that can be represented by the
input_ids passed when calling BioGptModel.
hidden_size (int, optional, defaults to 1024) —
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 24) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 4096) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
scale_embedding (bool, optional, defaults to True) —
Whether to scale embeddings by sqrt(d_model).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
layerdrop (float, optional, defaults to 0.0) —
Please refer to the LayerDrop paper (https://arxiv.org/abs/1909.11556) for further details.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
pad_token_id (int, optional, defaults to 1) —
Padding token id.
bos_token_id (int, optional, defaults to 0) —
Beginning of stream token id.
eos_token_id (int, optional, defaults to 2) —
End of stream token id.
This is the configuration class to store the configuration of a BioGptModel. It is used to instantiate a
BioGPT model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the BioGPT
microsoft/biogpt architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import BioGptModel, BioGptConfig
# Initializing a BioGPT microsoft/biogpt style configuration
configuration = BioGptConfig()
# Initializing a model from the microsoft/biogpt style configuration
model = BioGptModel(configuration)
# Accessing the model configuration
configuration = model.config
BioGptTokenizer
class transformers.BioGptTokenizer
(
vocab_file
merges_file
unk_token = '<unk>'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
pad_token = '<pad>'
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Merges file.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
Construct a FAIRSEQ Transformer tokenizer, based on Moses tokenization followed by Byte-Pair Encoding.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
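A minimal usage sketch (assuming the microsoft/biogpt checkpoint, which ships this tokenizer's vocabulary and merges files; Moses tokenization requires the sacremoses package to be installed):
from transformers import BioGptTokenizer

# Download the vocabulary and merges files from the Hub and tokenize a sentence.
tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
encoding = tokenizer("BioGPT is pre-trained on PubMed abstracts.", return_tensors="pt")
print(encoding["input_ids"].shape)
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist()))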
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
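There is no dedicated example for this method in the original documentation. As a hedged sketch, save_vocabulary() is usually invoked indirectly through save_pretrained(), but it can also be called directly once the target directory exists:
from transformers import BioGptTokenizer

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
# save_pretrained() creates the directory and calls save_vocabulary() under the hood,
# writing the vocabulary and merges files alongside the tokenizer configuration.
tokenizer.save_pretrained("./biogpt-tokenizer")
# Direct call into the same (now existing) directory; the optional prefix is
# prepended to the saved file names.
vocab_files = tokenizer.save_vocabulary("./biogpt-tokenizer", filename_prefix="backup")
print(vocab_files)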
BioGptModel
class transformers.BioGptModel
(
config: BioGptConfig
)
Parameters
config (~BioGptConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare BioGPT Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length). A minimal incremental-decoding sketch is shown after the example below.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BioGptConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The BioGptModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BioGptModel
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptModel.from_pretrained("microsoft/biogpt")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
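As a hedged sketch of the incremental decoding enabled by past_key_values (an illustrative addition, not from the original documentation): the cache returned by a first forward pass can be fed back so that a second pass only processes the newest token.
import torch
from transformers import AutoTokenizer, BioGptModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptModel.from_pretrained("microsoft/biogpt")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    # First pass over all tokens except the last one; keep the key/value cache.
    first = model(input_ids=inputs["input_ids"][:, :-1], use_cache=True)
    # Second pass feeds only the last token plus the cache; the attention mask
    # still covers the full sequence (cached tokens + the new one).
    second = model(
        input_ids=inputs["input_ids"][:, -1:],
        attention_mask=inputs["attention_mask"],
        past_key_values=first.past_key_values,
        use_cache=True,
    )
# Only the newest position is returned: (batch_size, 1, hidden_size).
print(second.last_hidden_state.shape)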
BioGptForCausalLM
class transformers.BioGptForCausalLM
(
config
)
Parameters
config (~BioGptConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BioGPT Model with a language modeling head on top for CLM fine-tuning.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BioGptConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The BioGptForCausalLM forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, BioGptForCausalLM
tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits
BioGptForTokenClassification
class transformers.BioGptForTokenClassification
(
config
)
Parameters
config (~BioGptConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BioGPT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BioGptConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BioGptForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BioGptForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForTokenClassification.from_pretrained("microsoft/biogpt")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word.
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
BioGptForSequenceClassification
class transformers.BioGptForSequenceClassification
<
source
>
(
config: BioGptConfig
)
Parameters
config (~BioGptConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The BioGpt Model transformer with a sequence classification head on top (linear layer).
BioGptForSequenceClassification uses the last token in order to do the classification, as other causal models
(e.g. GPT-2) do.
Since it does classification on the last token, it is required to know the position of the last token. If a
pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If
no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when inputs_embeds are passed instead of input_ids, it does the same (takes the last value in
each row of the batch).
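As a rough illustration of this last-token selection, the sketch below (not the model's actual source code; the token ids and the pad_token_id of 0 are made up for the example) computes the index of the last non-padding token per row:
import torch

input_ids = torch.tensor([[5, 8, 13, 0, 0]])  # hypothetical token ids, 0 standing in for pad_token_id
pad_token_id = 0
# position of the first padding token, minus one = last non-padding token in each row
sequence_lengths = torch.eq(input_ids, pad_token_id).int().argmax(-1) - 1
print(sequence_lengths)  # tensor([2]): the logits at position 2 are the ones used for classification
# if a row contains no padding at all, argmax returns 0 and the index becomes -1,
# i.e. the last position, which matches the fallback behaviour described above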
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BioGptConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BioGptForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, BioGptForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForSequenceClassification.from_pretrained("microsoft/biogpt")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = BioGptForSequenceClassification.from_pretrained("microsoft/biogpt", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, BioGptForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForSequenceClassification.from_pretrained("microsoft/biogpt", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = BioGptForSequenceClassification.from_pretrained(
... "microsoft/biogpt", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
T5
Overview
The T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang,
Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
The abstract from the paper is the following:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream
task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning
has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of
transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a
text-to-text format. Our systematic study compares pretraining objectives, architectures, unlabeled datasets, transfer
approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration
with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering
summarization, question answering, text classification, and more. To facilitate future work on transfer learning for
NLP, we release our dataset, pre-trained models, and code.
Tips:
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which
each task is converted into a text-to-text format. T5 works well on a variety of tasks out-of-the-box by prepending a
different prefix to the input corresponding to each task, e.g., for translation: translate English to German: …,
for summarization: summarize: ….
The pretraining includes both supervised and self-supervised training. Supervised training is conducted on downstream tasks provided by the GLUE and SuperGLUE benchmarks (converting them into text-to-text tasks as explained above).
Self-supervised training uses corrupted tokens, by randomly removing 15% of the tokens and replacing them with individual sentinel tokens (if several consecutive tokens are marked for removal, the whole group is replaced with a single sentinel token). The input of the encoder is the corrupted sentence, the input of the decoder is the original sentence and the target is then the dropped out tokens delimited by their sentinel tokens.
T5 uses relative scalar embeddings. Encoder input padding can be done on the left and on the right.
See the training, inference and scripts sections below for all details regarding usage.
T5 comes in different sizes:
t5-small
t5-base
t5-large
t5-3b
t5-11b.
Based on the original T5 model, Google has released some follow-up works:
T5v1.1: T5v1.1 is an improved version of T5 with some architectural tweaks, and is pre-trained on C4 only without
mixing in the supervised tasks. Refer to the documentation of T5v1.1 which can be found here.
mT5: mT5 is a multilingual T5 model. It is pre-trained on the mC4 corpus, which includes 101 languages. Refer to
the documentation of mT5 which can be found here.
byT5: byT5 is a T5 model pre-trained on byte sequences rather than SentencePiece subword token sequences. Refer
to the documentation of byT5 which can be found here.
UL2: UL2 is a T5-like model pretrained on various denoising objectives.
Flan-T5: Flan is a pretraining method that is based on prompting. The Flan-T5 models are T5 models trained on the Flan collection of
datasets, which include: taskmaster2, djaym7/wiki_dialog, deepmind/code_contests, lambada, gsm8k, aqua_rat, esnli, quasc and qed.
Flan-UL2: the UL2 model finetuned using the “Flan” prompt tuning and dataset collection.
UMT5: UmT5 is a multilingual T5 model trained on an improved and refreshed mC4 multilingual corpus, 29 trillion characters across 107 languages, using a new sampling method, UniMax. Refer to
the documentation of mT5 which can be found here.
All checkpoints can be found on the hub.
This model was contributed by thomwolf. The original code can be found here.
Training
T5 is an encoder-decoder model and converts all NLP problems into a text-to-text format. It is trained using teacher
forcing. This means that for training, we always need an input sequence and a corresponding target sequence. The input
sequence is fed to the model using input_ids. The target sequence is shifted to the right, i.e., prepended by a
start-sequence token and fed to the decoder using the decoder_input_ids. In teacher-forcing style, the target
sequence then has the EOS token appended and corresponds to the labels. The PAD token is hereby used as the
start-sequence token. T5 can be trained / fine-tuned in both a supervised and an unsupervised fashion.
One can use T5ForConditionalGeneration (or the Tensorflow/Flax variant), which includes the
language modeling head on top of the decoder.
Unsupervised denoising training
In this setup, spans of the input sequence are masked by so-called sentinel tokens (a.k.a. unique mask tokens) and
the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. Each
sentinel token represents a unique mask token for this sentence and should start with <extra_id_0>,
<extra_id_1>, … up to <extra_id_99>. As a default, 100 sentinel tokens are available in
T5Tokenizer.
For instance, the sentence “The cute dog walks in the park” with the masks put on “cute dog” and “the” should be
processed as follows:
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids
# the forward function automatically creates the correct decoder_input_ids
loss = model(input_ids=input_ids, labels=labels).loss
loss.item()
3.7837
If you’re interested in pre-training T5 on a new corpus, check out the run_t5_mlm_flax.py script in the Examples
directory.
Supervised training
In this setup, the input sequence and output sequence are a standard sequence-to-sequence input-output mapping.
Suppose that we want to fine-tune the model for translation for example, and we have a training example: the input
sequence “The house is wonderful.” and output sequence “Das Haus ist wunderbar.”, then they should be prepared for
the model as follows:
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids
# the forward function automatically creates the correct decoder_input_ids
loss = model(input_ids=input_ids, labels=labels).loss
loss.item()
0.2542
As you can see, only 2 inputs are required for the model in order to compute a loss: input_ids (which are the
input_ids of the encoded input sequence) and labels (which are the input_ids of the encoded
target sequence). The model will automatically create the decoder_input_ids based on the labels, by
shifting them one position to the right and prepending the config.decoder_start_token_id, which for T5 is
equal to 0 (i.e. the id of the pad token). Also note the task prefix: we prepend the input sequence with ‘translate
English to German: ’ before encoding it. This helps improve performance, as this task prefix was used
during T5’s pre-training.
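Continuing the example above, the shift can be inspected directly. This is a small illustrative sketch: _shift_right is an internal helper of the PyTorch T5 models (it also appears in the T5Model example further down this page), not part of the public API.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids
# shift right: prepend decoder_start_token_id (0, the pad token for T5) and drop the last position
decoder_input_ids = model._shift_right(labels)
print(decoder_input_ids[0, 0].item())  # 0, i.e. config.decoder_start_token_id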
However, the example above only shows a single training example. In practice, one trains deep learning models in
batches. This entails that we must pad/truncate examples to the same length. For encoder-decoder models, one
typically defines a max_source_length and max_target_length, which determine the maximum length of the
input and output sequences respectively (otherwise they are truncated). These should be carefully set depending on
the task.
In addition, we must make sure that padding token ids of the labels are not taken into account by the loss
function. In PyTorch and TensorFlow, this can be done by replacing them with -100, which is the ignore_index
of the CrossEntropyLoss. In Flax, one can use the decoder_attention_mask to ignore padded tokens from
the loss (see the Flax summarization script for details). We also pass
attention_mask as additional input to the model, which makes sure that padding tokens of the inputs are
ignored. The code example below illustrates all of this.
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
# the following 2 hyperparameters are task-specific
max_source_length = 512
max_target_length = 128
# Suppose we have the following 2 training examples:
input_sequence_1 = "Welcome to NYC"
output_sequence_1 = "Bienvenue à NYC"
input_sequence_2 = "HuggingFace is a company"
output_sequence_2 = "HuggingFace est une entreprise"
# encode the inputs
task_prefix = "translate English to French: "
input_sequences = [input_sequence_1, input_sequence_2]
encoding = tokenizer(
... [task_prefix + sequence for sequence in input_sequences],
... padding="longest",
... max_length=max_source_length,
... truncation=True,
... return_tensors="pt",
... )
input_ids, attention_mask = encoding.input_ids, encoding.attention_mask
# encode the targets
target_encoding = tokenizer(
... [output_sequence_1, output_sequence_2],
... padding="longest",
... max_length=max_target_length,
... truncation=True,
... return_tensors="pt",
... )
labels = target_encoding.input_ids
# replace padding token ids of the labels by -100 so they're ignored by the loss
labels[labels == tokenizer.pad_token_id] = -100
# forward pass
loss = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels).loss
loss.item()
0.188
Additional training tips:
T5 models need a slightly higher learning rate than the default one set in the Trainer when using the AdamW
optimizer. Typically, 1e-4 and 3e-4 work well for most problems (classification, summarization, translation, question
answering, question generation). Note that T5 was pre-trained using the AdaFactor optimizer; see the optimizer sketch after these tips.
According to this forum post, task prefixes matter when
(1) doing multi-task training (2) your task is similar or related to one of the supervised tasks used in T5’s
pre-training mixture (see Appendix D of the paper for the task prefixes
used).
If training on TPU, it is recommended to pad all examples of the dataset to the same length, or to make use of
pad_to_multiple_of to have a small number of predefined bucket sizes that fit all examples. Dynamically padding
batches to the longest example is not recommended on TPU, as it triggers a recompilation for every batch shape
encountered during training, which significantly slows down training.
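To make the learning-rate tip above concrete, here is a hedged sketch of the two common optimizer choices (not an official recipe; the Adafactor settings shown are one commonly used fixed-learning-rate configuration for fine-tuning):
import torch
from transformers import T5ForConditionalGeneration
from transformers.optimization import Adafactor

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Option 1: AdamW with a higher-than-default learning rate (1e-4 to 3e-4)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

# Option 2: Adafactor with a fixed learning rate (relative step sizes and warmup disabled)
optimizer = Adafactor(
    model.parameters(), lr=1e-3, scale_parameter=False, relative_step=False, warmup_init=False
)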
Inference
At inference time, it is recommended to use generate(). This
method takes care of encoding the input and feeding the encoded hidden states via cross-attention layers to the decoder
and auto-regressively generates the decoder output. Check out this blog post to know all the details about generating text with Transformers.
There’s also this blog post which explains how
generation works in general in encoder-decoder models.
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Das Haus ist wunderbar.
Note that T5 uses the pad_token_id as the decoder_start_token_id, so when doing generation without using
generate(), make sure you start it with the pad_token_id.
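For illustration, a minimal hand-rolled greedy loop (a sketch only; generate() remains the recommended way to decode) seeds the decoder with the pad token id:
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
# start decoding from the pad token, which T5 uses as decoder_start_token_id
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids).logits
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
        if next_token.item() == model.config.eos_token_id:
            break
print(tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True))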
The example above only shows a single example. You can also do batched inference, like so:
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
task_prefix = "translate English to German: "
# use different length sentences to test batching
sentences = ["The house is wonderful.", "I like to work in NYC."]
inputs = tokenizer([task_prefix + sentence for sentence in sentences], return_tensors="pt", padding=True)
output_sequences = model.generate(
... input_ids=inputs["input_ids"],
... attention_mask=inputs["attention_mask"],
... do_sample=False, # disable sampling to test if batching affects output
... )
print(tokenizer.batch_decode(output_sequences, skip_special_tokens=True))
['Das Haus ist wunderbar.', 'Ich arbeite gerne in NYC.']
Because T5 has been trained with the span-mask denoising objective,
it can be used to predict the sentinel (masked-out) tokens during inference.
The predicted tokens will then be placed between the sentinel tokens.
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
sequence_ids = model.generate(input_ids)
sequences = tokenizer.batch_decode(sequence_ids)
sequences
['<pad><extra_id_0> park offers<extra_id_1> the<extra_id_2> park.</s>']
Performance
If you’d like faster training and inference performance, install apex; the model will then automatically use apex.normalization.FusedRMSNorm instead of T5LayerNorm. The former uses an optimized fused kernel which is several times faster than the latter.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with T5. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Text Classification
A notebook for how to finetune T5 for classification and multiple choice.
A notebook for how to finetune T5 for sentiment span extraction. 🌎
Token Classification
A notebook for how to finetune T5 for named entity recognition. 🌎
Text Generation
A notebook for Finetuning CodeT5 for generating docstrings from Ruby code.
Summarization
A notebook to Finetune T5-base-dutch to perform Dutch abstractive summarization on a TPU.
A notebook for how to finetune T5 for summarization in PyTorch and track experiments with WandB. 🌎
A blog post on Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker.
T5ForConditionalGeneration is supported by this example script and notebook.
TFT5ForConditionalGeneration is supported by this example script and notebook.
FlaxT5ForConditionalGeneration is supported by this example script.
Summarization chapter of the 🤗 Hugging Face course.
Summarization task guide
Fill-Mask
FlaxT5ForConditionalGeneration is supported by this example script for training T5 with a span-masked language model objective. The script also shows how to train a T5 tokenizer. FlaxT5ForConditionalGeneration is also supported by this notebook.
Translation
T5ForConditionalGeneration is supported by this example script and notebook.
TFT5ForConditionalGeneration is supported by this example script and notebook.
Translation task guide
Question Answering
A notebook on how to finetune T5 for question answering with TensorFlow 2. 🌎
A notebook on how to finetune T5 for question answering on a TPU.
🚀 Deploy
A blog post on how to deploy T5 11B for inference for less than $500.
T5Config
class transformers.T5Config
<
source
>
(
vocab_size = 32128
d_model = 512
d_kv = 64
d_ff = 2048
num_layers = 6
num_decoder_layers = None
num_heads = 8
relative_attention_num_buckets = 32
relative_attention_max_distance = 128
dropout_rate = 0.1
layer_norm_epsilon = 1e-06
initializer_factor = 1.0
feed_forward_proj = 'relu'
is_encoder_decoder = True
use_cache = True
pad_token_id = 0
eos_token_id = 1
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 32128) —
Vocabulary size of the T5 model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling T5Model or TFT5Model.
d_model (int, optional, defaults to 512) —
Size of the encoder layers and the pooler layer.
d_kv (int, optional, defaults to 64) —
Size of the key, query, value projections per attention head. The inner_dim of the projection layer will
be defined as num_heads * d_kv.
d_ff (int, optional, defaults to 2048) —
Size of the intermediate feed forward layer in each T5Block.
num_layers (int, optional, defaults to 6) —
Number of hidden layers in the Transformer encoder.
num_decoder_layers (int, optional) —
Number of hidden layers in the Transformer decoder. Will use the same value as num_layers if not set.
num_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer encoder.
relative_attention_num_buckets (int, optional, defaults to 32) —
The number of buckets to use for each attention layer.
relative_attention_max_distance (int, optional, defaults to 128) —
The maximum distance of the longer sequences for the bucket separation.
dropout_rate (float, optional, defaults to 0.1) —
The ratio for all dropout layers.
layer_norm_epsilon (float, optional, defaults to 1e-6) —
The epsilon used by the layer normalization layers.
initializer_factor (float, optional, defaults to 1) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
feed_forward_proj (string, optional, defaults to "relu") —
Type of feed forward layer to be used. Should be one of "relu" or "gated-gelu". T5v1.1 uses the
"gated-gelu" feed forward projection. Original T5 uses "relu".
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a T5Model or a TFT5Model. It is used to
instantiate a T5 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the T5
t5-small architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
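For instance, a configuration can be instantiated on its own and used to build a randomly initialized model (a short sketch of the usual configuration workflow):
from transformers import T5Config, T5Model

# Initializing a t5-small style configuration (all defaults)
configuration = T5Config()
# Initializing a randomly-weighted model from that configuration
model = T5Model(configuration)
# Accessing the model configuration
configuration = model.config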
T5Tokenizer
class transformers.T5Tokenizer
<
source
>
(
vocab_file
eos_token = '</s>'
unk_token = '<unk>'
pad_token = '<pad>'
extra_ids = 100
additional_special_tokens = None
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
legacy = True
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
extra_ids (int, optional, defaults to 100) —
Add a number of extra ids to the vocabulary for use as sentinels. These tokens are
accessible as “<extra_id_{%d}>” where “{%d}” is a number between 0 and extra_ids-1. These tokens can be
retrieved by calling the get_sentinel_tokens method, and their token ids can be retrieved by calling the
get_sentinel_token_ids method (see the short example after this class description).
additional_special_tokens (List[str], optional) —
Additional special tokens used by the tokenizer.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assumes that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
legacy (bool, optional, defaults to True) —
Whether or not the legacy behaviour of the tokenizer should be used. Legacy is the behaviour before the merge
of #24622, which includes fixes to properly handle tokens that appear after special tokens.
Construct a T5 tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
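A brief illustration of the sentinel-token helpers mentioned in the extra_ids description above (a sketch assuming the t5-small checkpoint; the order of the returned tokens is not guaranteed):
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
sentinels = tokenizer.get_sentinel_tokens()        # tokens like '<extra_id_0>', '<extra_id_1>', ...
sentinel_ids = tokenizer.get_sentinel_token_ids()  # the corresponding vocabulary ids
print(len(sentinels))  # 100 by default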
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A sequence has the following format:
single sequence: X </s>
pair of sequences: A </s> B </s>
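As a small sanity check (a sketch assuming the t5-small checkpoint), the method simply appends the eos token id to the sequence:
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
ids = tokenizer("Hello world", add_special_tokens=False).input_ids
with_special = tokenizer.build_inputs_with_special_tokens(ids)
print(with_special[-1] == tokenizer.eos_token_id)  # True: '</s>' was appended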
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. T5 does not make
use of token type ids, therefore a list of zeros is returned.
save_vocabulary
<
source
>
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
T5TokenizerFast
class transformers.T5TokenizerFast
<
source
>
(
vocab_file = None
tokenizer_file = None
eos_token = '</s>'
unk_token = '<unk>'
pad_token = '<pad>'
extra_ids = 100
additional_special_tokens = None
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
extra_ids (int, optional, defaults to 100) —
Add a number of extra ids to the vocabulary for use as sentinels. These tokens are accessible as
“<extra_id_{%d}>” where “{%d}” is a number between 0 and extra_ids-1. These tokens can be retrieved by
calling the get_sentinel_tokens method, and their token ids can be retrieved by calling the get_sentinel_token_ids method.
additional_special_tokens (List[str], optional) —
Additional special tokens used by the tokenizer.
Construct a “fast” T5 tokenizer (backed by HuggingFace’s tokenizers library). Based on
Unigram.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A sequence has the following format:
single sequence: X </s>
pair of sequences: A </s> B </s>
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. T5 does not make
use of token type ids, therefore a list of zeros is returned.
T5Model
class transformers.T5Model
<
source
>
(
config: T5Config
)
Parameters
config (T5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare T5 Model transformer outputting raw hidden-states without any specific head on top.
The T5 model was proposed in Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It’s an encoder-decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
To know more on how to prepare input_ids for pretraining, take a look at T5 Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
T5 uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
To know more on how to prepare decoder_input_ids for pretraining take a look at T5
Training.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
[0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (T5Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The T5Model forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, T5Model
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5Model.from_pretrained("t5-small")
input_ids = tokenizer(
... "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
# preprocess: Prepend decoder_input_ids with start token which is pad token for T5Model.
# This is not needed for torch's T5ForConditionalGeneration as it does this internally using labels arg.
decoder_input_ids = model._shift_right(decoder_input_ids)
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
T5ForConditionalGeneration
class transformers.T5ForConditionalGeneration
<
source
>
(
config: T5Config
)
Parameters
config (T5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
T5 Model with a language modeling head on top.
The T5 model was proposed in Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It’s an encoder-decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
To know more on how to prepare input_ids for pretraining, take a look at T5 Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
T5 uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
To learn more about how to prepare decoder_input_ids for pretraining, take a look at T5
Training.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
[0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the sequence-to-sequence language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked); the loss is only computed for
labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (T5Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The T5ForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
# training
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids
outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss
logits = outputs.logits
# inference
input_ids = tokenizer(
    "summarize: studies have shown that owning a dog is good for you", return_tensors="pt"
).input_ids  # Batch size 1
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# studies have shown that owning a dog is good for you.
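The labels argument follows the -100 convention described above: any position set to -100 is ignored by the loss. Below is a hedged sketch, reusing the model and tokenizer objects from the example above, of how padded targets are typically masked when batching; the sentences themselves are purely illustrative.
import torch

# Batch two padded examples and mask the padded target positions with -100
# so they do not contribute to the loss.
inputs = tokenizer(
    ["The <extra_id_0> walks in <extra_id_1> park", "The <extra_id_0> barks"],
    padding=True,
    return_tensors="pt",
)
targets = tokenizer(
    ["<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", "<extra_id_0> big dog"],
    padding=True,
    return_tensors="pt",
)
labels = targets.input_ids.clone()
labels[labels == tokenizer.pad_token_id] = -100  # ignored by the loss
outputs = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, labels=labels)
loss = outputs.loss
Similarly, the past_key_values and use_cache parameters documented above enable incremental decoding. In practice model.generate() manages this cache for you; the following is only an illustrative sketch of the mechanics, again reusing the model and tokenizer from the example above.
encoder_input_ids = tokenizer(
    "summarize: studies have shown that owning a dog is good for you", return_tensors="pt"
).input_ids
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])

with torch.no_grad():
    # First step: full forward pass, asking the model to return its key/value cache.
    step1 = model(input_ids=encoder_input_ids, decoder_input_ids=decoder_input_ids, use_cache=True)
    next_token = step1.logits[:, -1:].argmax(-1)
    # Second step: feed only the newly generated token together with the cached states.
    step2 = model(
        input_ids=encoder_input_ids,
        decoder_input_ids=next_token,
        past_key_values=step1.past_key_values,
        use_cache=True,
    )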
T5EncoderModel
class transformers.T5EncoderModel
( config: T5Config )
Parameters
config (T5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare T5 Model transformer outputting encoder’s raw hidden-states without any specific head on top.
The T5 model was proposed in Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It’s an encoder-decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
To learn more about how to prepare input_ids for pretraining, take a look at T5 Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (T5Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The T5EncoderModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, T5EncoderModel
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5EncoderModel.from_pretrained("t5-small")
input_ids = tokenizer(
    "Studies have been shown that owning a dog is good for you", return_tensors="pt"
).input_ids  # Batch size 1
outputs = model(input_ids=input_ids)
last_hidden_states = outputs.last_hidden_state
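As a hedged follow-up (not part of the original example): a common way to turn these raw encoder hidden-states into a fixed-size sentence representation is attention-mask-aware mean pooling. T5EncoderModel does not provide this itself; the recipe below is only illustrative, reusing the tokenizer and model from above.
import torch

enc = tokenizer(
    ["Studies have been shown that owning a dog is good for you", "Dogs are great"],
    padding=True,
    return_tensors="pt",
)
with torch.no_grad():
    hidden = model(**enc).last_hidden_state        # (batch, seq_len, hidden_size)

mask = enc.attention_mask.unsqueeze(-1).type_as(hidden)              # (batch, seq_len, 1)
sentence_embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (batch, hidden_size)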
T5ForQuestionAnswering
class transformers.T5ForQuestionAnswering
( config: T5Config )
Parameters
config (T5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
T5 Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers
on top of the hidden-states output to compute span start logits and span end logits).
The T5 model was proposed in Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It’s an encoder-decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
To learn more about how to prepare input_ids for pretraining, take a look at T5 Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
T5 uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
To learn more about how to prepare decoder_input_ids for pretraining, take a look at T5
Training.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
[0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (T5Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The T5ForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
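Example:
A hedged usage sketch (not taken from the original documentation): t5-small does not ship a fine-tuned question-answering head, so the span head below is randomly initialized and the predicted span is meaningless until the model has been fine-tuned on an extractive-QA dataset such as SQuAD. The sketch only shows how the inputs and the start/end logits fit together.
import torch
from transformers import AutoTokenizer, T5ForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForQuestionAnswering.from_pretrained("t5-small")  # QA head is newly initialized here

question = "What is the dog doing?"
context = "A cute dog walks in the park every morning."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

start = int(outputs.start_logits.argmax(dim=-1))
end = int(outputs.end_logits.argmax(dim=-1))
answer = tokenizer.decode(inputs.input_ids[0, start : end + 1], skip_special_tokens=True)
For training, pass start_positions and end_positions (each of shape (batch_size,)) alongside the inputs and read the span-extraction loss from outputs.loss.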
TFT5Model
class transformers.TFT5Model
( *args, **kwargs )
Parameters
config (T5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare T5 Model transformer outputting raw hidden-states without any specific head on top.
The T5 model was proposed in Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It’s an encoder-decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
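To make the three formats concrete, here is a hedged sketch (not from the original documentation) that mirrors the TFT5Model example further below; the list ordering follows the call signature documented next, and the _shift_right call is the same preprocessing step used in that example.
import tensorflow as tf
from transformers import AutoTokenizer, TFT5Model

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFT5Model.from_pretrained("t5-small")

input_ids = tokenizer("Studies show that", return_tensors="tf").input_ids
attention_mask = tf.ones_like(input_ids)
decoder_input_ids = model._shift_right(input_ids)

# 1. Keyword arguments, as with the PyTorch models:
outputs = model(input_ids=input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids)

# 2. A list in the first positional argument, in the order given in the docstring
#    (input_ids, attention_mask, decoder_input_ids, ...):
outputs = model([input_ids, attention_mask, decoder_input_ids])

# 3. A dict in the first positional argument, keyed by input name:
outputs = model({"input_ids": input_ids, "attention_mask": attention_mask, "decoder_input_ids": decoder_input_ids})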
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
decoder_input_ids: np.ndarray | tf.Tensor | None = None
decoder_attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
decoder_head_mask: np.ndarray | tf.Tensor | None = None
encoder_outputs: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
decoder_inputs_embeds: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on the right or the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
To learn more about how to prepare inputs for pretraining, take a look at T5 Training.
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Provide for sequence to sequence training. T5 uses the pad_token_id as the starting token for
decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids
have to be input (see past_key_values).
To learn more about how to prepare decoder_input_ids for pretraining, take a look at T5
Training.
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(tf.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(tf.Tensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (tf.Tensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (T5Config) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFT5Model forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TFT5Model
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFT5Model.from_pretrained("t5-small")
input_ids = tokenizer(
    "Studies have been shown that owning a dog is good for you", return_tensors="tf"
).input_ids  # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="tf").input_ids  # Batch size 1
# preprocess: prepend decoder_input_ids with the start token, which is the pad token for T5Model.
# This is not needed for torch's T5ForConditionalGeneration as it does this internally using labels arg.
decoder_input_ids = model._shift_right(decoder_input_ids)
# forward pass
outputs = model(input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
TFT5ForConditionalGeneration
class transformers.TFT5ForConditionalGeneration
( *args, **kwargs )
Parameters
config (T5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
T5 Model with a language modeling head on top.
The T5 model was proposed in Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It’s an encoder-decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
decoder_input_ids: np.ndarray | tf.Tensor | None = None
decoder_attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
decoder_head_mask: np.ndarray | tf.Tensor | None = None
encoder_outputs: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
decoder_inputs_embeds: np.ndarray | tf.Tensor | None = None
labels: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on the right or the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
To learn more about how to prepare inputs for pretraining, take a look at T5 Training.
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Provide for sequence to sequence training. T5 uses the pad_token_id as the starting token for
decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids
have to be input (see past_key_values).
To learn more about how to prepare decoder_input_ids for pretraining, take a look at T5
Training.
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(tf.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(tf.Tensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (tf.Tensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (T5Config) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFT5ForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TFT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")
# training
inputs = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="tf").input_ids
labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="tf").input_ids
outputs = model(inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
# inference
inputs = tokenizer(
    "summarize: studies have shown that owning a dog is good for you", return_tensors="tf"
).input_ids  # Batch size 1
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# studies have shown that owning a dog is good for you
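As a hedged follow-up (not from the original documentation), the Keras-native training path mentioned above is model.fit(). Because the model can compute its own loss when labels are included in the input dictionary, compiling without an explicit loss is usually sufficient. A minimal single-example sketch, reusing the model and tokenizer objects from the example above:
import tensorflow as tf

inputs = tokenizer(["translate English to German: Hello"], return_tensors="tf")
targets = tokenizer(["Hallo"], return_tensors="tf")

# Build a tiny dataset whose features include the labels; the model's internal loss is then used.
train_ds = tf.data.Dataset.from_tensor_slices(
    {
        "input_ids": inputs.input_ids,
        "attention_mask": inputs.attention_mask,
        "labels": targets.input_ids,
    }
).batch(1)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5))
model.fit(train_ds, epochs=1)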
TFT5EncoderModel
class transformers.TFT5EncoderModel
( *args, **kwargs )
Parameters
config (T5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare T5 Model transformer outputting the encoder’s raw hidden-states without any specific head on top.
The T5 model was proposed in Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It’s an encoder-decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on the right or the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
To learn more about how to prepare inputs for pretraining, take a look at T5 Training.
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (T5Config) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFT5EncoderModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TFT5EncoderModel
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFT5EncoderModel.from_pretrained("t5-small")
input_ids = tokenizer(
... "Studies have been shown that owning a dog is good for you", return_tensors="tf"
... ).input_ids # Batch size 1
outputs = model(input_ids)
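As a usage note (an editorial addition, not part of the original example), the encoder's final hidden states can then be read from the returned TFBaseModelOutput:
last_hidden_states = outputs.last_hidden_state  # shape (batch_size, sequence_length, hidden_size)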
FlaxT5Model
class transformers.FlaxT5Model
(
config: T5Config
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: Array = None
decoder_attention_mask: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
To learn more about how to prepare input_ids for pretraining, take a look at T5 Training.
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
T5 uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
To learn more about how to prepare decoder_input_ids for pretraining, take a look at T5
Training.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
encoder_outputs (tuple(tuple(jnp.ndarray)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(jnp.ndarray)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (T5Config) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxT5PreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxT5Model
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = FlaxT5Model.from_pretrained("t5-small")
input_ids = tokenizer(
... "Studies have been shown that owning a dog is good for you", return_tensors="np"
... ).input_ids
decoder_input_ids = tokenizer("Studies show that", return_tensors="np").input_ids
# preprocess: Prepend decoder_input_ids with start token which is pad token for T5Model.
# This is not needed for torch's T5ForConditionalGeneration as it does this internally using labels arg.
decoder_input_ids = model._shift_right(decoder_input_ids)
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
To learn more about how to prepare input_ids for pretraining, take a look at T5 Training.
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (T5Config) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = FlaxT5ForConditionalGeneration.from_pretrained("t5-small")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, return_tensors="np")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For training, decoder_input_ids should be provided.
encoder_outputs (tuple(tuple(jnp.ndarray))) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change the padding behavior, you should modify it to suit your needs. See diagram 1 in the
paper for more information on the default strategy.
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (T5Config) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
Example:
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration
import jax.numpy as jnp
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = FlaxT5ForConditionalGeneration.from_pretrained("t5-small")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, return_tensors="np")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
logits = outputs.logits
FlaxT5ForConditionalGeneration
class transformers.FlaxT5ForConditionalGeneration
(
config: T5Config
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: Array = None
decoder_attention_mask: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
To learn more about how to prepare input_ids for pretraining, take a look at T5 Training.
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
T5 uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
To learn more about how to prepare decoder_input_ids for pretraining, take a look at T5
Training.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
encoder_outputs (tuple(tuple(jnp.ndarray)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(jnp.ndarray)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (T5Config) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxT5PreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = FlaxT5ForConditionalGeneration.from_pretrained("t5-small")
ARTICLE_TO_SUMMARIZE = "summarize: My friends are cool but they eat too many carbs."
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], return_tensors="np")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"]).sequences
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=False))
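A hedged variant of the example above (an editorial sketch, not taken from the original reference) showing how common generation arguments could be passed; the exact set of supported arguments depends on the installed version's FlaxGenerationMixin:
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = FlaxT5ForConditionalGeneration.from_pretrained("t5-small")
inputs = tokenizer(["summarize: My friends are cool but they eat too many carbs."], return_tensors="np")
# max_length and num_beams are illustrative values, not recommendations.
summary_ids = model.generate(inputs["input_ids"], max_length=20, num_beams=2).sequences
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))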
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
To learn more about how to prepare input_ids for pretraining, take a look at T5 Training.
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (T5Config) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = FlaxT5ForConditionalGeneration.from_pretrained("t5-small")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, return_tensors="np")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For training, decoder_input_ids should be provided.
encoder_outputs (tuple(tuple(jnp.ndarray))) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change the padding behavior, you should modify it to suit your needs. See diagram 1 in the
paper for more information on the default strategy.
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (T5Config) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value
states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting.
Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration
import jax.numpy as jnp
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = FlaxT5ForConditionalGeneration.from_pretrained("t5-small")
text = "summarize: My friends are cool but they eat too many carbs."
inputs = tokenizer(text, return_tensors="np")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
logits = outputs.logits
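For step-by-step decoding, the past_key_values cache described above can be pre-allocated with init_cache and threaded through successive decode calls. The following is a minimal editorial sketch, assuming the init_cache(batch_size, max_length, encoder_outputs) helper referenced in the past_key_values description; in practice, generate() handles this loop for you:
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration
import jax.numpy as jnp
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = FlaxT5ForConditionalGeneration.from_pretrained("t5-small")
inputs = tokenizer("summarize: My friends are cool but they eat too many carbs.", return_tensors="np")
encoder_outputs = model.encode(**inputs)
batch_size = inputs.input_ids.shape[0]
max_length = 16  # illustrative upper bound on the number of generated tokens
# Pre-allocate the decoder cache; a static full-length mask is safe because the decoder is causal.
past_key_values = model.init_cache(batch_size, max_length, encoder_outputs)
decoder_attention_mask = jnp.ones((batch_size, max_length), dtype="i4")
decoder_input_ids = jnp.full((batch_size, 1), model.config.decoder_start_token_id, dtype="i4")
outputs = model.decode(
    decoder_input_ids,
    encoder_outputs,
    encoder_attention_mask=inputs.attention_mask,
    decoder_attention_mask=decoder_attention_mask,
    past_key_values=past_key_values,
)
past_key_values = outputs.past_key_values  # updated cache; pass it back in with only the newest token
next_token_logits = outputs.logits[:, -1, :]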
FlaxT5EncoderModel
class transformers.FlaxT5EncoderModel
(
config: T5Config
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
To learn more about how to prepare input_ids for pretraining, take a look at T5 Training.
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
The FlaxT5EncoderModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
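Example (an editorial sketch in the spirit of the surrounding examples; the encoder-only model returns a FlaxBaseModelOutput):
from transformers import AutoTokenizer, FlaxT5EncoderModel
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = FlaxT5EncoderModel.from_pretrained("t5-small")
inputs = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="np")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)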
MEGA
Overview
The MEGA model was proposed in Mega: Moving Average Equipped Gated Attention by Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer.
MEGA proposes a new approach to self-attention with each encoder layer having a multi-headed exponential moving average in addition to a single head of standard dot-product attention, giving the attention mechanism
stronger positional biases. This allows MEGA to perform competitively with Transformers on standard benchmarks including LRA
while also having significantly fewer parameters. MEGA’s compute efficiency allows it to scale to very long sequences, making it an
attractive option for long-document NLP tasks.
The abstract from the paper is the following:
The design choices in the Transformer attention mechanism, including weak inductive bias and quadratic computational complexity, have limited its application for modeling long sequences. In this paper, we introduce Mega, a simple, theoretically grounded, single-head gated attention mechanism equipped with (exponential) moving average to incorporate inductive bias of position-aware local dependencies into the position-agnostic attention mechanism. We further propose a variant of Mega that offers linear time and space complexity yet yields only minimal quality loss, by efficiently splitting the whole sequence into multiple chunks with fixed length. Extensive experiments on a wide range of sequence modeling benchmarks, including the Long Range Arena, neural machine translation, auto-regressive language modeling, and image and speech classification, show that Mega achieves significant improvements over other sequence models, including variants of Transformers and recent state space models.
Tips:
MEGA can perform quite well with relatively few parameters. See Appendix D in the MEGA paper for examples of architectural specs which perform well in various settings. If using MEGA as a decoder, be sure to set bidirectional=False to avoid errors with the default bidirectional setting.
Mega-chunk is a variant of MEGA that reduces time and space complexity from quadratic to linear. Enable chunking with MegaConfig.use_chunking and control the chunk size with MegaConfig.chunk_size (see the configuration sketch after the implementation notes below).
This model was contributed by mnaylor.
The original code can be found here.
Implementation Notes:
The original implementation of MEGA had an inconsistent expectation of attention masks for padding and causal self-attention between the softmax attention and Laplace/squared ReLU method. This implementation addresses that inconsistency.
The original implementation did not include token type embeddings; this implementation adds support for these, with the option controlled by MegaConfig.add_token_type_embeddings (also illustrated in the sketch below).
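As a hedged illustration of the tips and notes above (an editorial sketch, not from the original documentation), a chunked encoder configuration with token type embeddings might be built as follows; the parameter names come from the MegaConfig reference below, and the chunk size is an arbitrary example value:
from transformers import MegaConfig, MegaModel
# Linear-complexity "Mega-chunk" variant: inputs must be padded to a multiple of chunk_size.
config = MegaConfig(use_chunking=True, chunk_size=64, add_token_type_embeddings=True)
model = MegaModel(config)  # randomly initialized; use from_pretrained() to load trained weights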
MegaConfig
class transformers.MegaConfig
(
vocab_size = 30522
hidden_size = 128
num_hidden_layers = 4
intermediate_size = 256
ema_projection_size = 16
bidirectional = True
shared_representation_size = 64
use_chunking = False
chunk_size = -1
truncation = None
normalize_before_mega = True
normalization_type = 'scalenorm'
norm_affine = True
activation = 'silu'
attention_activation = 'softmax'
dropout_prob = 0.1
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
use_feature_dropout = False
use_normalized_ffn = True
nffn_hidden_size = 256
normalize_before_ffn = True
nffn_activation_dropout_prob = 0.1
max_positions = 2048
add_token_type_embeddings = False
type_vocab_size = 2
initializer_range = 0.02
ema_delta_alpha_range = 0.2
ema_beta_range = 0.02
ema_gamma_omega_range = 1.0
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
relative_positional_bias = 'rotary'
classifier_dropout = None
use_cache = True
add_lm_hidden_dense_layer = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the Mega model. Defines the number of different tokens that can be represented by the
input_ids passed when calling MegaModel.
hidden_size (int, optional, defaults to 128) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 4) —
Number of hidden layers in the Mega encoder.
intermediate_size (int, optional, defaults to 256) —
Dimensionality of the hidden size (self-attention value projection) within the Mega encoder
ema_projection_size (int, optional, defaults to 16) —
Dimensionality of the MegaMultiDimensionDampedEma
bidirectional (bool, optional, defaults to True) —
Whether the MegaMultiDimensionDampedEma used in Mega’s self-attention should work bidirectionally (True)
or unidirectionally (False). Bidirectional EMA is incompatible with causal decoding, so this should be
False if you intend to use the model as a decoder.
shared_representation_size (int, optional, defaults to 64) —
Dimensionality of the linear projection for shared representation of self-attention queries and keys
use_chunking (bool, optional, defaults to False) —
Whether to chunk inputs for linear self-attention complexity (described as Mega-chunk in the paper)
chunk_size (int, optional, defaults to -1) —
If use_chunking is set to True, determines the size of the chunks to apply to the input sequence. If
chunking is used, input sequences must be padded to a multiple of chunk_size
truncation (int, optional) —
If specified, the sequence length for which to truncate MegaMultiDimensionDampedEma
normalize_before_mega (bool, optional, defaults to True) —
Whether to normalize before (True) or after (False) passing through Mega encoder blocks
normalization_type (str, optional, defaults to "scalenorm") —
Type of normalization to use in Mega encoder blocks. Choose one of "scalenorm", "layernorm",
"rmsnorm", "batchnorm", or "syncbatchnorm" (GPU required for syncbatchnorm)
norm_affine (bool, optional, defaults to True) —
If True, applies a parameterized affine transformation to inputs during normalization
activation (str, optional, defaults to "silu") —
Activation function to apply within Mega encoder blocks. Choose one of "silu", "relu", "linear",
"gelu", or "gelu_accurate"
attention_activation (str, optional, defaults to "softmax") —
Activation function to apply for single-headed self-attention (a la Transformer). Choose one of
"softmax", "laplace", or "relu2"
dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for EMA self-attention
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
use_feature_dropout (bool, optional, defaults to False) —
Whether to use feature-based (True) or standard dropout (False)
use_normalized_ffn (bool, optional, defaults to True) —
Whether to use the normalized feed-forward sub-layer in Mega blocks (True) or pass Mega encoder output
as-is (False)
nffn_hidden_size (int, optional, defaults to 256) —
If using the normalized feed-forward network (NFFN) layer within Mega (use_normalized_ffn = True), this
is the hidden size of the NFFN
normalize_before_ffn (bool, optional, defaults to True) —
Whether to normalize before (True) or after (False) the feed-forward portion of NFFN
nffn_activation_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the NFFN component.
max_positions (int, optional, defaults to 2048) —
The maximum sequence length to use for positional representations. For "simple" relative positional bias,
this is a hard limit on input length; "rotary" relative positional bias will extrapolate to longer
sequences
add_token_type_embeddings (bool, optional, defaults to False) —
Whether to account for token types in embeddings. Left as optional to maintain compatibility with original
implementation while adding support for token types.
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling MegaModel. Only used if
add_token_type_embeddings = True
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
ema_delta_alpha_range (float, optional, defaults to 0.2) —
The standard deviation for initializing the delta (damping factor) and alpha (decay factor) parameters in
MegaMultiDimensionDampedEma.
ema_beta_range (float, optional, defaults to 0.02) —
The standard deviation for initializing the beta parameter (expansion matrix) in
MegaMultiDimensionDampedEma.
ema_gamma_omega_range (float, optional, defaults to 1.0) —
The standard deviation for initializing the gamma (projection matrix) and omega (residual weight)
parameters in MegaMultiDimensionDampedEma.
relative_positional_bias (str, optional, defaults to "rotary") —
Type of relative positional encoding. Choose one of "rotary" or "simple". If "simple" is selected,
max_positions is used as a limit on input size, while "rotary" extrapolates beyond max_positions.
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
add_lm_hidden_dense_layer (bool, optional, defaults to True) —
Whether to include a hidden layer for projection between encoder outputs and LM heads (True) or pass
hidden states directly to LM head (False). Remains optional for compatibility with original
implementation
This is the configuration class to store the configuration of a MegaModel. It is used to instantiate a Mega
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Mega
mnaylor/mega-base-wikitext architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import MegaConfig, MegaModel
# Initializing a Mega configuration
configuration = MegaConfig()
# Initializing a model (with random weights) from the configuration
model = MegaModel(configuration)
# Accessing the model configuration
configuration = model.config
MegaModel
class transformers.MegaModel
(
config: MegaConfig
add_pooling_layer = True
)
Parameters
config (MegaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MEGA Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added after self-attention, following the architecture described in Mega: Moving Average
Equipped Gated Attention (https://arxiv.org/abs/2209.10655) by Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He,
Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer.
To behave as a decoder, the model needs to be initialized with the is_decoder argument of the configuration set to
True and bidirectional set to False. To be used in a Seq2Seq model, the model needs to be initialized with both
is_decoder=True and bidirectional=False, as well as add_cross_attention set to True; an
encoder_hidden_states is then expected as an input to the forward pass.
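A minimal editorial sketch of the decoder setup described above (add_cross_attention is the standard PretrainedConfig flag; this is an illustration, not the library's canonical recipe):
from transformers import MegaConfig, MegaModel
# Causal (decoder-style) configuration: unidirectional EMA plus cross-attention layers.
config = MegaConfig(is_decoder=True, bidirectional=False, add_cross_attention=True)
decoder = MegaModel(config)
# encoder_hidden_states from a separate encoder are then passed to the decoder's forward pass.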
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with the add_token_type_embeddings parameter
set to True. All values in this tensor should always be < config.type_vocab_size.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MegaConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g., for the BERT family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The MegaModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MegaModel
import torch
tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
model = MegaModel.from_pretrained("mnaylor/mega-base-wikitext")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
MegaForCausalLM
class transformers.MegaForCausalLM
(
config: MegaConfig
)
Parameters
config (MegaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MEGA Model with a language modeling head on top for CLM fine-tuning.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Tuple[typing.Tuple[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with the add_token_type_embeddings parameter
set to True. All values in this tensor should always be < config.type_vocab_size.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are
ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MegaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attention weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key and
value states of the self-attention and the cross-attention layers if the model is used in an encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The MegaForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MegaForCausalLM, AutoConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
config = AutoConfig.from_pretrained("mnaylor/mega-base-wikitext")
config.is_decoder = True
config.bidirectional = False
model = MegaForCausalLM.from_pretrained(
    "mnaylor/mega-base-wikitext", config=config, ignore_mismatched_sizes=True
)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
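The use_cache and past_key_values arguments documented above can be chained for step-by-step decoding. The snippet below is a minimal greedy-decoding sketch (not the library's generate() utility): after the first forward pass, only the newly generated token is fed back together with the returned cache. It reuses the decoder configuration from the example above; the number of generated tokens is arbitrary.
from transformers import AutoTokenizer, AutoConfig, MegaForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
config = AutoConfig.from_pretrained("mnaylor/mega-base-wikitext")
config.is_decoder = True
config.bidirectional = False
model = MegaForCausalLM.from_pretrained(
    "mnaylor/mega-base-wikitext", config=config, ignore_mismatched_sizes=True
)
generated = tokenizer("Hello, my dog is", return_tensors="pt").input_ids
next_input, past_key_values = generated, None
for _ in range(5):  # generate 5 tokens greedily
    outputs = model(input_ids=next_input, past_key_values=past_key_values, use_cache=True)
    next_input = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)  # token of shape (batch_size, 1)
    past_key_values = outputs.past_key_values  # cached key/value states returned by the model
    generated = torch.cat([generated, next_input], dim=-1)  # only the new token is fed on later steps
print(tokenizer.decode(generated[0]))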
MegaForMaskedLM
class transformers.MegaForMaskedLM
(
config: MegaConfig
)
Parameters
config (MegaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MEGA Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with the add_token_type_embeddings parameter
set to True. All values in this tensor should always be < config.type_vocab_size.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
kwargs (Dict[str, any], optional, defaults to {}) —
Used to hide legacy arguments that have been deprecated.
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MegaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MegaForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MegaForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
model = MegaForMaskedLM.from_pretrained("mnaylor/mega-base-wikitext")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
print(tokenizer.decode(predicted_token_id))  # ' Paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-<mask> tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
print(round(outputs.loss.item(), 2))  # 0.1
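If you only need the top predictions, the same masked-word task can also be run through the higher-level fill-mask pipeline. A short sketch, assuming this checkpoint resolves through AutoModelForMaskedLM:
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="mnaylor/mega-base-wikitext")
for prediction in fill_mask("The capital of France is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))  # top candidate tokens with scores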
MegaForSequenceClassification
class transformers.MegaForSequenceClassification
(
config
)
Parameters
config (MegaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MEGA Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with the add_token_type_embeddings parameter
set to True. All values in this tensor should always be < config.type_vocab_size.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MegaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MegaForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, MegaForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
model = MegaForSequenceClassification.from_pretrained("mnaylor/mega-base-wikitext")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = MegaForSequenceClassification.from_pretrained("mnaylor/mega-base-wikitext", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, MegaForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
model = MegaForSequenceClassification.from_pretrained("mnaylor/mega-base-wikitext", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = MegaForSequenceClassification.from_pretrained(
    "mnaylor/mega-base-wikitext", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
MegaForMultipleChoice
class transformers.MegaForMultipleChoice
(
config
)
Parameters
config (MegaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MEGA Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with the add_token_type_embeddings parameter
set to True. All values in this tensor should always be < config.type_vocab_size.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MegaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MegaForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MegaForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
model = MegaForMultipleChoice.from_pretrained("mnaylor/mega-base-wikitext")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
MegaForTokenClassification
class transformers.MegaForTokenClassification
(
config
)
Parameters
config (MegaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MEGA Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with the add_token_type_embeddings parameter
set to True. All values in this tensor should always be < config.type_vocab_size.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MegaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MegaForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MegaForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
model = MegaForTokenClassification.from_pretrained("mnaylor/mega-base-wikitext")
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word.
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
MegaForQuestionAnswering
class transformers.MegaForQuestionAnswering
(
config
)
Parameters
config (MegaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MEGA Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with the add_token_type_embeddings parameter
set to True. All values in this tensor should always be < config.type_vocab_size.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MegaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MegaForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MegaForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("mnaylor/mega-base-wikitext")
model = MegaForQuestionAnswering.from_pretrained("mnaylor/mega-base-wikitext")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
CANINE
Overview
The CANINE model was proposed in CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language
Representation by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. It’s
among the first papers that train a Transformer without using an explicit tokenization step (such as Byte Pair
Encoding (BPE), WordPiece or SentencePiece). Instead, the model is trained directly at the Unicode character level.
Training at the character level inevitably comes with longer sequence lengths, which CANINE solves with an efficient
downsampling strategy before applying a deep Transformer encoder.
The abstract from the paper is the following:
Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models
still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword
lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all
languages, and the use of any fixed vocabulary may limit a model’s ability to adapt. In this paper, we present CANINE,
a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a
pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias.
To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input
sequence length, with a deep transformer stack, which encodes context. CANINE outperforms a comparable mBERT model by
2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28% fewer model parameters.
Tips:
CANINE uses no less than 3 Transformer encoders internally: 2 “shallow” encoders (which only consist of a single
layer) and 1 “deep” encoder (which is a regular BERT encoder). First, a “shallow” encoder is used to contextualize
the character embeddings, using local attention. Next, after downsampling, a “deep” encoder is applied. Finally,
after upsampling, a “shallow” encoder is used to create the final character embeddings. Details regarding up- and
downsampling can be found in the paper.
CANINE uses a max sequence length of 2048 characters by default. One can use CanineTokenizer
to prepare text for the model.
Classification can be done by placing a linear layer on top of the final hidden state of the special [CLS] token
(which has a predefined Unicode code point); a minimal sketch of this follows after these tips. For token classification tasks, however, the downsampled sequence of
tokens needs to be upsampled again to match the length of the original character sequence (which is 2048). The
details for this can be found in the paper.
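As referenced in the classification tip above, the pooled [CLS] representation returned by CanineModel can be fed to any classifier. The linear head below is untrained and purely illustrative:
import torch
from transformers import CanineTokenizer, CanineModel
tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
model = CanineModel.from_pretrained("google/canine-c")
classifier = torch.nn.Linear(model.config.hidden_size, 2)  # hypothetical 2-way classification head
inputs = tokenizer(["nice movie", "terrible movie"], padding="longest", return_tensors="pt")
with torch.no_grad():
    pooled = model(**inputs).pooler_output  # pooled final hidden state of the [CLS] token
logits = classifier(pooled)  # shape (batch_size, 2)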
Models:
google/canine-c: Pre-trained with autoregressive character loss,
12-layer, 768-hidden, 12-heads, 121M parameters (size ~500 MB).
google/canine-s: Pre-trained with subword loss, 12-layer,
768-hidden, 12-heads, 121M parameters (size ~500 MB).
This model was contributed by nielsr. The original code can be found here.
Example
CANINE works on raw characters, so it can be used without a tokenizer:
from transformers import CanineModel
import torch
model = CanineModel.from_pretrained("google/canine-c") # model pre-trained with autoregressive character loss
text = "hello world"
# use Python's built-in ord() function to turn each character into its unicode code point id
input_ids = torch.tensor([[ord(char) for char in text]])
outputs = model(input_ids) # forward pass
pooled_output = outputs.pooler_output
sequence_output = outputs.last_hidden_state
For batched inference and training, however, it is recommended to use the tokenizer (to pad/truncate all
sequences to the same length):
from transformers import CanineTokenizer, CanineModel
model = CanineModel.from_pretrained("google/canine-c")
tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."]
encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt")
outputs = model(**encoding) # forward pass
pooled_output = outputs.pooler_output
sequence_output = outputs.last_hidden_state
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Multiple choice task guide
CANINE specific outputs
class transformers.models.canine.modeling_canine.CanineModelOutputWithPooling
(
last_hidden_state: FloatTensor = None
pooler_output: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the model (i.e. the output of the final
shallow Transformer encoder).
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) —
Hidden-state of the first token of the sequence (classification token) at the last layer of the deep
Transformer encoder, further processed by a Linear layer and a Tanh activation function. The Linear layer
weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the input to each encoder + one for the output of each layer of each
encoder) of shape (batch_size, sequence_length, hidden_size) and (batch_size, sequence_length // config.downsampling_rate, hidden_size). Hidden-states of the model at the output of each layer plus the
initial input to each Transformer encoder. The hidden states of the shallow encoders have length
sequence_length, but the hidden states of the deep encoder have length sequence_length //
config.downsampling_rate.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of the 3 Transformer encoders of shape (batch_size, num_heads, sequence_length, sequence_length) and (batch_size, num_heads, sequence_length // config.downsampling_rate, sequence_length // config.downsampling_rate). Attention weights after the
attention softmax, used to compute the weighted average in the self-attention heads.
Output type of CanineModel. Based on BaseModelOutputWithPooling, but with slightly
different hidden_states and attentions, as these also include the hidden states and attentions of the shallow
Transformer encoders.
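A quick way to see the difference described above is to request the hidden states and print their shapes; the shallow encoders keep the character-level length while the deep encoder works on the downsampled length. A small sketch:
import torch
from transformers import CanineTokenizer, CanineModel
tokenizer = CanineTokenizer.from_pretrained("google/canine-s")
model = CanineModel.from_pretrained("google/canine-s")
inputs = tokenizer("hello world", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)
for i, hidden in enumerate(outputs.hidden_states):
    print(i, tuple(hidden.shape))  # character-level vs. downsampled sequence lengths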
CanineConfig
class transformers.CanineConfig
(
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 16384
type_vocab_size = 16
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
bos_token_id = 57344
eos_token_id = 57345
downsampling_rate = 4
upsampling_kernel_size = 4
num_hash_functions = 8
num_hash_buckets = 16384
local_transformer_stride = 128
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the deep Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoders.
intermediate_size (int, optional, defaults to 3072) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoders.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoders, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 16384) —
The maximum sequence length that this model might ever be used with.
type_vocab_size (int, optional, defaults to 16) —
The vocabulary size of the token_type_ids passed when calling CanineModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
downsampling_rate (int, optional, defaults to 4) —
The rate at which to downsample the original character sequence length before applying the deep Transformer
encoder.
upsampling_kernel_size (int, optional, defaults to 4) —
The kernel size (i.e. the number of characters in each window) of the convolutional projection layer when
projecting back from hidden_size*2 to hidden_size.
num_hash_functions (int, optional, defaults to 8) —
The number of hash functions to use. Each hash function has its own embedding matrix.
num_hash_buckets (int, optional, defaults to 16384) —
The number of hash buckets to use.
local_transformer_stride (int, optional, defaults to 128) —
The stride of the local attention of the first shallow Transformer encoder. Defaults to 128 for good
TPU/XLA memory alignment.
This is the configuration class to store the configuration of a CanineModel. It is used to instantiate a
CANINE model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the CANINE
google/canine-s architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import CanineConfig, CanineModel
# Initializing a CANINE google/canine-s style configuration
configuration = CanineConfig()
# Initializing a model (with random weights) from the google/canine-s style configuration
model = CanineModel(configuration)
# Accessing the model configuration
configuration = model.config
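Any of the configuration arguments documented above can also be overridden at construction time; the values below are purely illustrative:
from transformers import CanineConfig, CanineModel
configuration = CanineConfig(
    hidden_size=512,        # smaller encoder width
    num_hidden_layers=6,    # fewer layers in the deep encoder
    num_attention_heads=8,  # must divide hidden_size evenly
)
model = CanineModel(configuration)  # randomly initialized weights with the custom architecture
print(model.config.num_hidden_layers)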
CanineTokenizer
class transformers.CanineTokenizer
(
bos_token = '\ue000'
eos_token = '\ue001'
sep_token = '\ue001'
cls_token = '\ue000'
pad_token = '\x00'
mask_token = '\ue003'
add_prefix_space = False
model_max_length = 2048
**kwargs
)
Parameters
model_max_length (int, optional, defaults to 2048) —
The maximum sentence length the model accepts.
Construct a CANINE tokenizer (i.e. a character splitter). It turns text into a sequence of characters, and then
converts each character into its Unicode code point.
CanineTokenizer inherits from PreTrainedTokenizer.
Refer to the superclass PreTrainedTokenizer for usage examples and documentation concerning parameters.
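As a quick sanity check of the description above, the IDs produced by the tokenizer are simply the characters' Unicode code points, framed by the special tokens. A small sketch:
from transformers import CanineTokenizer
tokenizer = CanineTokenizer.from_pretrained("google/canine-s")
text = "héllo"
encoding = tokenizer(text)
print(encoding["input_ids"])         # [CLS] + code points + [SEP]
print([ord(char) for char in text])  # matches the middle of the list above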
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A CANINE sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
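For illustration, a small sketch calling this method directly on two already-encoded character sequences (the special tokens are the private-use code points listed in the class signature above):
from transformers import CanineTokenizer
tokenizer = CanineTokenizer.from_pretrained("google/canine-s")
ids_a = tokenizer("hi", add_special_tokens=False)["input_ids"]
ids_b = tokenizer("yo", add_special_tokens=False)["input_ids"]
print(tokenizer.build_inputs_with_special_tokens(ids_a))         # [CLS] h i [SEP]
print(tokenizer.build_inputs_with_special_tokens(ids_a, ids_b))  # [CLS] h i [SEP] y o [SEP]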
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A CANINE
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
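A short sketch of the mask produced for a pair of character sequences, matching the layout shown above:
from transformers import CanineTokenizer
tokenizer = CanineTokenizer.from_pretrained("google/canine-s")
ids_a = tokenizer("hi", add_special_tokens=False)["input_ids"]
ids_b = tokenizer("yo", add_special_tokens=False)["input_ids"]
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
print(token_type_ids)  # 0s for [CLS] + first sequence + [SEP], 1s for second sequence + [SEP]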
CanineModel
class transformers.CanineModel
(
config
add_pooling_layer = True
)
Parameters
config (CanineConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare CANINE Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.canine.modeling_canine.CanineModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.canine.modeling_canine.CanineModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.models.canine.modeling_canine.CanineModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CanineConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model (i.e. the output of the final
shallow Transformer encoder).
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Hidden-state of the first token of the sequence (classification token) at the last layer of the deep
Transformer encoder, further processed by a Linear layer and a Tanh activation function. The Linear layer
weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the input to each encoder + one for the output of each layer of each
encoder) of shape (batch_size, sequence_length, hidden_size) and (batch_size, sequence_length // config.downsampling_rate, hidden_size). Hidden-states of the model at the output of each layer plus the
initial input to each Transformer encoder. The hidden states of the shallow encoders have length
sequence_length, but the hidden states of the deep encoder have length sequence_length //
config.downsampling_rate.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of the 3 Transformer encoders of shape (batch_size, num_heads, sequence_length, sequence_length) and (batch_size, num_heads, sequence_length // config.downsampling_rate, sequence_length // config.downsampling_rate). Attention weights after the
attention softmax, used to compute the weighted average in the self-attention heads.
The CanineModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, CanineModel
import torch
tokenizer = AutoTokenizer.from_pretrained("google/canine-s")
model = CanineModel.from_pretrained("google/canine-s")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
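Building on the example above, passing output_hidden_states=True exposes the pooled output and the per-encoder hidden states described in the Returns section; this is a minimal sketch, and the shape comments simply restate that description:
outputs = model(**inputs, output_hidden_states=True)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
print(outputs.pooler_output.shape)      # (batch_size, hidden_size)
# Shallow-encoder states keep the full sequence_length; the deep encoder's are downsampled
print([tuple(h.shape) for h in outputs.hidden_states])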
CanineForSequenceClassification
class transformers.CanineForSequenceClassification
(
config
)
Parameters
config (CanineConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CANINE Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CanineConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CanineForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, CanineForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("google/canine-s")
model = CanineForSequenceClassification.from_pretrained("google/canine-s")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = CanineForSequenceClassification.from_pretrained("google/canine-s", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, CanineForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("google/canine-s")
model = CanineForSequenceClassification.from_pretrained("google/canine-s", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = CanineForSequenceClassification.from_pretrained(
... "google/canine-s", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
CanineForMultipleChoice
class transformers.CanineForMultipleChoice
(
config
)
Parameters
config (CanineConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CANINE Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CanineConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CanineForMultipleChoice forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, CanineForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("google/canine-s")
model = CanineForMultipleChoice.from_pretrained("google/canine-s")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
CanineForTokenClassification
class transformers.CanineForTokenClassification
(
config
)
Parameters
config (CanineConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CANINE Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CanineConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CanineForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, CanineForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("google/canine-s")
model = CanineForTokenClassification.from_pretrained("google/canine-s")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
predicted_tokens_classes
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
CanineForQuestionAnswering
class transformers.CanineForQuestionAnswering
(
config
)
Parameters
config (CanineConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CANINE Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CanineConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CanineForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, CanineForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("Splend1dchan/canine-c-squad")
model = CanineForQuestionAnswering.from_pretrained("Splend1dchan/canine-c-squad")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
'nice puppet'
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
round(loss.item(), 2)
8.81
GPTSAN-japanese
Overview
The GPTSAN-japanese model was released in the repository by Toshiyuki Sakamoto (tanreinama).
GPTSAN is a Japanese language model using Switch Transformer. It has the same structure as the model introduced as Prefix LM
in the T5 paper, and supports both Text Generation and Masked Language Modeling tasks. These basic tasks can similarly be
fine-tuned for translation or summarization.
Generation
The generate() method can be used to generate text with the GPTSAN-japanese model.
from transformers import AutoModel, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").cuda()
x_tok = tokenizer("は、", prefix_text="織田信長", return_tensors="pt")
torch.manual_seed(0)
gen_tok = model.generate(x_tok.input_ids.cuda(), token_type_ids=x_tok.token_type_ids.cuda(), max_new_tokens=20)
tokenizer.decode(gen_tok[0])
'織田信長は、2004年に『戦国BASARA』のために、豊臣秀吉'
GPTSAN Features
GPTSAN has some unique features. It has the model structure of a Prefix-LM: it works as a shifted Masked Language Model for prefix input tokens, while un-prefixed inputs behave like a normal generative model.
The Spout vector is a GPTSAN specific input. Spout is pre-trained with random inputs, but you can specify a class of text or an arbitrary vector during fine-tuning. This allows you to indicate the tendency of the generated text.
GPTSAN has a sparse Feed Forward based on Switch-Transformer. You can also add other layers and train them partially. See the original GPTSAN repository for details.
Prefix-LM Model
GPTSAN has the structure of the model named Prefix-LM in the T5 paper (the original GPTSAN repository calls it hybrid).
In GPTSAN, the Prefix part of the Prefix-LM, that is, the input positions that can be attended to by tokens both before and after them, can be specified with any length.
Different lengths can also be specified for each batch.
This length applies to the text entered in prefix_text for the tokenizer.
The tokenizer returns the mask of the Prefix part of the Prefix-LM as token_type_ids.
The model treats positions where token_type_ids is 1 as the Prefix part, that is, positions that can attend to tokens both before and after them.
Tips:
Specifying the Prefix part is done with a mask passed to self-attention.
When token_type_ids=None or all zero, it is equivalent to a regular causal mask.
For example (a small sketch of this rule follows the tables below):
x_token = tokenizer("アイウエ")
input_ids: | SOT | SEG | ア | イ | ウ | エ |
token_type_ids: | 1 | 0 | 0 | 0 | 0 | 0 |
prefix_lm_mask:
SOT | 1 0 0 0 0 0 |
SEG | 1 1 0 0 0 0 |
ア | 1 1 1 0 0 0 |
イ | 1 1 1 1 0 0 |
ウ | 1 1 1 1 1 0 |
エ | 1 1 1 1 1 1 |
x_token = tokenizer("", prefix_text=“アイウエ”)
input_ids: | SOT | ア | イ | ウ | エ | SEG |
token_type_ids: | 1 | 1 | 1 | 1 | 1 | 0 |
prefix_lm_mask:
SOT | 1 1 1 1 1 0 |
ア | 1 1 1 1 1 0 |
イ | 1 1 1 1 1 0 |
ウ | 1 1 1 1 1 0 |
エ | 1 1 1 1 1 0 |
SEG | 1 1 1 1 1 1 |
x_token = tokenizer("ウエ", prefix_text="アイ")
input_ids: | SOT | ア | イ | SEG | ウ | エ |
token_type_ids: | 1 | 1 | 1 | 0 | 0 | 0 |
prefix_lm_mask:
SOT | 1 1 1 0 0 0 |
ア | 1 1 1 0 0 0 |
イ | 1 1 1 0 0 0 |
SEG | 1 1 1 1 0 0 |
ウ | 1 1 1 1 1 0 |
エ | 1 1 1 1 1 1 |
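The prefix_lm_mask tables above follow a single rule: a position can attend to every earlier position (causal part) plus every position marked as prefix in token_type_ids. The snippet below is a minimal sketch of that rule for illustration only, not the library's internal implementation:
import torch

def prefix_lm_mask(token_type_ids: torch.Tensor) -> torch.Tensor:
    # Position i may attend to position j if j <= i (causal) or token_type_ids[j] == 1 (prefix)
    seq_len = token_type_ids.shape[-1]
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    prefix = token_type_ids.bool().unsqueeze(0).expand(seq_len, seq_len)
    return (causal | prefix).long()

# Reproduces the last table above, i.e. tokenizer("ウエ", prefix_text="アイ")
print(prefix_lm_mask(torch.tensor([1, 1, 1, 0, 0, 0])))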
Spout Vector
A Spout Vector is a special vector for controlling text generation.
This vector is treated as the first embedding in self-attention to bring external attention to the generated tokens.
In the pre-trained model published as Tanrei/GPTSAN-japanese, the Spout Vector is a 128-dimensional vector that passes through 8 fully connected layers in the model and is projected into the space acting as external attention.
The Spout Vector projected by the fully connected layers is split and passed to all self-attention layers.
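A minimal sketch of passing a Spout vector to the model, assuming the Tanrei/GPTSAN-japanese checkpoint and a random control vector (in practice the vector would come from fine-tuning):
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese")

x_tok = tokenizer("は、", prefix_text="織田信長", return_tensors="pt")
spout = torch.rand(1, model.config.d_spout)  # arbitrary 128-dimensional vector, batch size 1
outputs = model(
    input_ids=x_tok.input_ids,
    token_type_ids=x_tok.token_type_ids,
    spout=spout,
)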
GPTSanJapaneseConfig
class transformers.GPTSanJapaneseConfig
(
vocab_size = 36000
max_position_embeddings = 1280
d_model = 1024
d_ff = 8192
d_ext = 4096
d_spout = 128
num_switch_layers = 10
num_ext_layers = 0
num_heads = 16
num_experts = 16
expert_capacity = 128
dropout_rate = 0.0
layer_norm_epsilon = 1e-05
router_bias = False
router_jitter_noise = 0.0
router_dtype = 'float32'
router_ignore_padding_tokens = False
output_hidden_states = False
output_attentions = False
initializer_factor = 0.002
output_router_logits = False
use_cache = True
separator_token_id = 35998
pad_token_id = 35995
eos_token_id = 35999
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 36000) —
Vocabulary size of the GPTSANJapanese model. Defines the number of different tokens that can be represented
by the inputs_ids passed when calling GPTSanJapaneseModel.
max_position_embeddings (int, optional, defaults to 1280) —
The maximum sequence length that this model might ever be used with. Defaults set this to 1280.
d_model (int, optional, defaults to 1024) —
Size of the encoder layers and the pooler layer.
d_ff (int, optional, defaults to 8192) —
Size of the intermediate feed forward layer in each SwitchTransformersBlock.
d_ext (int, optional, defaults to 4096) —
Size of the intermediate feed forward layer in each of the extra layers.
d_spout (int, optional, defaults to 128) —
Size of the spout vector.
num_switch_layers (int, optional, defaults to 10) —
Number of Switch Transformer layers.
num_ext_layers (int, optional, defaults to 0) —
Number of extra layers.
num_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
num_experts (int, optional, defaults to 16) —
Number of experts for each SwitchTransformer layer.
expert_capacity (int, optional, defaults to 128) —
Number of tokens that can be stored in each expert. If set to 1, the model will behave like a regular
Transformer.
dropout_rate (float, optional, defaults to 0.0) —
The ratio for all dropout layers.
layer_norm_epsilon (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
router_bias (bool, optional, defaults to False) —
Whether to add a bias to the router.
router_jitter_noise (float, optional, defaults to 0.0) —
Amount of noise to add to the router. Set it to 0.0 during prediction or set a small value (usually 1e-2)
during training.
router_dtype (str, optional, defaults to "float32") —
The dtype used for the routers. It is preferable to keep the dtype to "float32" as specified in the
selective precision discussion in the paper.
router_ignore_padding_tokens (bool, optional, defaults to False) —
Whether to ignore padding tokens when routing.
output_hidden_states (bool, optional, defaults to False) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_attentions (bool, optional, defaults to False) —
Whether or not to return the attentions tensors of all attention layers.
initializer_factor (float, optional, defaults to 0.002) —
A factor for initializing all weight matrices.
output_router_logits (bool, optional, defaults to False) —
Whether or not to return the router logits of all experts.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models)
This is the configuration class to store the configuration of a GPTSanJapaneseModel. It is used to instantiate
a GPTSANJapanese model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the GPTSANJapanese
Tanrei/GPTSAN-japanese architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
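A short usage sketch following the usual configuration pattern in the library:
from transformers import GPTSanJapaneseConfig, GPTSanJapaneseModel

# Initializing a configuration with the defaults (Tanrei/GPTSAN-japanese style)
configuration = GPTSanJapaneseConfig()

# Initializing a model with random weights from that configuration
model = GPTSanJapaneseModel(configuration)

# Accessing the model configuration
configuration = model.config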
GPTSanJapaneseTokenizer
class transformers.GPTSanJapaneseTokenizer
(
vocab_file
emoji_file
unk_token = '<|nottoken|>'
pad_token = '<|separator|>'
bos_token = '<|startoftext|>'
eos_token = '<|endoftext|>'
sep_token = '<|segmenter|>'
do_clean_text = False
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
emoji_file (str) —
File containing the emoji.
unk_token (str, optional, defaults to "<|nottoken|>") —
The token used for an unknown character.
pad_token (str, optional, defaults to "<|separator|>") —
The token used for padding.
bos_token (str, optional, defaults to "<|startoftext|>") —
The beginning of sequence token.
eos_token (str, optional, defaults to "<|endoftext|>") —
The end of sequence token.
sep_token (str, optional, defaults to "<|segmenter|>") —
A special token separating the prefix part from the general input part.
do_clean_text (bool, optional, defaults to False) —
Whether or not to clean text for URL, EMAIL, TEL, Japanese DATE and Japanese PRICE.
This tokenizer is based on GPTNeoXJapaneseTokenizer and has the following modifications:
Decoding byte0~byte255 tokens correctly
Added bagofword token handling
Returning token_type_ids for the Prefix-LM model
The bagofword token represents a repetition of the previous token and is converted to 3 consecutive tokens when
decoding. In addition, the original Japanese special Sub-Word-Encoding has been released in this repository
(https://github.com/tanreinama/Japanese-BPEEncoder_V2). The token_type_ids is a mask indicating the prefix input
positions of the Prefix-LM model. To specify a prefix position, pass a prefix input as prefix_text, or pass a
sentence for the prefix part and the part after it as a text pair in batch input.
Example:
from transformers import GPTSanJapaneseTokenizer
tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
# You can confirm both 慶応 and 慶應 are encoded to 17750
tokenizer("吾輩は猫である🐯。実は慶応(慶應)大学出身")["input_ids"]
[35993, 35998, 34347, 31459, 30647, 31448, 25, 30659, 35729, 35676, 32417, 30647, 17750, 35589, 17750, 35590, 321, 1281]
# Both 慶応 and 慶應 are decoded to 慶応
tokenizer.decode(tokenizer("吾輩は猫である🐯。実は慶応(慶應)大学出身")["input_ids"])
'吾輩は猫である🐯。実は慶応(慶応)大学出身'
Example for Prefix-LM:
from transformers import GPTSanJapaneseTokenizer
tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
tokenizer("実は慶応(慶應)大学出身", prefix_text="吾輩は猫である🐯。")["input_ids"]
[35993, 34347, 31459, 30647, 31448, 25, 30659, 35729, 35676, 35998, 32417, 30647, 17750, 35589, 17750, 35590, 321, 1281]
# Mask for Prefix-LM inputs
tokenizer("実は慶応(慶應)大学出身", prefix_text="吾輩は猫である🐯。")["token_type_ids"]
[1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Example for batch encode:
from transformers import GPTSanJapaneseTokenizer
tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
tokenizer([["武田信玄", "は、"], ["織田信長", "の配下の、"]], padding=True)["input_ids"]
[[35993, 8640, 25948, 35998, 30647, 35675, 35999, 35999], [35993, 10382, 9868, 35998, 30646, 9459, 30646, 35675]]
# Mask for Prefix-LM inputs
tokenizer([["武田信玄", "は、"], ["織田信長", "の配下の、"]], padding=True)["token_type_ids"]
[[1, 1, 1, 0, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0, 0, 0]]
# Mask for padding
tokenizer([["武田信玄", "は、"], ["織田信長", "の配下の、"]], padding=True)["attention_mask"]
[[1, 1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1]]
convert_tokens_to_string
(
tokens
)
Converts a sequence of tokens (strings) into a single string.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
The tokenizer returns token_type_ids to separate the Prefix part from the rest of the input.
token_type_ids is 1 for the Prefix part and 0 for the rest of the tokens.
Example:
from transformers import GPTSanJapaneseTokenizer
tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
x_token = tokenizer("アイウエ")
# input_ids: | SOT | SEG | ア | イ | ウ | エ |
# token_type_ids: | 1 | 0 | 0 | 0 | 0 | 0 |
x_token = tokenizer("", prefix_text="アイウエ")
# input_ids: | SOT | ア | イ | ウ | エ | SEG |
# token_type_ids: | 1 | 1 | 1 | 1 | 1 | 0 |
x_token = tokenizer("ウエ", prefix_text="アイ")
# input_ids: | SOT | ア | イ | SEG | ウ | エ |
# token_type_ids: | 1 | 1 | 1 | 0 | 0 | 0 |
GPTSanJapaneseModel
class transformers.GPTSanJapaneseModel
(
config: GPTSanJapaneseConfig
)
Parameters
config (GPTSanJapaneseConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare GPTSAN-japanese Model transformer outputting raw hidden-states without any specific head on top.
The GPTSAN-japanese model was proposed in General-purpose Switch transformer
based Japanese language model.
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.FloatTensor] = None
spout: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
head_mask: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = False
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
output_router_logits: typing.Optional[bool] = None
num_precontext: typing.Optional[torch.LongTensor] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. GPTSAN-japanese is a model that generates sentence
continuations or predicts tokens at mask positions. Special tokens required for inputs to the model are
automatically appended.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
An input that masks the Prefix part in the Prefix-LM input. Mask values selected in [0, 1]:
1 for tokens that are prefix input,
0 for tokens that are not-prefix input.
spout (torch.Tensor of shape (batch_size, config.d_spout)) —
This vector is transformed through an 8-layer FFN and can be used instead of past_key_values.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
router_logits (tuple(torch.FloatTensor), optional, returned when output_router_logits=True is passed or when config.add_router_probs=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts).
Router logits of the decoder model, useful to compute the auxiliary loss for Mixture of Experts models.
num_precontext (torch.LongTensor of shape (batch_size, 1)) —
Length of the hybrid (prefix) input tokens in the input. Tokens up to this length attend to both preceding and
following tokens, like BERT; tokens after that attend only to preceding tokens, like GPT. See also:
https://github.com/tanreinama/GPTSAN/blob/main/report/model.md
The GPTSanJapaneseModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
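A minimal sketch of a bare forward pass, assuming the Tanrei/GPTSAN-japanese checkpoint and the standard last_hidden_state output field:
import torch
from transformers import AutoTokenizer, GPTSanJapaneseModel

tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
model = GPTSanJapaneseModel.from_pretrained("Tanrei/GPTSAN-japanese")

# prefix_text marks the prefix part via token_type_ids, giving it bidirectional attention
inputs = tokenizer("は、", prefix_text="織田信長", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

last_hidden_state = outputs.last_hidden_state  # raw hidden states, no head on top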
GPTSanJapaneseForConditionalGeneration
class transformers.GPTSanJapaneseForConditionalGeneration
(
config: GPTSanJapaneseConfig
)
Parameters
config (GPTSanJapaneseConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPTSAN-japanese Model with a language modeling head on top.
The GPTSAN-japanese model was proposed in General-purpose Switch transformer
based Japanese language model.
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.FloatTensor] = None
spout: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
head_mask: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = False
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
output_router_logits: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. GPTSAN-japanese is a model that generates sentence
continuations or predicts tokens at mask positions. Special tokens required for inputs to the model are
automatically appended.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
An input that masks the Prefix part in the Prefix-LM input. Mask values selected in [0, 1]:
1 for tokens that are prefix input,
0 for tokens that are not-prefix input.
spout (torch.Tensor of shape (batch_size, config.d_spout)) —
This vector is transformed through an 8-layer FFN and can be used instead of past_key_values.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
router_logits (tuple(torch.FloatTensor), optional, returned when output_router_logits=True is passed or when config.add_router_probs=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts).
Router logits of the decoder model, useful to compute the auxiliary loss for Mixture of Experts models.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification loss. Indices should be in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked); the loss is only computed for
labels in [0, ..., config.vocab_size - 1].
The GPTSanJapaneseForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Text Generation with regular LM Model
from transformers import AutoModel, AutoTokenizer, trainer_utils
device = "cuda"
model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").to(device)
tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
x_token = tokenizer("織田信長は、", return_tensors="pt")
trainer_utils.set_seed(30)
input_ids = x_token.input_ids.to(device)
gen_token = model.generate(input_ids, max_new_tokens=50)
tokenizer.decode(gen_token[0])
"織田信長は、政治・軍事の中枢まで掌握した政治家であり、日本史上類を見ない驚異的な軍事侵攻を続け..."
Text Generation with Prefix-LM Model
from transformers import AutoModel, AutoTokenizer, trainer_utils
device = "cuda"
model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").to(device)
tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
x_token = tokenizer("", prefix_text="織田信長は、", return_tensors="pt")
trainer_utils.set_seed(30)
input_ids = x_token.input_ids.to(device)
token_type_ids = x_token.token_type_ids.to(device)
gen_token = model.generate(input_ids, token_type_ids=token_type_ids, max_new_tokens=50)
tokenizer.decode(gen_token[0])
"織田信長は、政治・外交で数々の戦果を上げるが、1568年からは、いわゆる本能寺の変で細川晴元に暗殺される..."
Simultaneous Text Generation and Masked Language Modeling
from transformers import AutoModel, AutoTokenizer, trainer_utils
device = "cuda"
model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").to(device)
tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
masked_sentence = "武田信玄は、<|inputmask|>時代ファンならぜひ押さえ<|inputmask|>きたい名将の一人。"
x_token = tokenizer("", prefix_text=masked_sentence, return_tensors="pt")
trainer_utils.set_seed(30)
input_ids = x_token.input_ids.to(device)
token_type_ids = x_token.token_type_ids.to(device)
out_lm_token = model.generate(input_ids, token_type_ids=token_type_ids, max_new_tokens=50)
out_mlm_token = model(input_ids, token_type_ids=token_type_ids).logits.argmax(axis=-1)
tokenizer.decode(out_mlm_token[0])
"武田信玄は、戦国時代ファンならぜひ押さえておきたい名将の一人。"
tokenizer.decode(out_lm_token[0][input_ids.shape[1] :])
"武田氏の三代に渡った武田家のひとり\n甲斐市に住む、日本史上最大の戦国大名。..."
SwitchTransformers
Overview
The SwitchTransformers model was proposed in Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity by William Fedus, Barret Zoph, Noam Shazeer.
The Switch Transformer model uses a sparse T5 encoder-decoder architecture, where the MLPs are replaced by a Mixture of Experts (MoE). A routing mechanism (top-1 in this case) associates each token with one of the experts, where each expert is a dense MLP. While switch transformers have many more weights than their equivalent dense models, the sparsity allows better scaling and better finetuning performance at scale.
During a forward pass, only a fraction of the weights are used. The routing mechanism allows the model to select relevant weights on the fly which increases the model capacity without increasing the number of operations.
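As a rough illustration of the top-1 routing described above, the sketch below uses hypothetical tensor names and is not the library's router implementation:
import torch

def top1_route(hidden_states: torch.Tensor, router_weights: torch.Tensor):
    # hidden_states: (batch, seq_len, d_model); router_weights: (d_model, num_experts)
    router_logits = hidden_states @ router_weights              # (batch, seq_len, num_experts)
    router_probs = torch.softmax(router_logits.float(), dim=-1)
    expert_index = router_probs.argmax(dim=-1)                  # each token is sent to exactly one expert
    top1_prob = router_probs.max(dim=-1).values                 # scales the chosen expert's output
    return expert_index, top1_prob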
The abstract from the paper is the following:
In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely-activated model — with outrageous numbers of parameters — but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs and training instability — we address these with the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques help wrangle the instabilities and we show large sparse models may be trained, for the first time, with lower precision (bfloat16) formats. We design models based off T5-Base and T5-Large to obtain up to 7x increases in pre-training speed with the same computational resources. These improvements extend into multilingual settings where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of language models by pre-training up to trillion parameter models on the “Colossal Clean Crawled Corpus” and achieve a 4x speedup over the T5-XXL model.
Tips:
SwitchTransformers uses the T5Tokenizer, which can be loaded directly from each model’s repository.
The released weights are pretrained on an English masked language modeling task and should be finetuned (see the example below).
This model was contributed by Younes Belkada and Arthur Zucker.
The original code can be found here.
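A short sketch of the tips above; the checkpoint name google/switch-base-8 is assumed here for illustration:
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-8")

# The released weights were pretrained on masked language modeling, so sentinel
# tokens such as <extra_id_0> are filled in rather than free-form continuations.
input_ids = tokenizer("A <extra_id_0> walks into a bar.", return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))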
Resources
Translation task guide
Summarization task guide
SwitchTransformersConfig
class transformers.SwitchTransformersConfig
(
vocab_size = 32128
d_model = 768
d_kv = 64
d_ff = 2048
expert_capacity = 64
num_layers = 12
num_sparse_encoder_layers = 3
num_decoder_layers = 12
num_sparse_decoder_layers = 3
num_heads = 12
num_experts = 8
router_bias = False
router_jitter_noise = 0.01
router_dtype = 'float32'
router_ignore_padding_tokens = False
relative_attention_num_buckets = 32
relative_attention_max_distance = 128
dropout_rate = 0.1
layer_norm_epsilon = 1e-06
router_z_loss_coef = 0.001
router_aux_loss_coef = 0.001
initializer_factor = 1.0
feed_forward_proj = 'relu'
is_encoder_decoder = True
add_router_probs = False
use_cache = True
pad_token_id = 0
eos_token_id = 1
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 32128) —
Vocabulary size of the SwitchTransformers model. Defines the number of different tokens that can be
represented by the inputs_ids passed when calling SwitchTransformersModel.
d_model (int, optional, defaults to 768) —
Size of the encoder layers and the pooler layer.
d_kv (int, optional, defaults to 64) —
Size of the key, query, value projections per attention head. d_kv has to be equal to d_model // num_heads.
d_ff (int, optional, defaults to 2048) —
Size of the intermediate feed forward layer in each SwitchTransformersBlock.
expert_capacity (int, optional, defaults to 64) —
Number of tokens that can be stored in each expert. If set to 1, the model will behave like a regular
Transformer.
num_layers (int, optional, defaults to 12) —
Number of dense hidden layers in the Transformer encoder.
num_sparse_encoder_layers (int, optional, defaults to 3) —
Number of sparse (MoE) layers in the Transformer encoder.
num_decoder_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer decoder. Will use the same value as num_layers if not set.
num_sparse_decoder_layers (int, optional, defaults to 3) —
Number of sparse (MoE) layers in the Transformer decoder.
num_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
num_experts (int, optional, defaults to 8) —
Number of experts for each SwitchTransformer layer.
router_type (str, optional, defaults to "tokens_masked") —
Router type - choose between "tokens_masked", "tokens_scatter" and "experts_masked".
router_bias (bool, optional, defaults to False) —
Whether to add a bias to the router.
router_jitter_noise (float, optional, defaults to 0.01) —
Amount of noise to add to the router.
router_dtype (str, optional, defaults to "float32") —
The dtype used for the routers. It is preferable to keep the dtype to "float32" as specified in the
selective precision discussion in the paper.
router_ignore_padding_tokens (bool, optional, defaults to False) —
Whether to ignore padding tokens when routing.
relative_attention_num_buckets (int, optional, defaults to 32) —
The number of buckets to use for each attention layer.
relative_attention_max_distance (int, optional, defaults to 128) —
The maximum distance of the longer sequences for the bucket separation.
dropout_rate (float, optional, defaults to 0.1) —
The ratio for all dropout layers.
layer_norm_epsilon (float, optional, defaults to 1e-6) —
The epsilon used by the layer normalization layers.
router_z_loss_coef (float, optional, defaults to 0.001) —
The z loss factor for the total loss.
router_aux_loss_coef (float, optional, defaults to 0.001) —
The aux loss factor for the total loss.
initializer_factor (float, optional, defaults to 1) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
feed_forward_proj (str, optional, defaults to "relu") —
Type of feed forward layer to be used. Should be one of "relu" or "gated-gelu". SwitchTransformers v1.1
uses the "gated-gelu" feed forward projection. Original SwitchTransformers uses "relu".
add_router_probs (bool, optional, defaults to False) —
Whether to output router probabilities to compute router auxiliary loss.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a SwitchTransformersModel. It is used to
instantiate a SwitchTransformers model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the
SwitchTransformers google/switch-base-8 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
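As a short sketch mirroring the configuration examples elsewhere in these docs (class names are the ones documented on this page), a randomly initialized model can be built from a default configuration:
from transformers import SwitchTransformersConfig, SwitchTransformersModel
# Initializing a SwitchTransformers google/switch-base-8 style configuration
configuration = SwitchTransformersConfig()
# Initializing a model (with random weights) from that configuration
model = SwitchTransformersModel(configuration)
# Accessing the model configuration
configuration = model.config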
SwitchTransformersTop1Router
class transformers.SwitchTransformersTop1Router
(
config: SwitchTransformersConfig
)
Router that uses the "tokens choose top-1 expert" assignment.
This router uses the same mechanism as in Switch Transformer (https://arxiv.org/abs/2101.03961) and V-MoE
(https://arxiv.org/abs/2106.05974): tokens choose their top experts. Items are sorted by router_probs and then
routed to their choice of expert until the expert’s expert_capacity is reached. There is no guarantee that each
token is processed by an expert, or that each expert receives at least one token.
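The routing mechanism can be summarized with the self-contained sketch below. It illustrates the algorithm described above rather than the library's implementation: the function name is hypothetical, tokens are processed in sequence order instead of being sorted by router_probs first, and the capacity handling is simplified.
import torch
def top1_route(router_logits, expert_capacity):
    # router_logits: (batch_size, sequence_length, num_experts)
    router_probs = torch.softmax(router_logits, dim=-1)
    expert_index = router_probs.argmax(dim=-1)  # each token chooses its top-1 expert
    expert_mask = torch.nn.functional.one_hot(expert_index, router_logits.shape[-1])
    # keep a token only while the running count of tokens sent to its expert stays within expert_capacity
    position_in_expert = torch.cumsum(expert_mask, dim=1) * expert_mask
    expert_mask = expert_mask * (position_in_expert <= expert_capacity).long()
    return expert_mask, router_probs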
_compute_router_probabilities
(
hidden_states: Tensor
)
→
router_probabilities (torch.Tensor)
Parameters
hidden_states (torch.Tensor) —
(batch_size, sequence_length, hidden_dim) from which router probabilities are computed.
Returns
router_probabilities (torch.Tensor)
Tensor of shape (batch_size, sequence_length, num_experts) corresponding to the probabilities for each
token and expert. Used for routing tokens to experts.
router_logits (torch.Tensor):
Logits tensor of shape (batch_size, sequence_length, num_experts) corresponding to raw router logits.
This is used later for computing router z-loss.
Computes router probabilities from input hidden states.
forward
(
hidden_states: Tensor
)
Parameters
hidden_states (torch.Tensor) —
[num_groups, tokens_per_group, hidden_dim] inputs to send to experts.
Generic forward function for every Router class. Each Router expects the same input hidden states
(hidden_states), corresponding to the hidden states of each token, and an expert_capacity, corresponding to
the number of tokens the Router will send to each expert (some Routers may send only a few tokens to each expert).
Each Router works as follows: it takes the hidden states of each token and computes the router_probs and
router_logits from the router weights; this assigns to each token the raw probability of being assigned to
an expert. Each Router class then has to define its own _compute_routing_instructions.
SwitchTransformersSparseMLP
class transformers.SwitchTransformersSparseMLP
(
config: SwitchTransformersConfig
expert_class: Module = <class 'transformers.models.switch_transformers.modeling_switch_transformers.SwitchTransformersDenseActDense'>
)
Implementation of the Switch Transformers Sparse MLP module.
forward
(
hidden_states
)
In the correct order, a MoE layer does the following (see the sketch below):
1- It gets the router_mask from the router. The shape of the mask is (batch_size, sequence_length, num_experts)
and corresponds to the argmax of the router_probs. The probabilities are needed in the computation of the
hidden states: they are broadcast to the hidden state values (and can be interpreted as a scaling factor).
2- It dispatches the tokens to their associated experts: a classic for loop over the experts assigns the
corresponding hidden states to each expert.
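A minimal sketch of this dispatch-and-combine logic is shown below. It is illustrative only: the function name, the experts argument and the scaling step are simplified assumptions rather than the actual module code.
import torch
def sparse_mlp_forward(hidden_states, router_mask, router_probs, experts):
    # hidden_states: (batch_size, sequence_length, hidden_dim)
    # router_mask:   (batch_size, sequence_length, num_experts), one-hot argmax of router_probs
    # experts:       list of dense MLP modules, one per expert
    next_states = hidden_states.clone()
    for idx, expert in enumerate(experts):
        token_indices = router_mask[..., idx].bool()  # tokens routed to this expert
        next_states[token_indices] = expert(hidden_states[token_indices])
    # broadcast the probability of the chosen expert onto the hidden states (the scaling factor mentioned above)
    return router_probs.max(dim=-1, keepdim=True).values * next_states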
SwitchTransformersModel
class transformers.SwitchTransformersModel
(
config: SwitchTransformersConfig
)
Parameters
config (SwitchTransformersConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare SWITCH_TRANSFORMERS Model transformer outputting raw hidden-states without any specific head on top.
The SWITCH_TRANSFORMERS model was proposed in Switch Transformers: Scaling to Trillion Parameter Models with
Simple and Efficient Sparsity by William Fedus, Barret Zoph, and Noam Shazeer. It’s an encoder-decoder
T5-like model whose sparse feed-forward layers form a Mixture of Experts (MoE) architecture.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general
usage and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
output_router_logits: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqMoEModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. SWITCH_TRANSFORMERS is a model with relative position
embeddings so you should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
To know more on how to prepare input_ids for pretraining, take a look at SWITCH_TRANSFORMERS
Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
SWITCH_TRANSFORMERS uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
To know more on how to prepare decoder_input_ids for pretraining take a look at SWITCH_TRANSFORMERS
Training.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
[0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_router_logits (bool, optional) —
Whether or not to return the logits of all the routers. They are useful for computing the router loss, and
should not be returned during inference.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqMoEModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqMoEModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SwitchTransformersConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
decoder_router_logits (tuple(torch.FloatTensor), optional, returned when output_router_logits=True is passed or when config.add_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts).
Router logits of the decoder model, useful to compute the auxiliary loss for Mixture of Experts models.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
encoder_router_logits (tuple(torch.FloatTensor), optional, returned when output_router_logits=True is passed or when config.add_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts).
Router logits of the encoder model, useful to compute the auxiliary loss and the z_loss for the sparse
modules.
The SwitchTransformersModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, SwitchTransformersModel
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersModel.from_pretrained("google/switch-base-8")
input_ids = tokenizer(
... "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
# preprocess: Prepend decoder_input_ids with start token which is pad token for SwitchTransformersModel.
# This is not needed for torch's SwitchTransformersForConditionalGeneration as it does this internally using labels arg.
decoder_input_ids = model._shift_right(decoder_input_ids)
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
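To inspect the router outputs documented in the return section above, the same forward pass can be asked to return them explicitly (a small continuation of the example; the attribute names follow the Seq2SeqMoEModelOutput description):
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, output_router_logits=True)
encoder_router_logits = outputs.encoder_router_logits  # tuple of per-layer router outputs for the encoder
decoder_router_logits = outputs.decoder_router_logits  # tuple of per-layer router outputs for the decoder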
SwitchTransformersForConditionalGeneration
class transformers.SwitchTransformersForConditionalGeneration
(
config: SwitchTransformersConfig
)
Parameters
config (SwitchTransformersConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
SWITCH_TRANSFORMERS Model with a language modeling head on top.
The SWITCH_TRANSFORMERS model was proposed in Switch Transformers: Scaling to Trillion Parameter Models with
Simple and Efficient Sparsity by William Fedus, Barret Zoph, and Noam Shazeer. It’s an encoder-decoder
T5-like model whose sparse feed-forward layers form a Mixture of Experts (MoE) architecture.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general
usage and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
output_router_logits: typing.Optional[bool] = True
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqMoEOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. SWITCH_TRANSFORMERS is a model with relative position
embeddings so you should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
To know more on how to prepare input_ids for pretraining, take a look at SWITCH_TRANSFORMERS
Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
SWITCH_TRANSFORMERS uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
To know more on how to prepare decoder_input_ids for pretraining take a look at SWITCH_TRANSFORMERS
Training.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
[0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_router_logits (bool, optional) —
Whether or not to return the logits of all the routers. They are useful for computing the router loss, and
should not be returned during inference.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the sequence-to-sequence language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked); the loss is only computed for
labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_outputs.Seq2SeqMoEOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqMoEOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SwitchTransformersConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
decoder_router_logits (tuple(torch.FloatTensor), optional, returned when output_router_logits=True is passed or when config.add_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts).
Router logits of the decoder model, useful to compute the auxiliary loss for Mixture of Experts models.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
encoder_router_logits (tuple(torch.FloatTensor), optional, returned when output_router_logits=True is passed or when config.add_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts).
Router logits of the encoder model, useful to compute the auxiliary loss and z_loss for Mixture of Experts
models.
The SwitchTransformersForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-8")
# training
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids
outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss
logits = outputs.logits
# inference
input_ids = tokenizer(
... "summarize: studies have shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids # Batch size 1
outputs = model.generate(input_ids)
# . To, let’s say you have a dog. To summarize:
# Since the model has been trained on MLM, this will output gibberish
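The router logits returned with output_router_logits=True can be used to reproduce the auxiliary losses described in the paper. The sketch below is based on the paper's formulas and the documented logits shape (batch_size, sequence_length, num_experts), not on the library's internal loss code; the function names are hypothetical.
import torch
def router_z_loss(router_logits):
    # z-loss: mean of the squared log-sum-exp of the router logits
    return torch.logsumexp(router_logits, dim=-1).pow(2).mean()
def load_balancing_loss(router_logits):
    # auxiliary loss: num_experts * sum over experts of (fraction of tokens routed to an expert) * (mean router prob for that expert)
    num_experts = router_logits.shape[-1]
    router_probs = torch.softmax(router_logits, dim=-1)
    expert_index = router_probs.argmax(dim=-1)
    tokens_per_expert = torch.nn.functional.one_hot(expert_index, num_experts).float().mean(dim=(0, 1))
    mean_probs_per_expert = router_probs.mean(dim=(0, 1))
    return num_experts * torch.sum(tokens_per_expert * mean_probs_per_expert)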
SwitchTransformersEncoderModel
class transformers.SwitchTransformersEncoderModel
(
config: SwitchTransformersConfig
)
Parameters
config (SwitchTransformersConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare SWITCH_TRANSFORMERS Model transformer outputting encoder’s raw hidden-states without any specific head on top.
The SWITCH_TRANSFORMERS model was proposed in Switch Transformers: Scaling to Trillion Parameter Models with
Simple and Efficient Sparsity by William Fedus, Barret Zoph, and Noam Shazeer. It’s an encoder-decoder
T5-like model whose sparse feed-forward layers form a Mixture of Experts (MoE) architecture.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general
usage and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
output_router_logits: typing.Optional[bool] = True
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MoEModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. SWITCH_TRANSFORMERS is a model with relative position
embeddings so you should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
To know more on how to prepare input_ids for pretraining, take a look at SWITCH_TRANSFORMERS
Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_router_logits (bool, optional) —
Whether or not to return the logits of all the routers. They are useful for computing the router loss, and
should not be returned during inference.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.MoEModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MoEModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SwitchTransformersConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
router_probs (tuple(torch.FloatTensor), optional, returned when output_router_probs=True and config.add_router_probs=True is passed or when config.output_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts).
Raw router probabilities that are computed by MoE routers, these terms are used to compute the auxiliary
loss and the z_loss for Mixture of Experts models.
The SwitchTransformersEncoderModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, SwitchTransformersEncoderModel
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersEncoderModel.from_pretrained("google/switch-base-8")
input_ids = tokenizer(
... "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids # Batch size 1
outputs = model(input_ids=input_ids)
last_hidden_states = outputs.last_hidden_state
|
MRA
Overview
The MRA model was proposed in Multi Resolution Analysis (MRA) for Approximate Self-Attention by Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, and Vikas Singh.
The abstract from the paper is the following:
Transformers have emerged as a preferred model for many tasks in natural language processing and vision. Recent efforts on training and deploying Transformers more efficiently have identified many strategies to approximate the self-attention matrix, a key module in a Transformer architecture. Effective ideas include various prespecified sparsity patterns, low-rank basis expansions and combinations thereof. In this paper, we revisit classical Multiresolution Analysis (MRA) concepts such as Wavelets, whose potential value in this setting remains underexplored thus far. We show that simple approximations based on empirical feedback and design choices informed by modern hardware and implementation challenges, eventually yield a MRA-based approach for self-attention with an excellent performance profile across most criteria of interest. We undertake an extensive set of experiments and demonstrate that this multi-resolution scheme outperforms most efficient self-attention proposals and is favorable for both short and long sequences. Code is available at https://github.com/mlpen/mra-attention.
This model was contributed by novice03.
The original code can be found here.
MraConfig
class transformers.MraConfig
(
vocab_size = 50265
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 1
initializer_range = 0.02
layer_norm_eps = 1e-05
position_embedding_type = 'absolute'
block_per_row = 4
approx_mode = 'full'
initial_prior_first_n_blocks = 0
initial_prior_diagonal_n_blocks = 0
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50265) —
Vocabulary size of the Mra model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling MraModel.
hidden_size (int, optional, defaults to 768) —
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 1) —
The vocabulary size of the token_type_ids passed when calling MraModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query".
block_per_row (int, optional, defaults to 4) —
Used to set the budget for the high resolution scale.
approx_mode (str, optional, defaults to "full") —
Controls whether both low and high resolution approximations are used. Set to "full" for both low and
high resolution and "sparse" for only low resolution.
initial_prior_first_n_blocks (int, optional, defaults to 0) —
The initial number of blocks for which high resolution is used.
initial_prior_diagonal_n_blocks (int, optional, defaults to 0) —
The number of diagonal blocks for which high resolution is used.
This is the configuration class to store the configuration of a MraModel. It is used to instantiate an MRA
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Mra
uw-madison/mra-base-512-4 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import MraConfig, MraModel
# Initializing a Mra uw-madison/mra-base-512-4 style configuration
configuration = MraConfig()
# Initializing a model (with random weights) from the uw-madison/mra-base-512-4 style configuration
model = MraModel(configuration)
# Accessing the model configuration
configuration = model.config
MraModel
class transformers.MraModel
(
config
)
Parameters
config (MraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MRA Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MraConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The MraModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MraModel
import torch
tokenizer = AutoTokenizer.from_pretrained("uw-madison/mra-base-512-4")
model = MraModel.from_pretrained("uw-madison/mra-base-512-4")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
MraForMaskedLM
class transformers.MraForMaskedLM
(
config
)
Parameters
config (MraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MRA Model with a language modeling head on top.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MraConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MraForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MraForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("uw-madison/mra-base-512-4")
model = MraForMaskedLM.from_pretrained("uw-madison/mra-base-512-4")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
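Continuing the example, one might decode the prediction at the masked position and read off the loss (an optional follow-up; the exact outputs depend on the checkpoint):
tokenizer.decode(predicted_token_id)
round(outputs.loss.item(), 2)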
MraForSequenceClassification
class transformers.MraForSequenceClassification
(
config
)
Parameters
config (MraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MRA Model transformer with a sequence classification/regression head on top (a linear layer on top of
the pooled output) e.g. for GLUE tasks.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MraConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MraForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, MraForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("uw-madison/mra-base-512-4")
model = MraForSequenceClassification.from_pretrained("uw-madison/mra-base-512-4")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = MraForSequenceClassification.from_pretrained("uw-madison/mra-base-512-4", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, MraForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("uw-madison/mra-base-512-4")
model = MraForSequenceClassification.from_pretrained("uw-madison/mra-base-512-4", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = MraForSequenceClassification.from_pretrained(
    "uw-madison/mra-base-512-4", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
MraForMultipleChoice
class transformers.MraForMultipleChoice
(
config
)
Parameters
config (MraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MRA Model with a multiple choice classification head on top (a linear layer on top of
the pooled output and a softmax) e.g. for RocStories/SWAG tasks.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MraConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MraForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MraForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("uw-madison/mra-base-512-4")
model = MraForMultipleChoice.from_pretrained("uw-madison/mra-base-512-4")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
MraForTokenClassification
class transformers.MraForTokenClassification
(
config
)
Parameters
config (MraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MRA Model with a token classification head on top (a linear layer on top of
the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MraConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MraForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MraForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("uw-madison/mra-base-512-4")
model = MraForTokenClassification.from_pretrained("uw-madison/mra-base-512-4")
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word.
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
MraForQuestionAnswering
class transformers.MraForQuestionAnswering
(
config
)
Parameters
config (MraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MRA Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MraConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MraForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MraForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("uw-madison/mra-base-512-4")
model = MraForQuestionAnswering.from_pretrained("uw-madison/mra-base-512-4")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
XLSR-Wav2Vec2
Overview
The XLSR-Wav2Vec2 model was proposed in Unsupervised Cross-Lingual Representation Learning For Speech Recognition by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael
Auli.
The abstract from the paper is the following:
This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw
waveform of speech in multiple languages. We build on wav2vec 2.0 which is trained by solving a contrastive task over
masked latent speech representations and jointly learns a quantization of the latents shared across languages. The
resulting model is fine-tuned on labeled data and experiments show that cross-lingual pretraining significantly
outperforms monolingual pretraining. On the CommonVoice benchmark, XLSR shows a relative phoneme error rate reduction
of 72% compared to the best known results. On BABEL, our approach improves word error rate by 16% relative compared to
a comparable system. Our approach enables a single multilingual speech recognition model which is competitive to strong
individual models. Analysis shows that the latent discrete speech representations are shared across languages with
increased sharing for related languages. We hope to catalyze research in low-resource speech understanding by releasing
XLSR-53, a large model pretrained in 53 languages.
Tips:
XLSR-Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
The XLSR-Wav2Vec2 model was trained using connectionist temporal classification (CTC), so the model output has to be
decoded using Wav2Vec2CTCTokenizer (see the decoding sketch below).
XLSR-Wav2Vec2’s architecture is based on the Wav2Vec2 model, so one can refer to Wav2Vec2’s documentation page.
The original code can be found here.
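As a rough illustration of the decoding tip above, the following sketch (not part of the original documentation) runs greedy CTC decoding with a CTC-fine-tuned XLSR-Wav2Vec2 checkpoint. The checkpoint name is used purely as an example, and the silent dummy waveform stands in for real speech sampled at 16 kHz.
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Illustrative checkpoint: any XLSR-Wav2Vec2 model fine-tuned with a CTC head works the same way.
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-xlsr-53-german")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-xlsr-53-german")
# Placeholder for one second of raw 16 kHz audio; replace with a real speech waveform.
waveform = np.zeros(16_000, dtype=np.float32)
inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Greedy CTC decoding: most likely token per frame, then collapsed into text by the tokenizer.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)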
XGLM
Overview
The XGLM model was proposed in Few-shot Learning with Multilingual Language Models
by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal,
Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O’Horo,
Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
The abstract from the paper is the following:
Large-scale autoregressive language models such as GPT-3 are few-shot learners that can perform a wide range of language
tasks without fine-tuning. While these models are known to be able to jointly represent many different languages,
their training data is dominated by English, potentially limiting their cross-lingual generalization.
In this work, we train multilingual autoregressive language models on a balanced corpus covering a diverse set of languages,
and study their few- and zero-shot learning capabilities in a wide range of tasks. Our largest model with 7.5 billion parameters
sets new state of the art in few-shot learning in more than 20 representative languages, outperforming GPT-3 of comparable size
in multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in 4-shot settings)
and natural language inference (+5.4% in each of 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark,
our model outperforms GPT-3 on 171 out of 182 translation directions with 32 training examples, while surpassing the
official supervised baseline in 45 directions. We present a detailed analysis of where the model succeeds and fails,
showing in particular that it enables cross-lingual in-context learning on some tasks, while there is still room for improvement
on surface form robustness and adaptation to tasks that do not have a natural cloze form. Finally, we evaluate our models
in social value tasks such as hate speech detection in five languages and find it has limitations similar to comparable sized GPT-3 models.
This model was contributed by Suraj. The original code can be found here.
Documentation resources
Causal language modeling task guide
XGLMConfig
class transformers.XGLMConfig
(
vocab_size = 256008
max_position_embeddings = 2048
d_model = 1024
ffn_dim = 4096
num_layers = 24
attention_heads = 16
activation_function = 'gelu'
dropout = 0.1
attention_dropout = 0.1
activation_dropout = 0.0
layerdrop = 0.0
init_std = 0.02
scale_embedding = True
use_cache = True
decoder_start_token_id = 2
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 256008) —
Vocabulary size of the XGLM model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling XGLMModel or FlaxXGLMModel.
max_position_embeddings (int, optional, defaults to 2048) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
d_model (int, optional, defaults to 1024) —
Dimension of the layers and the pooler layer.
ffn_dim (int, optional, defaults to 4096) —
Dimension of the “intermediate” (often named feed-forward) layer in the decoder.
num_layers (int, optional, defaults to 24) —
Number of hidden layers in the Transformer decoder.
attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer decoder.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, decoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
scale_embedding (bool, optional, defaults to True) —
Scale embeddings by dividing by sqrt(d_model).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of an XGLMModel. It is used to instantiate an XGLM
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the XGLM
facebook/xglm-564M architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import XGLMModel, XGLMConfig
# Initializing an XGLM facebook/xglm-564M style configuration
configuration = XGLMConfig()
# Initializing a model from the facebook/xglm-564M style configuration
model = XGLMModel(configuration)
# Accessing the model configuration
configuration = model.config
XGLMTokenizer
class transformers.XGLMTokenizer
(
vocab_file
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) —
Additional special tokens used by the tokenizer.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assumes that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
sp_model (SentencePieceProcessor) —
The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Adapted from RobertaTokenizer and XLNetTokenizer. Based on
SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
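A minimal usage sketch, added here as an illustration rather than part of the original reference, assuming the facebook/xglm-564M checkpoint, which ships the SentencePiece vocabulary this tokenizer expects:
from transformers import XGLMTokenizer
tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-564M")
# Encode a string to ids (special tokens are added by default) and decode it back.
encoded = tokenizer("Hello world", return_tensors="pt")
decoded = tokenizer.decode(encoded["input_ids"][0])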
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. An XLM-RoBERTa sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
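For illustration, a hedged sketch of calling this method directly on pre-tokenized ids (the checkpoint name is assumed, as above); the returned lists contain the ids with the special tokens described above added:
from transformers import XGLMTokenizer
tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-564M")
ids_a = tokenizer.encode("Hello", add_special_tokens=False)
ids_b = tokenizer.encode("World", add_special_tokens=False)
# Single-sequence and sequence-pair inputs with special tokens added.
single = tokenizer.build_inputs_with_special_tokens(ids_a)
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)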
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
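A short illustrative sketch of reading this mask back from an already-built input (same assumed checkpoint):
from transformers import XGLMTokenizer
tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-564M")
ids = tokenizer.encode("Hello world")  # encode() adds the special tokens by default
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
# mask[i] is 1 where ids[i] is a special token and 0 for a regular sequence token.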
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does
not make use of token type ids, therefore a list of zeros is returned.
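An illustrative sketch (same assumed checkpoint); as described above, the returned list is all zeros:
from transformers import XGLMTokenizer
tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-564M")
ids_a = tokenizer.encode("Hello", add_special_tokens=False)
ids_b = tokenizer.encode("World", add_special_tokens=False)
# All zeros, since XGLM does not use token type ids.
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)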
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
XGLMTokenizerFast
class transformers.XGLMTokenizerFast
(
vocab_file = None
tokenizer_file = None
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) —
Additional special tokens used by the tokenizer.
Construct a “fast” XGLM tokenizer (backed by HuggingFace’s tokenizers library). Adapted from RobertaTokenizer
and XLNetTokenizer. Based on
BPE.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
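A small sketch of a fast-tokenizer-only feature, offset mappings; this is an illustration and the checkpoint name is an assumption:
from transformers import XGLMTokenizerFast
tokenizer = XGLMTokenizerFast.from_pretrained("facebook/xglm-564M")
# Offset mappings (character spans per token) are only available with fast tokenizers.
encoded = tokenizer("Hello world", return_offsets_mapping=True)
input_ids = encoded["input_ids"]
offsets = encoded["offset_mapping"]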
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. An XLM-RoBERTa sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does
not make use of token type ids, therefore a list of zeros is returned.
XGLMModel
class transformers.XGLMModel
(
config: XGLMConfig
embed_tokens: typing.Optional[torch.nn.modules.sparse.Embedding] = None
)
Parameters
config (XGLMConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
config — XGLMConfig
embed_tokens (nn.Embedding) — output embedding
The bare XGLM Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Transformer decoder consisting of config.num_layers layers. Each layer is a XGLMDecoderLayer
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
encoder_hidden_states (torch.FloatTensor of shape (batch_size, encoder_sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of
the decoder.
encoder_attention_mask (torch.LongTensor of shape (batch_size, encoder_sequence_length), optional) —
Mask to avoid performing cross-attention on padding tokens indices of encoder input_ids. Mask values
selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.Tensor of shape (num_layers, attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_layers, attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XGLMConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The XGLMModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XGLMModel
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = XGLMModel.from_pretrained("facebook/xglm-564M")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
XGLMForCausalLM
class transformers.XGLMForCausalLM
(
config
)
Parameters
config (XGLMConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The XGLM Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
encoder_hidden_states (torch.FloatTensor of shape (batch_size, encoder_sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of
the decoder.
encoder_attention_mask (torch.LongTensor of shape (batch_size, encoder_sequence_length), optional) —
Mask to avoid performing cross-attention on padding tokens indices of encoder input_ids. Mask values
selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.Tensor of shape (num_layers, attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_layers, attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XGLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attention weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The XGLMForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, XGLMForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = XGLMForCausalLM.from_pretrained("facebook/xglm-564M")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits
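To make the past_key_values mechanism described above concrete, here is a minimal sketch of two cached forward passes: the first processes the full prompt with use_cache=True, the second feeds only the newly chosen token together with the returned cache. In practice you would normally call model.generate(), which handles this bookkeeping (plus attention masks and stopping criteria) for you; treat the snippet as an illustration, not a complete decoding loop.
# Minimal sketch of cached decoding with XGLMForCausalLM (greedy next-token choice assumed)
from transformers import AutoTokenizer, XGLMForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = XGLMForCausalLM.from_pretrained("facebook/xglm-564M")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, use_cache=True)  # full prompt; key/value states are returned
next_token = outputs.logits[:, -1].argmax(dim=-1, keepdim=True)  # shape (batch_size, 1)

# Second step: pass only the new token plus the cached key/value states
outputs = model(input_ids=next_token, past_key_values=outputs.past_key_values, use_cache=True)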
TFXGLMModel
class transformers.TFXGLMModel
(
*args
**kwargs
)
Parameters
config (XGLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
embed_tokens ([TFSharedEmbeddings]) — Output embedding.
The bare XGLM Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can
just pass inputs like you would to any other Python function. A short sketch of the three call styles is shown below.
Transformer decoder consisting of config.num_layers layers. Each layer is a TFXGLMDecoderLayer.
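As a concrete illustration of the three input formats listed above, the sketch below calls TFXGLMModel on the same tokenized batch with keyword arguments, a list, and a dictionary; all three are equivalent, so pick whichever is most convenient.
from transformers import AutoTokenizer, TFXGLMModel

tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = TFXGLMModel.from_pretrained("facebook/xglm-564M")
batch = tokenizer("Hello, my dog is cute", return_tensors="tf")

out_kwargs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])  # keyword arguments
out_list = model([batch["input_ids"], batch["attention_mask"]])  # list, in the order given in the docstring
out_dict = model({"input_ids": batch["input_ids"], "attention_mask": batch["attention_mask"]})  # dict keyed by input names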
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
cross_attn_head_mask: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
**kwargs: Any
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
encoder_hidden_states (tf.Tensor of shape (batch_size, encoder_sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of
the decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, encoder_sequence_length), optional) —
Mask to avoid performing cross-attention on padding tokens indices of encoder input_ids. Mask values
selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (tf.Tensor of shape (num_layers, attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (num_layers, attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.num_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training, True during generation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XGLMConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The TFXGLMModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXGLMModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = TFXGLMModel.from_pretrained("facebook/xglm-564M")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFXGLMForCausalLM
class transformers.TFXGLMForCausalLM
(
*args
**kwargs
)
Parameters
config (XGLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The XGLM Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can
just pass inputs like you would to any other Python function.
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
cross_attn_head_mask: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
labels: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
**kwargs: Any
)
→
transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
encoder_hidden_states (tf.Tensor of shape (batch_size, encoder_sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of
the decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, encoder_sequence_length), optional) —
Mask to avoid performing cross-attention on padding tokens indices of encoder input_ids. Mask values
selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (tf.Tensor of shape (num_layers, attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (num_layers, attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.num_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training, True during generation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XGLMConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The TFXGLMForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXGLMForCausalLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = TFXGLMForCausalLM.from_pretrained("facebook/xglm-564M")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
logits = outputs.logits
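Because the labels are shifted inside the model (see the labels parameter above), you can pass the input ids themselves as labels to obtain a language-modeling loss. A minimal sketch, continuing the example above:
outputs = model(inputs, labels=inputs["input_ids"])  # labels are shifted internally
loss = outputs.loss  # shape (n,), one value per non-masked label
logits = outputs.logits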
FlaxXGLMModel
class transformers.FlaxXGLMModel
(
config: XGLMConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (XGLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The bare XGLM Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
encoder_hidden_states: typing.Optional[jax.Array] = None
encoder_attention_mask: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
past_key_values: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XGLMConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The FlaxXGLMPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxXGLMModel
tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = FlaxXGLMModel.from_pretrained("facebook/xglm-564M")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
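The dtype argument described above only changes the dtype of the computation; the parameters keep their original dtype unless you cast them with to_fp16() or to_bf16(). A minimal sketch, assuming a GPU where half precision is worthwhile:
import jax.numpy as jnp
from transformers import FlaxXGLMModel

# Run the computation in float16
model = FlaxXGLMModel.from_pretrained("facebook/xglm-564M", dtype=jnp.float16)

# Optionally cast the parameters to float16 as well
model.params = model.to_fp16(model.params)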
FlaxXGLMForCausalLM
class transformers.FlaxXGLMForCausalLM
(
config: XGLMConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (XGLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The XGLM Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
encoder_hidden_states: typing.Optional[jax.Array] = None
encoder_attention_mask: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
past_key_values: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XGLMConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value
states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting.
Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The FlaxXGLMPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxXGLMForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = FlaxXGLMForCausalLM.from_pretrained("facebook/xglm-564M")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
# retrieve logits for the next token
next_token_logits = outputs.logits[:, -1]
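Continuing the example above, a minimal greedy step turns the next-token logits into an actual token; for full generation you would normally rely on model.generate() rather than writing the loop yourself.
import jax.numpy as jnp

next_token_id = jnp.argmax(next_token_logits, axis=-1)  # greedy choice per sequence in the batch
print(tokenizer.decode([int(next_token_id[0])]))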
VideoMAE
Overview
The VideoMAE model was proposed in VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
VideoMAE extends masked autoencoders (MAE) to video, claiming state-of-the-art performance on several video classification benchmarks.
The abstract from the paper is the following:
Pre-training video transformers on extra large-scale datasets is generally required to achieve premier performance on relatively small datasets. In this paper, we show that video masked autoencoders (VideoMAE) are data-efficient learners for self-supervised video pre-training (SSVP). We are inspired by the recent ImageMAE and propose customized video tube masking and reconstruction. These simple designs turn out to be effective for overcoming information leakage caused by the temporal correlation during video reconstruction. We obtain three important findings on SSVP: (1) An extremely high proportion of masking ratio (i.e., 90% to 95%) still yields favorable performance of VideoMAE. The temporally redundant video content enables higher masking ratio than that of images. (2) VideoMAE achieves impressive results on very small datasets (i.e., around 3k-4k videos) without using any extra data. This is partially ascribed to the challenging task of video reconstruction to enforce high-level structure learning. (3) VideoMAE shows that data quality is more important than data quantity for SSVP. Domain shift between pre-training and target datasets are important issues in SSVP. Notably, our VideoMAE with the vanilla ViT backbone can achieve 83.9% on Kinetics-400, 75.3% on Something-Something V2, 90.8% on UCF101, and 61.1% on HMDB51 without using any extra data.
Tips:
One can use VideoMAEImageProcessor to prepare videos for the model. It will resize + normalize all frames of a video for you.
VideoMAEForPreTraining includes the decoder on top for self-supervised pre-training.
VideoMAE pre-training. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with VideoMAE. If
you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll
review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Video classification
A notebook that shows how
to fine-tune a VideoMAE model on a custom dataset.
Video classification task guide
A 🤗 Space showing how to perform inference with a video classification model.
VideoMAEConfig
class transformers.VideoMAEConfig
(
image_size = 224
patch_size = 16
num_channels = 3
num_frames = 16
tubelet_size = 2
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
initializer_range = 0.02
layer_norm_eps = 1e-12
qkv_bias = True
use_mean_pooling = True
decoder_num_attention_heads = 6
decoder_hidden_size = 384
decoder_num_hidden_layers = 4
decoder_intermediate_size = 1536
norm_pix_loss = True
**kwargs
)
Parameters
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
num_frames (int, optional, defaults to 16) —
The number of frames in each video.
tubelet_size (int, optional, defaults to 2) —
The temporal size of each tubelet, i.e. the number of frames a single tubelet spans.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
use_mean_pooling (bool, optional, defaults to True) —
Whether to mean pool the final hidden states instead of using the final hidden state of the [CLS] token.
decoder_num_attention_heads (int, optional, defaults to 6) —
Number of attention heads for each attention layer in the decoder.
decoder_hidden_size (int, optional, defaults to 384) —
Dimensionality of the decoder.
decoder_num_hidden_layers (int, optional, defaults to 4) —
Number of hidden layers in the decoder.
decoder_intermediate_size (int, optional, defaults to 1536) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the decoder.
norm_pix_loss (bool, optional, defaults to True) —
Whether to normalize the target patch pixels.
This is the configuration class to store the configuration of a VideoMAEModel. It is used to instantiate a
VideoMAE model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the VideoMAE
MCG-NJU/videomae-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import VideoMAEConfig, VideoMAEModel
# Initializing a VideoMAE videomae-base style configuration
configuration = VideoMAEConfig()
# Randomly initializing a model from the configuration
model = VideoMAEModel(configuration)
# Accessing the model configuration
configuration = model.config
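Any of the parameters documented above can be overridden when constructing the configuration. For example, a hypothetical variant that works on 32-frame clips with tubelets spanning 4 frames could be sketched as:
from transformers import VideoMAEConfig, VideoMAEModel

# Hypothetical variant: 32 input frames, each tubelet covering 4 frames
configuration = VideoMAEConfig(num_frames=32, tubelet_size=4)
model = VideoMAEModel(configuration)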
VideoMAEFeatureExtractor
class transformers.VideoMAEFeatureExtractor
(
*args
**kwargs
)
__call__
(
images
**kwargs
)
Preprocess an image or a batch of images.
VideoMAEImageProcessor
class transformers.VideoMAEImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_center_crop: bool = True
crop_size: typing.Dict[str, int] = None
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the
do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 224}) —
Size of the output image after resizing. The shortest edge of the image will be resized to
size["shortest_edge"] while maintaining the aspect ratio of the original image. Can be overridden by the
size parameter in the preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the
preprocess method.
do_center_crop (bool, optional, defaults to True) —
Whether to center crop the image to the specified crop_size. Can be overridden by the do_center_crop
parameter in the preprocess method.
crop_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) —
Size of the image after applying the center crop. Can be overridden by the crop_size parameter in the
preprocess method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Defines the scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter
in the preprocess method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
Constructs a VideoMAE image processor.
preprocess
(
videos: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: bool = None
size: typing.Dict[str, int] = None
resample: Resampling = None
do_center_crop: bool = None
crop_size: typing.Dict[str, int] = None
do_rescale: bool = None
rescale_factor: float = None
do_normalize: bool = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
videos (ImageInput) —
Video or batch of videos to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after applying resize.
resample (PILImageResampling, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only
has an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the image after applying the center crop.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values to the range [0, 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: Use the inferred channel dimension format of the input image.
Preprocess a video or a batch of videos.
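A minimal sketch of running the processor on a dummy clip of 16 random frames, just to check the output shape; real videos would be decoded first, as in the model examples below.
import numpy as np
from transformers import VideoMAEImageProcessor

image_processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-base")

video = list(np.random.randint(0, 256, (16, 3, 224, 224)))  # 16 dummy frames, channels first
inputs = image_processor(video, return_tensors="pt")
print(inputs.pixel_values.shape)  # (batch_size, num_frames, num_channels, height, width) = (1, 16, 3, 224, 224)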
VideoMAEModel
class transformers.VideoMAEModel
(
config
)
Parameters
config (VideoMAEConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare VideoMAE Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: FloatTensor
bool_masked_pos: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
VideoMAEImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, sequence_length), optional) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0). Each video in the
batch must have the same number of masked patches. If None, then all patches are considered. Sequence
length is (num_frames // tubelet_size) * (image_size // patch_size) ** 2.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (VideoMAEConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The VideoMAEModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
import av
import numpy as np
from transformers import AutoImageProcessor, VideoMAEModel
from huggingface_hub import hf_hub_download
np.random.seed(0)
def read_video_pyav(container, indices):
    '''
    Decode the video with PyAV decoder.
    Args:
        container (`av.container.input.InputContainer`): PyAV container.
        indices (`List[int]`): List of frame indices to decode.
    Returns:
        result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
    '''
    frames = []
    container.seek(0)
    start_index = indices[0]
    end_index = indices[-1]
    for i, frame in enumerate(container.decode(video=0)):
        if i > end_index:
            break
        if i >= start_index and i in indices:
            frames.append(frame)
    return np.stack([x.to_ndarray(format="rgb24") for x in frames])

def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    converted_len = int(clip_len * frame_sample_rate)
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
    return indices

# video clip consists of 300 frames (10 seconds at 30 FPS)
file_path = hf_hub_download(
    repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
)
container = av.open(file_path)
# sample 16 frames
indices = sample_frame_indices(clip_len=16, frame_sample_rate=1, seg_len=container.streams.video[0].frames)
video = read_video_pyav(container, indices)
image_processor = AutoImageProcessor.from_pretrained("MCG-NJU/videomae-base")
model = VideoMAEModel.from_pretrained("MCG-NJU/videomae-base")
# prepare video for the model
inputs = image_processor(list(video), return_tensors="pt")
# forward pass
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 1568, 768]
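The sequence length of 1568 in the output above follows from the formula given for bool_masked_pos, (num_frames // tubelet_size) * (image_size // patch_size) ** 2, evaluated with the default configuration values:
num_frames, tubelet_size = 16, 2
image_size, patch_size = 224, 16

seq_length = (num_frames // tubelet_size) * (image_size // patch_size) ** 2
print(seq_length)  # 8 * 14**2 = 1568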
VideoMAEForPreTraining
class transformers.VideoMAEForPreTraining
(
config
)
Parameters
config (VideoMAEConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The VideoMAE Model transformer with the decoder on top for self-supervised pre-training.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: FloatTensor
bool_masked_pos: BoolTensor
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.videomae.modeling_videomae.VideoMAEForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
VideoMAEImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, sequence_length)) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0). Each video in the
batch must have the same number of masked patches. Sequence length is (num_frames // tubelet_size) * (image_size // patch_size) ** 2.
Returns
transformers.models.videomae.modeling_videomae.VideoMAEForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.videomae.modeling_videomae.VideoMAEForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (VideoMAEConfig) and inputs.
loss (torch.FloatTensor of shape (1,)) — Pixel reconstruction loss.
logits (torch.FloatTensor of shape (batch_size, patch_size ** 2 * num_channels)) — Pixel reconstruction logits.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The VideoMAEForPreTraining forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, VideoMAEForPreTraining
import numpy as np
import torch
num_frames = 16
video = list(np.random.randint(0, 256, (num_frames, 3, 224, 224)))
image_processor = AutoImageProcessor.from_pretrained("MCG-NJU/videomae-base")
model = VideoMAEForPreTraining.from_pretrained("MCG-NJU/videomae-base")
pixel_values = image_processor(video, return_tensors="pt").pixel_values
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss = outputs.loss
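The example above masks each patch position independently with probability 0.5. As a hedged sketch (reusing pixel_values, model and seq_length from the example), the mask can instead be built with a fixed masking ratio; the 0.9 value below is an assumption inspired by the high masking ratios typically used for VideoMAE pre-training, not something prescribed by this API:
# sketch: mask a fixed fraction of patch positions instead of independent coin flips
mask_ratio = 0.9  # assumed masking ratio, adjust as needed
num_masked = int(mask_ratio * seq_length)
# mask the positions with the largest random noise -> exactly num_masked masked patches
noise = torch.rand(1, seq_length)
ids_sorted = torch.argsort(noise, dim=1, descending=True)
bool_masked_pos = torch.zeros(1, seq_length, dtype=torch.bool)
bool_masked_pos[0, ids_sorted[0, :num_masked]] = True
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss = outputs.loss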
VideoMAEForVideoClassification
class transformers.VideoMAEForVideoClassification(config)
Parameters
config (VideoMAEConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
VideoMAE Model transformer with a video classification head on top (a linear layer on top of the average pooled hidden
states of all tokens) e.g. for ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward(
    pixel_values: typing.Optional[torch.Tensor] = None,
    head_mask: typing.Optional[torch.Tensor] = None,
    labels: typing.Optional[torch.Tensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
VideoMAEImageProcessor.__call__() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (VideoMAEConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states
(also called feature maps) of the model at the output of each stage.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The VideoMAEForVideoClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
import av
import torch
import numpy as np
from transformers import AutoImageProcessor, VideoMAEForVideoClassification
from huggingface_hub import hf_hub_download
np.random.seed(0)
def read_video_pyav(container, indices):
    '''
    Decode the video with PyAV decoder.
    Args:
        container (`av.container.input.InputContainer`): PyAV container.
        indices (`List[int]`): List of frame indices to decode.
    Returns:
        result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
    '''
    frames = []
    container.seek(0)
    start_index = indices[0]
    end_index = indices[-1]
    for i, frame in enumerate(container.decode(video=0)):
        if i > end_index:
            break
        if i >= start_index and i in indices:
            frames.append(frame)
    return np.stack([x.to_ndarray(format="rgb24") for x in frames])

def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    converted_len = int(clip_len * frame_sample_rate)
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
    return indices

# video clip consists of 300 frames (10 seconds at 30 FPS)
file_path = hf_hub_download(
    repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
)
container = av.open(file_path)
# sample 16 frames
indices = sample_frame_indices(clip_len=16, frame_sample_rate=1, seg_len=container.streams.video[0].frames)
video = read_video_pyav(container, indices)
image_processor = AutoImageProcessor.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")
model = VideoMAEForVideoClassification.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")
inputs = image_processor(list(video), return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
# model predicts one of the 400 Kinetics-400 classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
eating spaghetti
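To look beyond the top-1 prediction, the logits from the example can be turned into probabilities; a small follow-up sketch (reusing logits and model from above):
# top-5 Kinetics-400 classes with their probabilities
probs = logits.softmax(-1)[0]
top5 = probs.topk(5)
for prob, idx in zip(top5.values.tolist(), top5.indices.tolist()):
    print(f"{model.config.id2label[idx]}: {prob:.3f}")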
LiLT
Overview
The LiLT model was proposed in LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding by Jiapeng Wang, Lianwen Jin, Kai Ding.
LiLT makes it possible to combine any pre-trained RoBERTa text encoder with a lightweight Layout Transformer, enabling LayoutLM-like document understanding for many
languages.
The abstract from the paper is the following:
Structured document understanding has attracted considerable attention and made significant progress recently, owing to its crucial role in intelligent document processing. However, most existing related models can only deal with the document data of specific language(s) (typically English) included in the pre-training collection, which is extremely limited. To address this issue, we propose a simple yet effective Language-independent Layout Transformer (LiLT) for structured document understanding. LiLT can be pre-trained on the structured documents of a single language and then directly fine-tuned on other languages with the corresponding off-the-shelf monolingual/multilingual pre-trained textual models. Experimental results on eight languages have shown that LiLT can achieve competitive or even superior performance on diverse widely-used downstream benchmarks, which enables language-independent benefit from the pre-training of document layout structure.
Tips:
To combine the Language-Independent Layout Transformer with a new RoBERTa checkpoint from the hub, refer to this guide.
The script will result in config.json and pytorch_model.bin files being stored locally. After doing this, one can do the following (assuming you’re logged in with your HuggingFace account):
from transformers import LiltModel
model = LiltModel.from_pretrained("path_to_your_files")
model.push_to_hub("name_of_repo_on_the_hub")
When preparing data for the model, make sure to use the token vocabulary that corresponds to the RoBERTa checkpoint you combined with the Layout Transformer.
As lilt-roberta-en-base uses the same vocabulary as LayoutLMv3, one can use LayoutLMv3TokenizerFast to prepare data for the model (see the sketch below).
The same holds for lilt-xlm-roberta-base: one can use LayoutXLMTokenizerFast for that model.
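As a minimal sketch of the data-preparation tip above (the words and boxes below are hypothetical and assumed to be normalized already; AutoTokenizer is expected to resolve to a LayoutLMv3-style fast tokenizer for this checkpoint, as in the examples further down):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")

# hypothetical word-level inputs; boxes are (x0, y0, x1, y1) on a 0-1000 scale
words = ["Invoice", "number:", "12345"]
boxes = [[48, 84, 156, 108], [160, 84, 254, 108], [260, 84, 330, 108]]

encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
print(encoding.keys())  # expect input_ids, attention_mask and bbox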
LiLT architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LiLT.
Demo notebooks for LiLT can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
LiltConfig
class transformers.LiltConfig(
    vocab_size = 30522,
    hidden_size = 768,
    num_hidden_layers = 12,
    num_attention_heads = 12,
    intermediate_size = 3072,
    hidden_act = 'gelu',
    hidden_dropout_prob = 0.1,
    attention_probs_dropout_prob = 0.1,
    max_position_embeddings = 512,
    type_vocab_size = 2,
    initializer_range = 0.02,
    layer_norm_eps = 1e-12,
    pad_token_id = 0,
    position_embedding_type = 'absolute',
    classifier_dropout = None,
    channel_shrink_ratio = 4,
    max_2d_position_embeddings = 1024,
    **kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the LiLT model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling LiltModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer. Should be a multiple of 24.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling LiltModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
channel_shrink_ratio (int, optional, defaults to 4) —
The shrink ratio compared to the hidden_size for the channel dimension of the layout embeddings.
max_2d_position_embeddings (int, optional, defaults to 1024) —
The maximum value that the 2D position embedding might ever be used with. Typically set this to something
large just in case (e.g., 1024).
This is the configuration class to store the configuration of a LiltModel. It is used to instantiate a LiLT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the LiLT
SCUT-DLVCLab/lilt-roberta-en-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import LiltConfig, LiltModel
# Initializing a LiLT SCUT-DLVCLab/lilt-roberta-en-base style configuration
configuration = LiltConfig()
# Randomly initializing a model from the SCUT-DLVCLab/lilt-roberta-en-base style configuration
model = LiltModel(configuration)
# Accessing the model configuration
configuration = model.config
LiltModel
class transformers.LiltModel(config, add_pooling_layer = True)
Parameters
config (LiltConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare LiLT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward(
    input_ids: typing.Optional[torch.Tensor] = None,
    bbox: typing.Optional[torch.Tensor] = None,
    attention_mask: typing.Optional[torch.Tensor] = None,
    token_type_ids: typing.Optional[torch.Tensor] = None,
    position_ids: typing.Optional[torch.Tensor] = None,
    head_mask: typing.Optional[torch.Tensor] = None,
    inputs_embeds: typing.Optional[torch.Tensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (torch.LongTensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence tokens. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1,
y1) represents the position of the lower right corner. See Overview for normalization.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LiltConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LiltModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, AutoModel
from datasets import load_dataset
tokenizer = AutoTokenizer.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")
model = AutoModel.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")
dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train")
example = dataset[0]
words = example["tokens"]
boxes = example["bboxes"]
encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
outputs = model(**encoding)
last_hidden_states = outputs.last_hidden_state
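The bounding boxes in the dataset above are already normalized. When starting from raw pixel coordinates, the bbox input described above expects coordinates on a 0-1000 scale; a hedged helper sketch following the convention used by the LayoutLM family (box is a hypothetical (x0, y0, x1, y1) pixel box, width/height the page size):
def normalize_bbox(box, width, height):
    # scale pixel coordinates to the 0-1000 range expected by the model
    x0, y0, x1, y1 = box
    return [
        int(1000 * x0 / width),
        int(1000 * y0 / height),
        int(1000 * x1 / width),
        int(1000 * y1 / height),
    ]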
LiltForSequenceClassification
class transformers.LiltForSequenceClassification(config)
Parameters
config (LiltConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LiLT Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward(
    input_ids: typing.Optional[torch.LongTensor] = None,
    bbox: typing.Optional[torch.Tensor] = None,
    attention_mask: typing.Optional[torch.FloatTensor] = None,
    token_type_ids: typing.Optional[torch.LongTensor] = None,
    position_ids: typing.Optional[torch.LongTensor] = None,
    head_mask: typing.Optional[torch.FloatTensor] = None,
    inputs_embeds: typing.Optional[torch.FloatTensor] = None,
    labels: typing.Optional[torch.LongTensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (torch.LongTensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence tokens. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1,
y1) represents the position of the lower right corner. See Overview for normalization.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LiltConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LiltForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from datasets import load_dataset
tokenizer = AutoTokenizer.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")
model = AutoModelForSequenceClassification.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")
dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train")
example = dataset[0]
words = example["tokens"]
boxes = example["bboxes"]
encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
outputs = model(**encoding)
predicted_class_idx = outputs.logits.argmax(-1).item()
predicted_class = model.config.id2label[predicted_class_idx]
LiltForTokenClassification
class transformers.LiltForTokenClassification(config)
Parameters
config (LiltConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Lilt Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward(
    input_ids: typing.Optional[torch.LongTensor] = None,
    bbox: typing.Optional[torch.LongTensor] = None,
    attention_mask: typing.Optional[torch.FloatTensor] = None,
    token_type_ids: typing.Optional[torch.LongTensor] = None,
    position_ids: typing.Optional[torch.LongTensor] = None,
    head_mask: typing.Optional[torch.FloatTensor] = None,
    inputs_embeds: typing.Optional[torch.FloatTensor] = None,
    labels: typing.Optional[torch.LongTensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (torch.LongTensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence tokens. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1,
y1) represents the position of the lower right corner. See Overview for normalization.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LiltConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LiltForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, AutoModelForTokenClassification
from datasets import load_dataset
tokenizer = AutoTokenizer.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")
model = AutoModelForTokenClassification.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")
dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train")
example = dataset[0]
words = example["tokens"]
boxes = example["bboxes"]
encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
outputs = model(**encoding)
predicted_class_indices = outputs.logits.argmax(-1)
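To map the predicted indices back to label strings, one could do the following (a sketch assuming a fast tokenizer so that word_ids() is available; note that the base checkpoint above does not ship a fine-tuned token-classification head, so its labels are generic):
word_ids = encoding.word_ids(0)
predicted_labels = [
    model.config.id2label[idx]
    for idx, word_id in zip(predicted_class_indices[0].tolist(), word_ids)
    if word_id is not None  # skip special tokens
]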
LiltForQuestionAnswering
class transformers.LiltForQuestionAnswering(config)
Parameters
config (LiltConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Lilt Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward(
    input_ids: typing.Optional[torch.LongTensor] = None,
    bbox: typing.Optional[torch.LongTensor] = None,
    attention_mask: typing.Optional[torch.FloatTensor] = None,
    token_type_ids: typing.Optional[torch.LongTensor] = None,
    position_ids: typing.Optional[torch.LongTensor] = None,
    head_mask: typing.Optional[torch.FloatTensor] = None,
    inputs_embeds: typing.Optional[torch.FloatTensor] = None,
    start_positions: typing.Optional[torch.LongTensor] = None,
    end_positions: typing.Optional[torch.LongTensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (torch.LongTensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence tokens. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1,
y1) represents the position of the lower right corner. See Overview for normalization.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LiltConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LiltForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from datasets import load_dataset
tokenizer = AutoTokenizer.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")
model = AutoModelForQuestionAnswering.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")
dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train")
example = dataset[0]
words = example["tokens"]
boxes = example["bboxes"]
encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
outputs = model(**encoding)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = encoding.input_ids[0, answer_start_index : answer_end_index + 1]
predicted_answer = tokenizer.decode(predict_answer_tokens)
Swin Transformer
Overview
The Swin Transformer was proposed in Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
The abstract from the paper is the following:
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone
for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains,
such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text.
To address these differences, we propose a hierarchical Transformer whose representation is computed with Shifted
windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping
local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at
various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it
compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense
prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation
(53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and
+2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones.
The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures.
Tips:
One can use the AutoImageProcessor API to prepare images for the model.
Swin pads the inputs, so it supports any input height and width as long as they are divisible by 32.
Swin can be used as a backbone. When output_hidden_states = True, it will output both hidden_states and reshaped_hidden_states. The reshaped_hidden_states have a shape of (batch_size, num_channels, height, width) rather than (batch_size, sequence_length, num_channels); see the sketch below.
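As a minimal sketch of the backbone tip above (assuming the microsoft/swin-tiny-patch4-window7-224 checkpoint that also appears in the examples below, and a dummy image used only for shape inspection):
import numpy as np
import torch
from transformers import AutoImageProcessor, SwinModel

image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model = SwinModel.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

# dummy image, only used to inspect output shapes
image = np.zeros((224, 224, 3), dtype=np.uint8)
inputs = image_processor(image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# each entry has shape (batch_size, num_channels, height, width)
print([tuple(h.shape) for h in outputs.reshaped_hidden_states])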
Swin Transformer architecture. Taken from the original paper.
This model was contributed by novice03. The TensorFlow version of this model was contributed by amyeroberts. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Swin Transformer.
Image Classification
SwinForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
Besides that:
SwinForMaskedImageModeling is supported by this example script.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
SwinConfig
class transformers.SwinConfig(
    image_size = 224,
    patch_size = 4,
    num_channels = 3,
    embed_dim = 96,
    depths = [2, 2, 6, 2],
    num_heads = [3, 6, 12, 24],
    window_size = 7,
    mlp_ratio = 4.0,
    qkv_bias = True,
    hidden_dropout_prob = 0.0,
    attention_probs_dropout_prob = 0.0,
    drop_path_rate = 0.1,
    hidden_act = 'gelu',
    use_absolute_embeddings = False,
    initializer_range = 0.02,
    layer_norm_eps = 1e-05,
    encoder_stride = 32,
    out_features = None,
    out_indices = None,
    **kwargs
)
Parameters
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 4) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
embed_dim (int, optional, defaults to 96) —
Dimensionality of patch embedding.
depths (list(int), optional, defaults to [2, 2, 6, 2]) —
Depth of each layer in the Transformer encoder.
num_heads (list(int), optional, defaults to [3, 6, 12, 24]) —
Number of attention heads in each layer of the Transformer encoder.
window_size (int, optional, defaults to 7) —
Size of windows.
mlp_ratio (float, optional, defaults to 4.0) —
Ratio of MLP hidden dimensionality to embedding dimensionality.
qkv_bias (bool, optional, defaults to True) —
Whether or not a learnable bias should be added to the queries, keys and values.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings and encoder.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
drop_path_rate (float, optional, defaults to 0.1) —
Stochastic depth rate.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder. If string, "gelu", "relu",
"selu" and "gelu_new" are supported.
use_absolute_embeddings (bool, optional, defaults to False) —
Whether or not to add absolute position embeddings to the patch embeddings.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-05) —
The epsilon used by the layer normalization layers.
encoder_stride (int, optional, defaults to 32) —
Factor to increase the spatial resolution by in the decoder head for masked image modeling.
out_features (List[str], optional) —
If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc.
(depending on how many stages the model has). If unset and out_indices is set, will default to the
corresponding stages. If unset and out_indices is unset, will default to the last stage.
out_indices (List[int], optional) —
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
many stages the model has). If unset and out_features is set, will default to the corresponding stages.
If unset and out_features is unset, will default to the last stage.
This is the configuration class to store the configuration of a SwinModel. It is used to instantiate a Swin
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Swin
microsoft/swin-tiny-patch4-window7-224
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import SwinConfig, SwinModel
# Initializing a Swin microsoft/swin-tiny-patch4-window7-224 style configuration
configuration = SwinConfig()
# Initializing a model (with random weights) from the microsoft/swin-tiny-patch4-window7-224 style configuration
model = SwinModel(configuration)
# Accessing the model configuration
configuration = model.config
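If Swin is used as a backbone, the out_features argument documented above can be passed to the configuration directly; a small sketch (the stage names follow the "stem"/"stageN" scheme mentioned in the parameter description):
from transformers import SwinConfig

# ask the backbone to return the feature maps of the last three stages
backbone_config = SwinConfig(out_features=["stage2", "stage3", "stage4"])
print(backbone_config.out_features)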
SwinModel
class transformers.SwinModel(config, add_pooling_layer = True, use_mask_token = False)
Parameters
config (SwinConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Swin Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward(
    pixel_values: typing.Optional[torch.FloatTensor] = None,
    bool_masked_pos: typing.Optional[torch.BoolTensor] = None,
    head_mask: typing.Optional[torch.FloatTensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.models.swin.modeling_swin.SwinModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.__call__()
for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches), optional) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.models.swin.modeling_swin.SwinModelOutput or tuple(torch.FloatTensor)
A transformers.models.swin.modeling_swin.SwinModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SwinConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size), optional, returned when add_pooling_layer=True is passed) — Average pooling of the last layer hidden-state.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The SwinModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, SwinModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model = SwinModel.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 49, 768]
SwinForMaskedImageModeling
class transformers.SwinForMaskedImageModeling(config)
Parameters
config (SwinConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Swin Model with a decoder on top for masked image modeling, as proposed in SimMIM.
Note that we provide a script to pre-train this model on custom data in our examples
directory.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward(
    pixel_values: typing.Optional[torch.FloatTensor] = None,
    bool_masked_pos: typing.Optional[torch.BoolTensor] = None,
    head_mask: typing.Optional[torch.FloatTensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.models.swin.modeling_swin.SwinMaskedImageModelingOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.__call__()
for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches)) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.models.swin.modeling_swin.SwinMaskedImageModelingOutput or tuple(torch.FloatTensor)
A transformers.models.swin.modeling_swin.SwinMaskedImageModelingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SwinConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when bool_masked_pos is provided) — Masked image modeling (MIM) loss.
reconstruction (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Reconstructed pixel values.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The SwinForMaskedImageModeling forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, SwinForMaskedImageModeling
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-base-simmim-window6-192")
model = SwinForMaskedImageModeling.from_pretrained("microsoft/swin-base-simmim-window6-192")

num_patches = (model.config.image_size // model.config.patch_size) ** 2
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
# create a random boolean mask of shape (batch_size, num_patches)
bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss, reconstructed_pixel_values = outputs.loss, outputs.reconstruction
list(reconstructed_pixel_values.shape)  # [1, 3, 192, 192]
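To eyeball the reconstruction, you can undo the processor's normalization and convert the tensor back to an image. This is a minimal sketch continuing the example above (torch and PIL are already imported there), assuming the image processor exposes the usual image_mean and image_std attributes; it is not part of the model API.
mean = torch.tensor(image_processor.image_mean).view(3, 1, 1)
std = torch.tensor(image_processor.image_std).view(3, 1, 1)
# undo normalization, clip to the valid range, and convert to an 8-bit HWC image
recon = (reconstructed_pixel_values[0].detach() * std + mean).clamp(0, 1)
Image.fromarray((recon * 255).to(torch.uint8).permute(1, 2, 0).numpy()).save("reconstruction.png")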
SwinForImageClassification
class transformers.SwinForImageClassification( config )
Parameters
config (SwinConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Swin Model transformer with an image classification head on top (a linear layer on top of the average-pooled final
hidden states; Swin has no [CLS] token) e.g. for ImageNet.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.swin.modeling_swin.SwinImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.models.swin.modeling_swin.SwinImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.models.swin.modeling_swin.SwinImageClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SwinConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The SwinForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, SwinForImageClassification
import torch
from datasets import load_dataset

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model = SwinForImageClassification.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# the model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])  # tabby, tabby cat
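If you want the full probability distribution instead of a single label, apply a softmax to the logits. A short sketch continuing the example above (the top-5 printout is only for inspection, not part of the API):
probs = logits.softmax(dim=-1)
top5 = probs.topk(5, dim=-1)
for score, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{model.config.id2label[idx.item()]}: {score.item():.3f}")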
TFSwinModel
class transformers.TFSwinModel( *args, **kwargs )
Parameters
config (SwinConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Swin Model transformer outputting raw hidden-states without any specific head on top.
This model is a TensorFlow tf.keras.layers.Layer sub-class. Use it as a regular TensorFlow layer and refer to the
TensorFlow documentation for all matters related to general usage and behavior.
call
(
pixel_values: tf.Tensor | None = None
bool_masked_pos: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.models.swin.modeling_tf_swin.TFSwinModelOutput or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (tf.Tensor of shape (batch_size, num_patches), optional) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.models.swin.modeling_tf_swin.TFSwinModelOutput or tuple(tf.Tensor)
A transformers.models.swin.modeling_tf_swin.TFSwinModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (SwinConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size), optional, returned when add_pooling_layer=True is passed) — Average pooling of the last layer hidden-state.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each stage) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
reshaped_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each stage) of shape
(batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The TFSwinModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, TFSwinModel
from datasets import load_dataset

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model = TFSwinModel.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

inputs = image_processor(image, return_tensors="tf")
outputs = model(**inputs)

last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)  # [1, 49, 768]
TFSwinForMaskedImageModeling
class transformers.TFSwinForMaskedImageModeling( *args, **kwargs )
Parameters
config (SwinConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Swin Model with a decoder on top for masked image modeling, as proposed in SimMIM.
This model is a TensorFlow tf.keras.layers.Layer sub-class. Use it as a regular TensorFlow layer and refer to the
TensorFlow documentation for all matters related to general usage and behavior.
call
(
pixel_values: tf.Tensor | None = None
bool_masked_pos: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.models.swin.modeling_tf_swin.TFSwinMaskedImageModelingOutput or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (tf.Tensor of shape (batch_size, num_patches)) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.models.swin.modeling_tf_swin.TFSwinMaskedImageModelingOutput or tuple(tf.Tensor)
A transformers.models.swin.modeling_tf_swin.TFSwinMaskedImageModelingOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (SwinConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when bool_masked_pos is provided) — Masked image modeling (MIM) loss.
reconstruction (tf.Tensor of shape (batch_size, num_channels, height, width)) — Reconstructed pixel values.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each stage) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
reshaped_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each stage) of shape
(batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The TFSwinForMaskedImageModeling forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, TFSwinForMaskedImageModeling
import tensorflow as tf
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model = TFSwinForMaskedImageModeling.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

num_patches = (model.config.image_size // model.config.patch_size) ** 2
pixel_values = image_processor(images=image, return_tensors="tf").pixel_values
# create a random boolean mask of shape (batch_size, num_patches)
bool_masked_pos = tf.random.uniform((1, num_patches)) >= 0.5

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss, reconstructed_pixel_values = outputs.loss, outputs.reconstruction
list(reconstructed_pixel_values.shape)  # [1, 3, 224, 224]
TFSwinForImageClassification
class transformers.TFSwinForImageClassification( *args, **kwargs )
Parameters
config (SwinConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Swin Model transformer with an image classification head on top (a linear layer on top of the average-pooled final
hidden states; Swin has no [CLS] token) e.g. for ImageNet.
This model is a TensorFlow tf.keras.layers.Layer sub-class. Use it as a regular TensorFlow layer and refer to the
TensorFlow documentation for all matters related to general usage and behavior.
call
(
pixel_values: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
labels: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.models.swin.modeling_tf_swin.TFSwinImageClassifierOutput or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.models.swin.modeling_tf_swin.TFSwinImageClassifierOutput or tuple(tf.Tensor)
A transformers.models.swin.modeling_tf_swin.TFSwinImageClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (SwinConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each stage) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
reshaped_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each stage) of shape
(batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The TFSwinForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, TFSwinForImageClassification
import tensorflow as tf
from datasets import load_dataset

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

image_processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model = TFSwinForImageClassification.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

inputs = image_processor(image, return_tensors="tf")
logits = model(**inputs).logits

# the model predicts one of the 1000 ImageNet classes
predicted_label = int(tf.math.argmax(logits, axis=-1))
print(model.config.id2label[predicted_label])  # tabby, tabby cat
WavLM
Overview
The WavLM model was proposed in WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen,
Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu,
Michael Zeng, Furu Wei.
The abstract from the paper is the following:
Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been
attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker
identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is
challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks.
WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity
preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on
recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where
additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up
the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB
benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.
Tips:
WavLM is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use
Wav2Vec2Processor for the feature extraction.
The WavLM model can be fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded
using Wav2Vec2CTCTokenizer.
WavLM performs especially well on speaker verification, speaker identification, and speaker diarization tasks.
Relevant checkpoints can be found under https://huggingface.co/models?other=wavlm.
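As a minimal illustration of the first tip, the sketch below turns a raw 16 kHz waveform into input_values with a feature extractor and runs the bare model; the random numpy waveform and the microsoft/wavlm-base checkpoint are only placeholders for your own audio and model.
import numpy as np
import torch
from transformers import AutoFeatureExtractor, WavLMModel

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base")
model = WavLMModel.from_pretrained("microsoft/wavlm-base")

# one second of random audio at 16 kHz stands in for a real waveform
waveform = np.random.randn(16000).astype(np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    frame_features = model(**inputs).last_hidden_state  # (1, num_frames, hidden_size)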
This model was contributed by patrickvonplaten. The Authors’ code can be
found here.
Documentation resources
Audio classification task guide
Automatic speech recognition task guide
WavLMConfig
class transformers.WavLMConfig
(
vocab_size = 32
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout = 0.1
activation_dropout = 0.1
attention_dropout = 0.1
feat_proj_dropout = 0.0
final_dropout = 0.1
layerdrop = 0.1
initializer_range = 0.02
layer_norm_eps = 1e-05
feat_extract_norm = 'group'
feat_extract_activation = 'gelu'
conv_dim = (512, 512, 512, 512, 512, 512, 512)
conv_stride = (5, 2, 2, 2, 2, 2, 2)
conv_kernel = (10, 3, 3, 3, 3, 2, 2)
conv_bias = False
num_conv_pos_embeddings = 128
num_conv_pos_embedding_groups = 16
num_buckets = 320
max_bucket_distance = 800
do_stable_layer_norm = False
apply_spec_augment = True
mask_time_prob = 0.05
mask_time_length = 10
mask_time_min_masks = 2
mask_feature_prob = 0.0
mask_feature_length = 10
num_codevectors_per_group = 320
num_codevector_groups = 2
contrastive_logits_temperature = 0.1
num_negatives = 100
codevector_dim = 256
proj_codevector_dim = 256
diversity_loss_weight = 0.1
ctc_loss_reduction = 'mean'
ctc_zero_infinity = False
use_weighted_layer_sum = False
classifier_proj_size = 256
tdnn_dim = (512, 512, 512, 512, 1500)
tdnn_kernel = (5, 3, 3, 1, 1)
tdnn_dilation = (1, 2, 3, 1, 1)
xvector_output_dim = 512
num_ctc_classes = 80
pad_token_id = 0
bos_token_id = 1
eos_token_id = 2
add_adapter = False
adapter_kernel_size = 3
adapter_stride = 2
num_adapter_layers = 3
output_hidden_size = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 32) —
Vocabulary size of the WavLM model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling WavLMModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
final_dropout (float, optional, defaults to 0.1) —
The dropout probability for the final projection layer of WavLMForCTC.
layerdrop (float, optional, defaults to 0.1) —
The LayerDrop probability. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more
details.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-05) —
The epsilon used by the layer normalization layers.
feat_extract_norm (str, optional, defaults to "group") —
The norm to be applied to 1D convolutional layers in feature encoder. One of "group" for group
normalization of only the first 1D convolutional layer or "layer" for layer normalization of all 1D
convolutional layers.
feat_proj_dropout (float, optional, defaults to 0.0) —
The dropout probability for output of the feature encoder.
feat_extract_activation (str, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor.
If string, "gelu", "relu", "selu" and "gelu_new" are supported.
conv_dim (Tuple[int] or List[int], optional, defaults to (512, 512, 512, 512, 512, 512, 512)) —
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
feature encoder. The length of conv_dim defines the number of 1D convolutional layers.
conv_stride (Tuple[int] or List[int], optional, defaults to (5, 2, 2, 2, 2, 2, 2)) —
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.
conv_kernel (Tuple[int] or List[int], optional, defaults to (10, 3, 3, 3, 3, 2, 2)) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
length of conv_kernel defines the number of convolutional layers and has to match the length of
conv_dim.
conv_bias (bool, optional, defaults to False) —
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (int, optional, defaults to 128) —
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.
num_conv_pos_embedding_groups (int, optional, defaults to 16) —
Number of groups of 1D convolutional positional embeddings layer.
do_stable_layer_norm (bool, optional, defaults to False) —
Whether to apply the stable layer norm architecture of the Transformer encoder. do_stable_layer_norm is True
corresponds to applying layer norm before the attention layer, whereas do_stable_layer_norm is False corresponds to
applying layer norm after the attention layer.
apply_spec_augment (bool, optional, defaults to True) —
Whether to apply SpecAugment data augmentation to the outputs of the feature encoder. For reference see
SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition.
mask_time_prob (float, optional, defaults to 0.05) —
Probability of each feature vector along the time axis to be chosen as the start of the vector span to be
masked. Approximately mask_time_prob * sequence_length // mask_time_length feature vectors will be masked
along the time axis. This is only relevant if apply_spec_augment is True.
mask_time_length (int, optional, defaults to 10) —
Length of vector span along the time axis.
mask_time_min_masks (int, optional, defaults to 2) —
The minimum number of masks of length mask_time_length generated along the time axis, each time step,
irrespective of mask_time_prob. Only relevant if mask_time_prob * len(time_axis) / mask_time_length <
mask_time_min_masks.
mask_feature_prob (float, optional, defaults to 0.0) —
Probability of each feature vector along the feature axis to be chosen as the start of the vector span to
be masked. Approximately mask_feature_prob * hidden_size // mask_feature_length feature vectors will be
masked along the feature axis. This is only relevant if apply_spec_augment is True.
mask_feature_length (int, optional, defaults to 10) —
Length of vector span along the feature axis.
num_codevectors_per_group (int, optional, defaults to 320) —
Number of entries in each quantization codebook (group).
num_codevector_groups (int, optional, defaults to 2) —
Number of codevector groups for product codevector quantization.
contrastive_logits_temperature (float, optional, defaults to 0.1) —
The temperature kappa in the contrastive loss.
num_negatives (int, optional, defaults to 100) —
Number of negative samples for the contrastive loss.
codevector_dim (int, optional, defaults to 256) —
Dimensionality of the quantized feature vectors.
proj_codevector_dim (int, optional, defaults to 256) —
Dimensionality of the final projection of both the quantized and the transformer features.
diversity_loss_weight (float, optional, defaults to 0.1) —
The weight of the codebook diversity loss component.
ctc_loss_reduction (str, optional, defaults to "mean") —
Specifies the reduction to apply to the output of torch.nn.CTCLoss. Only relevant when training an
instance of WavLMForCTC.
ctc_zero_infinity (bool, optional, defaults to False) —
Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of WavLMForCTC.
use_weighted_layer_sum (bool, optional, defaults to False) —
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of WavLMForSequenceClassification.
classifier_proj_size (int, optional, defaults to 256) —
Dimensionality of the projection before token mean-pooling for classification.
tdnn_dim (Tuple[int] or List[int], optional, defaults to (512, 512, 512, 512, 1500)) —
A tuple of integers defining the number of output channels of each 1D convolutional layer in the TDNN
module of the XVector model. The length of tdnn_dim defines the number of TDNN layers.
tdnn_kernel (Tuple[int] or List[int], optional, defaults to (5, 3, 3, 1, 1)) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the TDNN module of the
XVector model. The length of tdnn_kernel has to match the length of tdnn_dim.
tdnn_dilation (Tuple[int] or List[int], optional, defaults to (1, 2, 3, 1, 1)) —
A tuple of integers defining the dilation factor of each 1D convolutional layer in TDNN module of the
XVector model. The length of tdnn_dilation has to match the length of tdnn_dim.
xvector_output_dim (int, optional, defaults to 512) —
Dimensionality of the XVector embedding vectors.
add_adapter (bool, optional, defaults to False) —
Whether a convolutional network should be stacked on top of the WavLM encoder. Can be very useful for
warm-starting WavLM for SpeechEncoderDecoder models.
adapter_kernel_size (int, optional, defaults to 3) —
Kernel size of the convolutional layers in the adapter network. Only relevant if add_adapter is True.
adapter_stride (int, optional, defaults to 2) —
Stride of the convolutional layers in the adapter network. Only relevant if add_adapter is True.
num_adapter_layers (int, optional, defaults to 3) —
Number of convolutional layers that should be used in the adapter network. Only relevant if add_adapter is True.
output_hidden_size (int, optional) —
Dimensionality of the encoder output layer. If not defined, this defaults to hidden-size. Only relevant
if add_adapter is True.
This is the configuration class to store the configuration of a WavLMModel. It is used to instantiate an WavLM
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the WavLM
microsoft/wavlm-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import WavLMConfig, WavLMModel

# Initializing a WavLM microsoft/wavlm-base style configuration
configuration = WavLMConfig()

# Initializing a model (with random weights) from the microsoft/wavlm-base style configuration
model = WavLMModel(configuration)

# Accessing the model configuration
configuration = model.config
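Any of the parameters documented above can be overridden when constructing the configuration, which is convenient for experimenting with smaller models. A sketch with arbitrarily chosen values:
from transformers import WavLMConfig, WavLMModel

small_config = WavLMConfig(num_hidden_layers=6, hidden_size=384, num_attention_heads=6, intermediate_size=1536)
small_model = WavLMModel(small_config)  # randomly initialized, reduced-size WavLM
print(sum(p.numel() for p in small_model.parameters()))  # parameter count of the smaller model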
WavLMModel
class transformers.WavLMModel( config: WavLMConfig )
Parameters
config (WavLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare WavLM Model transformer outputting raw hidden-states without any specific head on top.
WavLM was proposed in WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing by Sanyuan Chen,
Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao et al.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
mask_time_indices: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Wav2Vec2BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (WavLMConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
extract_features (torch.FloatTensor of shape (batch_size, sequence_length, conv_dim[-1])) — Sequence of extracted feature vectors of the last convolutional layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The WavLMModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, WavLMModel
import torch
from datasets import load_dataset

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate

processor = AutoProcessor.from_pretrained("patrickvonplaten/wavlm-libri-clean-100h-base-plus")
model = WavLMModel.from_pretrained("patrickvonplaten/wavlm-libri-clean-100h-base-plus")

# the audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)  # [1, 292, 768]
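For batched inference over utterances of different lengths, let the processor pad the batch; if the checkpoint's processor is configured with return_attention_mask=True, the resulting attention_mask is passed along automatically via **batch (see the note on attention_mask above). A sketch continuing the example:
# pad two utterances of different lengths to the same number of samples
raw_audio = [dataset[0]["audio"]["array"], dataset[1]["audio"]["array"]]
batch = processor(raw_audio, sampling_rate=sampling_rate, return_tensors="pt", padding=True)

with torch.no_grad():
    batch_outputs = model(**batch)

list(batch_outputs.last_hidden_state.shape)  # [2, padded_num_frames, 768]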
WavLMForCTC
class transformers.WavLMForCTC( config, target_lang: typing.Optional[str] = None )
Parameters
config (WavLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
WavLM Model with a language modeling head on top for Connectionist Temporal Classification (CTC).
WavLM was proposed in WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing by Sanyuan Chen,
Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao et al.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, target_length), optional) —
Labels for connectionist temporal classification. Note that target_length has to be smaller or equal to
the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1].
All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (WavLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The WavLMForCTC forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, WavLMForCTC
from datasets import load_dataset
import torch

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate

processor = AutoProcessor.from_pretrained("patrickvonplaten/wavlm-libri-clean-100h-base-plus")
model = WavLMForCTC.from_pretrained("patrickvonplaten/wavlm-libri-clean-100h-base-plus")

# the audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
# transcribe speech
transcription = processor.batch_decode(predicted_ids)
transcription[0]
# 'mister quilter is the aposle of the middle classes and we are glad to welcome his gospel'

# compute loss
inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids
loss = model(**inputs).loss
round(loss.item(), 2)  # 12.51
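When fine-tuning with batched targets, the label padding has to be set to -100 so that it is ignored by the CTC loss, as described for the labels parameter above. A minimal sketch of preparing such labels, continuing the example (the exact preprocessing pipeline is up to you):
texts = [dataset[0]["text"], dataset[1]["text"]]
label_batch = processor(text=texts, return_tensors="pt", padding=True)
# replace padding token ids with -100 so they are ignored by the loss
labels = label_batch.input_ids.masked_fill(label_batch.attention_mask.eq(0), -100)
list(labels.shape)  # [2, longest_target_length]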
WavLMForSequenceClassification
class transformers.WavLMForSequenceClassification( config )
Parameters
config (WavLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
WavLM Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like
SUPERB Keyword Spotting.
WavLM was proposed in WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing by Sanyuan Chen,
Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao et al.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (WavLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The WavLMForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, WavLMForSequenceClassification
from datasets import load_dataset
import torch

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate

feature_extractor = AutoFeatureExtractor.from_pretrained("patrickvonplaten/wavlm-libri-clean-100h-base-plus")
model = WavLMForSequenceClassification.from_pretrained("patrickvonplaten/wavlm-libri-clean-100h-base-plus")

# the audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_ids = torch.argmax(logits, dim=-1).item()
predicted_label = model.config.id2label[predicted_class_ids]

# compute loss - target_label is e.g. "down"
target_label = model.config.id2label[0]
inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
loss = model(**inputs).loss
WavLMForAudioFrameClassification
class transformers.WavLMForAudioFrameClassification( config )
Parameters
config (WavLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
WavLM Model with a frame classification head on top for tasks like Speaker Diarization.
WavLM was proposed in WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing by Sanyuan Chen,
Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao et al.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not. A short sketch of this check is given after the example below.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (WavLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The WavLMForAudioFrameClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, WavLMForAudioFrameClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus-sd")
model = WavLMForAudioFrameClassification.from_pretrained("microsoft/wavlm-base-plus-sd")
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt", sampling_rate=sampling_rate)
with torch.no_grad():
    logits = model(**inputs).logits
probabilities = torch.sigmoid(logits[0])
# labels is a one-hot array of shape (num_frames, num_speakers)
labels = (probabilities > 0.5).long()
labels[0].tolist()
[0, 0]
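The attention_mask note in the parameters above boils down to a small runtime check. The sketch below is a minimal illustration (the dummy waveforms and the use of the feature extractor's return_attention_mask attribute are assumptions for demonstration, not a canonical recipe):
from transformers import AutoFeatureExtractor, WavLMForAudioFrameClassification
import torch
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus-sd")
model = WavLMForAudioFrameClassification.from_pretrained("microsoft/wavlm-base-plus-sd")
# two dummy waveforms of different lengths, so batching requires padding
waveforms = [torch.randn(16000).numpy(), torch.randn(12000).numpy()]
inputs = feature_extractor(waveforms, sampling_rate=16000, padding=True, return_tensors="pt")
# if this checkpoint's feature extractor was configured without an attention mask,
# pass only the zero-padded input_values, as described above
if not getattr(feature_extractor, "return_attention_mask", False):
    inputs.pop("attention_mask", None)
with torch.no_grad():
    logits = model(**inputs).logits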
WavLMForXVector
class transformers.WavLMForXVector
( config )
Parameters
config (WavLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
WavLM Model with an XVector feature extraction head on top for tasks like Speaker Verification.
WavLM was proposed in WavLM: Unified Speech Representation Learning with Labeled and Unlabeled
Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei,
Michael Zeng, Xuedong Huang.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.XVectorOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.XVectorOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.XVectorOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (WavLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, config.xvector_output_dim)) — Classification hidden states before AMSoftmax.
embeddings (torch.FloatTensor of shape (batch_size, config.xvector_output_dim)) — Utterance embeddings used for vector similarity-based retrieval.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The WavLMForXVector forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, WavLMForXVector
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus-sv")
model = WavLMForXVector.from_pretrained("microsoft/wavlm-base-plus-sv")
# audio file is decoded on the fly
inputs = feature_extractor(
    [d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True
)
with torch.no_grad():
    embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()
# the resulting embeddings can be used for cosine similarity-based retrieval
cosine_sim = torch.nn.CosineSimilarity(dim=-1)
similarity = cosine_sim(embeddings[0], embeddings[1])
threshold = 0.7 # the optimal threshold is dataset-dependent
if similarity < threshold:
    print("Speakers are not the same!")
round(similarity.item(), 2)
0.97
SqueezeBERT
Overview
The SqueezeBERT model was proposed in SqueezeBERT: What can computer vision teach NLP about efficient neural networks? by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, Kurt W. Keutzer. It’s a
bidirectional transformer similar to the BERT model. The key difference between the BERT architecture and the
SqueezeBERT architecture is that SqueezeBERT uses grouped convolutions
instead of fully-connected layers for the Q, K, V and FFN layers.
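To make that concrete, the short sketch below contrasts a position-wise fully-connected layer with a grouped 1x1 convolution over activations laid out as [batch, hidden_size, sequence_length]. It is an illustration of the general technique with made-up sizes, not SqueezeBERT's internal implementation:
import torch
import torch.nn as nn
batch, hidden, seq_len = 2, 768, 8
x = torch.randn(batch, hidden, seq_len)  # [batch, hidden_size, sequence_length]
# a position-wise fully-connected layer is equivalent to a 1x1 convolution with groups=1 ...
fc_as_conv = nn.Conv1d(hidden, hidden, kernel_size=1, groups=1)
# ... while a grouped convolution splits the channels into independent groups,
# dividing the weight count by the number of groups
grouped = nn.Conv1d(hidden, hidden, kernel_size=1, groups=4)
print(sum(p.numel() for p in fc_as_conv.parameters()))  # 590592 (768*768 weights + 768 biases)
print(sum(p.numel() for p in grouped.parameters()))  # 148224 (768*192 weights + 768 biases)
print(grouped(x).shape)  # torch.Size([2, 768, 8]), same shape as the ungrouped layer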
The abstract from the paper is the following:
Humans read and write hundreds of billions of messages every day. Further, due to the availability of large datasets,
large computing systems, and better neural network models, natural language processing (NLP) technology has made
significant strides in understanding, proofreading, and organizing these messages. Thus, there is a significant
opportunity to deploy NLP in myriad applications to help web users, social networks, and businesses. In particular, we
consider smartphones and other mobile devices as crucial platforms for deploying NLP models at scale. However, today’s
highly-accurate NLP neural network models such as BERT and RoBERTa are extremely computationally expensive, with
BERT-base taking 1.7 seconds to classify a text snippet on a Pixel 3 smartphone. In this work, we observe that methods
such as grouped convolutions have yielded significant speedups for computer vision networks, but many of these
techniques have not been adopted by NLP neural network designers. We demonstrate how to replace several operations in
self-attention layers with grouped convolutions, and we use this technique in a novel network architecture called
SqueezeBERT, which runs 4.3x faster than BERT-base on the Pixel 3 while achieving competitive accuracy on the GLUE test
set. The SqueezeBERT code will be released.
Tips:
SqueezeBERT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right
rather than the left.
SqueezeBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore
efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained
with a causal language modeling (CLM) objective are better in that regard.
For best results when finetuning on sequence classification tasks, it is recommended to start with the
squeezebert/squeezebert-mnli-headless checkpoint.
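As a minimal sketch of that recommendation (the example sentence and num_labels=2 are placeholder assumptions; the classification head of the headless checkpoint is newly initialized and still needs finetuning):
import torch
from transformers import AutoTokenizer, SqueezeBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-mnli-headless")
model = SqueezeBertForSequenceClassification.from_pretrained(
    "squeezebert/squeezebert-mnli-headless", num_labels=2
)
inputs = tokenizer("This movie was great!", return_tensors="pt")
outputs = model(**inputs, labels=torch.tensor([1]))
loss = outputs.loss  # backpropagate this in your finetuning loop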
This model was contributed by forresti.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
SqueezeBertConfig
class transformers.SqueezeBertConfig
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
embedding_size = 768
q_groups = 4
k_groups = 4
v_groups = 4
post_attention_groups = 1
intermediate_groups = 4
output_groups = 4
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the SqueezeBERT model. Defines the number of different tokens that can be represented by
the input_ids passed when calling SqueezeBertModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling SqueezeBertModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
pad_token_id (int, optional, defaults to 0) —
The ID of the token in the word embedding to use as padding.
embedding_size (int, optional, defaults to 768) —
The dimension of the word embedding vectors.
q_groups (int, optional, defaults to 4) —
The number of groups in Q layer.
k_groups (int, optional, defaults to 4) —
The number of groups in K layer.
v_groups (int, optional, defaults to 4) —
The number of groups in V layer.
post_attention_groups (int, optional, defaults to 1) —
The number of groups in the first feed forward network layer.
intermediate_groups (int, optional, defaults to 4) —
The number of groups in the second feed forward network layer.
output_groups (int, optional, defaults to 4) —
The number of groups in the third feed forward network layer.
This is the configuration class to store the configuration of a SqueezeBertModel. It is used to instantiate a
SqueezeBERT model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the SqueezeBERT
squeezebert/squeezebert-uncased architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import SqueezeBertConfig, SqueezeBertModel
# Initializing a SqueezeBERT configuration
configuration = SqueezeBertConfig()
# Initializing a model (with random weights) from the configuration above
model = SqueezeBertModel(configuration)
# Accessing the model configuration
configuration = model.config
Attributes: pretrained_config_archive_map (Dict[str, str]): A dictionary containing all the available pre-trained
checkpoints.
SqueezeBertTokenizer
class transformers.SqueezeBertTokenizer
(
vocab_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (bool, optional, defaults to True) —
Whether or not to do basic tokenization before WordPiece.
never_split (Iterable, optional) —
Collection of tokens which will never be split during tokenization. Only has an effect when
do_basic_tokenize=True.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original SqueezeBERT).
Construct a SqueezeBERT tokenizer. Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A SqueezeBERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
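A quick sketch of the resulting layout (the sentences are arbitrary; the exact subword split depends on the vocabulary):
from transformers import SqueezeBertTokenizer
tokenizer = SqueezeBertTokenizer.from_pretrained("squeezebert/squeezebert-uncased")
ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
ids_b = tokenizer.encode("How are you?", add_special_tokens=False)
# single sequence: [CLS] A [SEP]
single = tokenizer.build_inputs_with_special_tokens(ids_a)
# pair of sequences: [CLS] A [SEP] B [SEP]
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
print(tokenizer.convert_ids_to_tokens(single))
# ['[CLS]', 'hello', 'world', '[SEP]']
print(tokenizer.convert_ids_to_tokens(pair))
# ['[CLS]', 'hello', 'world', '[SEP]', 'how', 'are', 'you', '?', '[SEP]']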
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
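For instance (a small sketch; positions marked 1 are where special tokens will sit once the pair is built, and the exact lengths depend on how the sentences tokenize):
from transformers import SqueezeBertTokenizer
tokenizer = SqueezeBertTokenizer.from_pretrained("squeezebert/squeezebert-uncased")
ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
ids_b = tokenizer.encode("How are you?", add_special_tokens=False)
mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)
print(mask)
# [1, 0, 0, 1, 0, 0, 0, 0, 1]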
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A SqueezeBERT
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
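The same toy sentences make the 0/1 split easy to see (a sketch; list lengths track however the text tokenizes):
from transformers import SqueezeBertTokenizer
tokenizer = SqueezeBertTokenizer.from_pretrained("squeezebert/squeezebert-uncased")
ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
ids_b = tokenizer.encode("How are you?", add_special_tokens=False)
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
print(token_type_ids)
# [0, 0, 0, 0, 1, 1, 1, 1, 1] -> zeros cover [CLS] A [SEP], ones cover B [SEP]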
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
SqueezeBertTokenizerFast
class transformers.SqueezeBertTokenizerFast
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
clean_text (bool, optional, defaults to True) —
Whether or not to clean the text before tokenization by removing any control characters and replacing all
whitespaces by the classic one.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original SqueezeBERT).
wordpieces_prefix (str, optional, defaults to "##") —
The prefix for subwords.
Construct a “fast” SqueezeBERT tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0
token_ids_1 = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A SqueezeBERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A SqueezeBERT
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
SqueezeBertModel
class transformers.SqueezeBertModel
( config )
Parameters
config (SqueezeBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare SqueezeBERT Model transformer outputting raw hidden-states without any specific head on top.
The SqueezeBERT model was proposed in SqueezeBERT: What can computer vision teach NLP about efficient neural
networks? by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W.
Keutzer.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
For best results finetuning SqueezeBERT on text classification tasks, it is recommended to use the
squeezebert/squeezebert-mnli-headless checkpoint as a starting point.
Hierarchy:
Internal class hierarchy:
SqueezeBertModel
SqueezeBertEncoder
SqueezeBertModule
SqueezeBertSelfAttention
ConvActivation
ConvDropoutLayerNorm
Data layouts:
Input data is in [batch, sequence_length, hidden_size] format.
Data inside the encoder is in [batch, hidden_size, sequence_length] format. But, if `output_hidden_states == True`, the data from inside the encoder is returned in [batch, sequence_length, hidden_size] format.
The final output of the encoder is in [batch, sequence_length, hidden_size] format.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SqueezeBertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The SqueezeBertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, SqueezeBertModel
import torch
tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-uncased")
model = SqueezeBertModel.from_pretrained("squeezebert/squeezebert-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
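The data-layout note above means the internal [batch, hidden_size, sequence_length] permutation never leaks out: everything the model returns is already back in [batch, sequence_length, hidden_size]. A quick sanity check (a sketch that simply reruns the model with output_hidden_states=True):
import torch
from transformers import AutoTokenizer, SqueezeBertModel
tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-uncased")
model = SqueezeBertModel.from_pretrained("squeezebert/squeezebert-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)
print(outputs.last_hidden_state.shape)  # [batch, sequence_length, hidden_size], e.g. torch.Size([1, 8, 768])
print(len(outputs.hidden_states), outputs.hidden_states[-1].shape)  # embeddings + one entry per layer, same layout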
SqueezeBertForMaskedLM
class transformers.SqueezeBertForMaskedLM
( config )
Parameters
config (SqueezeBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
SqueezeBERT Model with a language modeling head on top.
The SqueezeBERT model was proposed in SqueezeBERT: What can computer vision teach NLP about efficient neural
networks? by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W.
Keutzer.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
For best results finetuning SqueezeBERT on text classification tasks, it is recommended to use the
squeezebert/squeezebert-mnli-headless checkpoint as a starting point.
Hierarchy:
Internal class hierarchy:
SqueezeBertModel
SqueezeBertEncoder
SqueezeBertModule
SqueezeBertSelfAttention
ConvActivation
ConvDropoutLayerNorm
Data layouts:
Input data is in [batch, sequence_length, hidden_size] format.
Data inside the encoder is in [batch, hidden_size, sequence_length] format. But, if `output_hidden_states == True`, the data from inside the encoder is returned in [batch, sequence_length, hidden_size] format.
The final output of the encoder is in [batch, sequence_length, hidden_size] format.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SqueezeBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The SqueezeBertForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, SqueezeBertForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-uncased")
model = SqueezeBertForMaskedLM.from_pretrained("squeezebert/squeezebert-uncased")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
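Continuing the example above, the predicted vocabulary ID can be decoded back into text and the loss computed against the "Paris" labels can be inspected (a small follow-up sketch; the actual predicted token depends on the checkpoint):
# map the predicted vocabulary ID back to a token string
print(tokenizer.decode(predicted_token_id))
# inspect the masked language modeling loss
print(round(outputs.loss.item(), 2))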
SqueezeBertForSequenceClassification
class transformers.SqueezeBertForSequenceClassification
( config )
Parameters
config (SqueezeBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
SqueezeBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
The SqueezeBERT model was proposed in SqueezeBERT: What can computer vision teach NLP about efficient neural
networks? by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W.
Keutzer.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
For best results finetuning SqueezeBERT on text classification tasks, it is recommended to use the
squeezebert/squeezebert-mnli-headless checkpoint as a starting point.
Hierarchy:
Internal class hierarchy:
SqueezeBertModel
SqueezeBertEncoder
SqueezeBertModule
SqueezeBertSelfAttention
ConvActivation
ConvDropoutLayerNorm
Data layouts:
Input data is in [batch, sequence_length, hidden_size] format.
Data inside the encoder is in [batch, hidden_size, sequence_length] format. But, if `output_hidden_states == True`, the data from inside the encoder is returned in [batch, sequence_length, hidden_size] format.
The final output of the encoder is in [batch, sequence_length, hidden_size] format.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SqueezeBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The SqueezeBertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, SqueezeBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-uncased")
model = SqueezeBertForSequenceClassification.from_pretrained("squeezebert/squeezebert-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = SqueezeBertForSequenceClassification.from_pretrained("squeezebert/squeezebert-uncased", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, SqueezeBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-uncased")
model = SqueezeBertForSequenceClassification.from_pretrained("squeezebert/squeezebert-uncased", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = SqueezeBertForSequenceClassification.from_pretrained(
    "squeezebert/squeezebert-uncased", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
SqueezeBertForMultipleChoice
class transformers.SqueezeBertForMultipleChoice
( config )
Parameters
config (SqueezeBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
SqueezeBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
a softmax) e.g. for RocStories/SWAG tasks.
The SqueezeBERT model was proposed in SqueezeBERT: What can computer vision teach NLP about efficient neural
networks? by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W.
Keutzer.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
For best results finetuning SqueezeBERT on text classification tasks, it is recommended to use the
squeezebert/squeezebert-mnli-headless checkpoint as a starting point.
Hierarchy:
Internal class hierarchy:
SqueezeBertModel
SqueezeBertEncoder
SqueezeBertModule
SqueezeBertSelfAttention
ConvActivation
ConvDropoutLayerNorm
Data layouts:
Input data is in [batch, sequence_length, hidden_size] format.
Data inside the encoder is in [batch, hidden_size, sequence_length] format. But, if `output_hidden_states == True`, the data from inside the encoder is returned in [batch, sequence_length, hidden_size] format.
The final output of the encoder is in [batch, sequence_length, hidden_size] format.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (see
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SqueezeBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The SqueezeBertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, SqueezeBertForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-uncased")
model = SqueezeBertForMultipleChoice.from_pretrained("squeezebert/squeezebert-uncased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
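Since logits has shape (batch_size, num_choices), picking the model's preferred ending is a one-line argmax (continuing the example above; with the untrained head the choice is not meaningful yet):
predicted_choice = logits.argmax(dim=-1).item()  # 0 -> choice0, 1 -> choice1
print(predicted_choice)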
SqueezeBertForTokenClassification
class transformers.SqueezeBertForTokenClassification
( config )
Parameters
config (SqueezeBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
SqueezeBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
The SqueezeBERT model was proposed in SqueezeBERT: What can computer vision teach NLP about efficient neural
networks? by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W.
Keutzer.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
For best results finetuning SqueezeBERT on text classification tasks, it is recommended to use the
squeezebert/squeezebert-mnli-headless checkpoint as a starting point.
Hierarchy:
Internal class hierarchy:
SqueezeBertModel
    SqueezeBertEncoder
        SqueezeBertModule
            SqueezeBertSelfAttention
            ConvActivation
            ConvDropoutLayerNorm
Data layouts:
Input data is in [batch, sequence_length, hidden_size] format.
Data inside the encoder is in [batch, hidden_size, sequence_length] format. But, if `output_hidden_states == True`, the data from inside the encoder is returned in [batch, sequence_length, hidden_size] format.
The final output of the encoder is in [batch, sequence_length, hidden_size] format.
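The layout convention above amounts to a pair of transposes around the convolution-based encoder. The following is a minimal illustration (not taken from the SqueezeBERT source) of the equivalent tensor permutations in PyTorch:
import torch
batch_size, sequence_length, hidden_size = 2, 8, 768
embeddings = torch.randn(batch_size, sequence_length, hidden_size)  # [batch, sequence_length, hidden_size]
encoder_view = embeddings.permute(0, 2, 1)  # [batch, hidden_size, sequence_length], the layout used inside the encoder
final_output = encoder_view.permute(0, 2, 1)  # back to [batch, sequence_length, hidden_size] for the encoder output
assert final_output.shape == (batch_size, sequence_length, hidden_size)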
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SqueezeBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The SqueezeBertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, SqueezeBertForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-uncased")
model = SqueezeBertForTokenClassification.from_pretrained("squeezebert/squeezebert-uncased")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
SqueezeBertForQuestionAnswering
class transformers.SqueezeBertForQuestionAnswering
(
config
)
Parameters
config (SqueezeBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
SqueezeBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
The SqueezeBERT model was proposed in SqueezeBERT: What can computer vision teach NLP about efficient neural
networks? by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W.
Keutzer
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
For best results when fine-tuning SqueezeBERT on text classification tasks, it is recommended to use the
squeezebert/squeezebert-mnli-headless checkpoint as a starting point.
Hierarchy:
Internal class hierarchy:
SqueezeBertModel
    SqueezeBertEncoder
        SqueezeBertModule
            SqueezeBertSelfAttention
            ConvActivation
            ConvDropoutLayerNorm
Data layouts:
Input data is in [batch, sequence_length, hidden_size] format.
Data inside the encoder is in [batch, hidden_size, sequence_length] format. But, if `output_hidden_states == True`, the data from inside the encoder is returned in [batch, sequence_length, hidden_size] format.
The final output of the encoder is in [batch, sequence_length, hidden_size] format.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SqueezeBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The SqueezeBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, SqueezeBertForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-uncased")
model = SqueezeBertForQuestionAnswering.from_pretrained("squeezebert/squeezebert-uncased")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
InstructBLIP
Overview
The InstructBLIP model was proposed in InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning by Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi.
InstructBLIP leverages the BLIP-2 architecture for visual instruction tuning.
The abstract from the paper is the following:
General-purpose language models that can solve various language-domain tasks have emerged driven by the pre-training and instruction-tuning pipeline. However, building general-purpose vision-language models is challenging due to the increased task discrepancy introduced by the additional visual input. Although vision-language pre-training has been widely studied, vision-language instruction tuning remains relatively less explored. In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pre-trained BLIP-2 models. We gather a wide variety of 26 publicly available datasets, transform them into instruction tuning format and categorize them into two clusters for held-in instruction tuning and held-out zero-shot evaluation. Additionally, we introduce instruction-aware visual feature extraction, a crucial method that enables the model to extract informative features tailored to the given instruction. The resulting InstructBLIP models achieve state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and the larger Flamingo. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA IMG). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models.
Tips:
InstructBLIP uses the same architecture as BLIP-2 with a tiny but important difference: it also feeds the text prompt (instruction) to the Q-Former.
InstructBLIP architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
InstructBlipConfig
class transformers.InstructBlipConfig
(
vision_config = None
qformer_config = None
text_config = None
num_query_tokens = 32
**kwargs
)
Parameters
vision_config (dict, optional) —
Dictionary of configuration options used to initialize InstructBlipVisionConfig.
qformer_config (dict, optional) —
Dictionary of configuration options used to initialize InstructBlipQFormerConfig.
text_config (dict, optional) —
Dictionary of configuration options used to initialize any PretrainedConfig.
num_query_tokens (int, optional, defaults to 32) —
The number of query tokens passed through the Transformer.
kwargs (optional) —
Dictionary of keyword arguments.
InstructBlipConfig is the configuration class to store the configuration of an
InstructBlipForConditionalGeneration. It is used to instantiate an InstructBLIP model according to the specified
arguments, defining the vision model, Q-Former model and language model configs. Instantiating a configuration with
the defaults will yield a similar configuration to that of the InstructBLIP
Salesforce/instruct-blip-flan-t5 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import (
... InstructBlipVisionConfig,
... InstructBlipQFormerConfig,
... OPTConfig,
... InstructBlipConfig,
... InstructBlipForConditionalGeneration,
... )
# Initializing an InstructBlipConfig with Salesforce/instruct-blip-flan-t5 style configuration
configuration = InstructBlipConfig()
# Initializing an InstructBlipForConditionalGeneration (with random weights) from the Salesforce/instruct-blip-flan-t5 style configuration
model = InstructBlipForConditionalGeneration(configuration)
# Accessing the model configuration
configuration = model.config
# We can also initialize an InstructBlipConfig from an InstructBlipVisionConfig, InstructBlipQFormerConfig and any PretrainedConfig
# Initializing InstructBLIP vision, InstructBLIP Q-Former and language model configurations
vision_config = InstructBlipVisionConfig()
qformer_config = InstructBlipQFormerConfig()
text_config = OPTConfig()
config = InstructBlipConfig.from_vision_qformer_text_configs(vision_config, qformer_config, text_config)
from_vision_qformer_text_configs
(
vision_config: InstructBlipVisionConfig
qformer_config: InstructBlipQFormerConfig
text_config: PretrainedConfig
**kwargs
)
→
InstructBlipConfig
Returns
InstructBlipConfig
An instance of a configuration object
Instantiate an InstructBlipConfig (or a derived class) from an InstructBLIP vision model, Q-Former and
language model configurations.
InstructBlipVisionConfig
class transformers.InstructBlipVisionConfig
(
hidden_size = 1408
intermediate_size = 6144
num_hidden_layers = 39
num_attention_heads = 16
image_size = 224
patch_size = 14
hidden_act = 'gelu'
layer_norm_eps = 1e-06
attention_dropout = 0.0
initializer_range = 1e-10
qkv_bias = True
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 1408) —
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 6144) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 39) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 14) —
The size (resolution) of each patch.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
layer_norm_eps (float, optional, defaults to 1e-6) —
The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 1e-10) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries and values in the self-attention layers.
This is the configuration class to store the configuration of an InstructBlipVisionModel. It is used to
instantiate an InstructBLIP vision encoder according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the InstructBLIP
Salesforce/instruct-blip-flan-t5 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import InstructBlipVisionConfig, InstructBlipVisionModel
# Initializing an InstructBlipVisionConfig with Salesforce/instruct-blip-flan-t5 style configuration
configuration = InstructBlipVisionConfig()
# Initializing an InstructBlipVisionModel (with random weights) from the Salesforce/instruct-blip-flan-t5 style configuration
model = InstructBlipVisionModel(configuration)
# Accessing the model configuration
configuration = model.config
InstructBlipQFormerConfig
class transformers.InstructBlipQFormerConfig
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
position_embedding_type = 'absolute'
cross_attention_frequency = 2
encoder_hidden_size = 1408
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the Q-Former model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling the model.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
cross_attention_frequency (int, optional, defaults to 2) —
The frequency of adding cross-attention to the Transformer layers.
encoder_hidden_size (int, optional, defaults to 1408) —
The hidden size of the hidden states for cross-attention.
This is the configuration class to store the configuration of an InstructBlipQFormerModel. It is used to
instantiate an InstructBLIP Querying Transformer (Q-Former) model according to the specified arguments, defining the
model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of
the InstructBLIP Salesforce/instruct-blip-flan-t5
architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs.
Read the documentation from PretrainedConfig for more information.
Note that InstructBlipQFormerModel is very similar to BertLMHeadModel with interleaved cross-attention.
Examples:
from transformers import InstructBlipQFormerConfig, InstructBlipQFormerModel
# Initializing an InstructBLIP Salesforce/instruct-blip-flan-t5 style configuration
configuration = InstructBlipQFormerConfig()
# Initializing a model (with random weights) from the Salesforce/instruct-blip-flan-t5 style configuration
model = InstructBlipQFormerModel(configuration)
# Accessing the model configuration
configuration = model.config
InstructBlipProcessor
class transformers.InstructBlipProcessor
(
image_processor
tokenizer
qformer_tokenizer
)
Parameters
image_processor (BlipImageProcessor) —
An instance of BlipImageProcessor. The image processor is a required input.
tokenizer (AutoTokenizer) —
An instance of PreTrainedTokenizer. The tokenizer is a required input.
qformer_tokenizer (AutoTokenizer) —
An instance of PreTrainedTokenizer. The Q-Former tokenizer is a required input.
Constructs an InstructBLIP processor which wraps a BLIP image processor and a LLaMa/T5 tokenizer into a single
processor.
InstructBlipProcessor offers all the functionalities of BlipImageProcessor and AutoTokenizer. See the
docstring of __call__() and decode() for more information.
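A minimal usage sketch, assuming the Salesforce/instructblip-vicuna-7b checkpoint that also appears in the generation example further below; the printed set of keys is an expectation based on the description above, not a guarantee:
import requests
from PIL import Image
from transformers import InstructBlipProcessor
processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
# The text prompt is tokenized twice: once for the language model and once for the Q-Former.
inputs = processor(images=image, text="What is unusual about this image?", return_tensors="pt")
print(sorted(inputs.keys()))  # expected: attention_mask, input_ids, pixel_values, qformer_attention_mask, qformer_input_ids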
batch_decode
(
*args
**kwargs
)
This method forwards all its arguments to PreTrainedTokenizer’s batch_decode(). Please
refer to the docstring of this method for more information.
decode
(
*args
**kwargs
)
This method forwards all its arguments to PreTrainedTokenizer’s decode(). Please refer
to the docstring of this method for more information.
InstructBlipVisionModel
class transformers.InstructBlipVisionModel
(
config: InstructBlipVisionConfig
)
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using InstructBlipProcessor. See
InstructBlipProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.instructblip.configuration_instructblip.InstructBlipVisionConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The InstructBlipVisionModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
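A minimal sketch of calling the vision encoder in isolation, using a randomly initialized model built from the default configuration (no pretrained weights are loaded); the shape comment follows from the defaults documented above:
import torch
from transformers import InstructBlipVisionConfig, InstructBlipVisionModel
config = InstructBlipVisionConfig()
model = InstructBlipVisionModel(config)
pixel_values = torch.randn(1, 3, config.image_size, config.image_size)
with torch.no_grad():
...     outputs = model(pixel_values=pixel_values)
# one embedding per 14x14 patch plus a class token: (1, (224 // 14) ** 2 + 1, 1408)
print(outputs.last_hidden_state.shape)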
InstructBlipQFormerModel
class transformers.InstructBlipQFormerModel
(
config: InstructBlipQFormerConfig
)
Querying Transformer (Q-Former), used in InstructBLIP. Slightly modified from BLIP-2 as it also takes the
instruction as input.
forward
(
input_ids
attention_mask = None
position_ids = None
query_embeds = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
past_key_values = None
use_cache = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional):
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional):
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers, with each tuple having 4 tensors of
shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)): Contains precomputed key and
value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are
used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key
value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape
(batch_size, sequence_length).
use_cache (bool, optional):
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
InstructBlipForConditionalGeneration
class transformers.InstructBlipForConditionalGeneration
(
config: InstructBlipConfig
)
Parameters
config (InstructBlipConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
InstructBLIP Model for generating text given an image and an optional text prompt. The model consists of a vision
encoder, Querying Transformer (Q-Former) and a language model.
One can optionally pass input_ids to the model, which serve as a text prompt, to make the language model continue
the prompt. Otherwise, the language model starts generating text from the [BOS] (beginning-of-sequence) token.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
pixel_values: FloatTensor
qformer_input_ids: FloatTensor
qformer_attention_mask: typing.Optional[torch.LongTensor] = None
input_ids: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.instructblip.modeling_instructblip.InstructBlipForConditionalGenerationModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using InstructBlipProcessor. See
InstructBlipProcessor.__call__() for details.
qformer_input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of input sequence tokens in the vocabulary of the Q-Former. Input tokens can optionally be provided
to serve as text prompt, which the Q-Former model will encode.
Indices can be obtained using InstructBlipProcessor. See InstructBlipProcessor.__call__() for
details.
What are input IDs?
qformer_attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of input sequence tokens in the vocabulary of the language model. Input tokens can optionally be
provided to serve as text prompt, which the language model can continue.
Indices can be obtained using InstructBlipProcessor. See InstructBlipProcessor.__call__() for
details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary of the language model. Only relevant in case an
encoder-decoder language model (like T5) is used.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details. What are decoder input IDs?
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
Only relevant in case an encoder-decoder language model (like T5) is used.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.models.instructblip.modeling_instructblip.InstructBlipForConditionalGenerationModelOutput or tuple(torch.FloatTensor)
A transformers.models.instructblip.modeling_instructblip.InstructBlipForConditionalGenerationModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.instructblip.configuration_instructblip.InstructBlipVisionConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss from the language model.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head of the language model.
vision_outputs (BaseModelOutputWithPooling) — Outputs of the vision encoder.
qformer_outputs (BaseModelOutputWithPoolingAndCrossAttentions) — Outputs of the Q-Former (Querying Transformer).
language_model_outputs (CausalLMOutputWithPast or Seq2SeqLMOutput) — Outputs of the language model.
The InstructBlipForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration
import torch
from PIL import Image
import requests
model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-7b")
processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
prompt = "What is unusual about this image?"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device)
outputs = model.generate(
... **inputs,
... do_sample=False,
... num_beams=5,
... max_length=256,
... min_length=1,
... top_p=0.9,
... repetition_penalty=1.5,
... length_penalty=1.0,
... temperature=1,
... )
generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip()
print(generated_text)
The unusual aspect of this image is that a man is ironing clothes on the back of a yellow SUV, which is parked in the middle of a busy city street. This is an unconventional approach to ironing clothes, as it requires the man to balance himself and his ironing equipment on top of the vehicle while navigating through traffic. Additionally, the presence of taxis and other vehicles in the scene further emphasizes the unusual nature of this situation.
generate
(
pixel_values: FloatTensor
qformer_input_ids: typing.Optional[torch.LongTensor] = None
qformer_attention_mask: typing.Optional[torch.LongTensor] = None
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
**generate_kwargs
)
→
captions (list)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Input images to be processed.
qformer_input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
The sequence used as a prompt to be fed to the Q-Former module.
qformer_attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices.
input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
The sequence used as a prompt for the generation.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices.
Returns
captions (list)
A list of strings of length batch_size * num_captions.
Overrides generate function to be able to use the model as a conditional generator.
PoolFormer
Overview
The PoolFormer model was proposed in MetaFormer is Actually What You Need for Vision by Sea AI Labs. Instead of designing a complicated token mixer to achieve SOTA performance, the goal of this work is to demonstrate that the competence of transformer models largely stems from the general MetaFormer architecture.
The abstract from the paper is the following:
Transformers have shown great potential in computer vision tasks. A common belief is their attention-based token mixer module contributes most to their competence. However, recent works show the attention-based module in transformers can be replaced by spatial MLPs and the resulted models still perform quite well. Based on this observation, we hypothesize that the general architecture of the transformers, instead of the specific token mixer module, is more essential to the model’s performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed as PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of “MetaFormer”, a general architecture abstracted from transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design.
The figure below illustrates the architecture of PoolFormer. Taken from the original paper.
Tips:
PoolFormer has a hierarchical architecture, where instead of Attention, a simple Average Pooling layer is present. All checkpoints of the model can be found on the hub.
One can use PoolFormerImageProcessor to prepare images for the model.
Like most models, PoolFormer comes in different sizes, the details of which can be found in the table below.
Model variant | Depths        | Hidden sizes        | Params (M) | ImageNet-1k Top 1
s12           | [2, 2, 6, 2]  | [64, 128, 320, 512] | 12         | 77.2
s24           | [4, 4, 12, 4] | [64, 128, 320, 512] | 21         | 80.3
s36           | [6, 6, 18, 6] | [64, 128, 320, 512] | 31         | 81.4
m36           | [6, 6, 18, 6] | [96, 192, 384, 768] | 56         | 82.1
m48           | [8, 8, 24, 8] | [96, 192, 384, 768] | 73         | 82.5
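The table rows map directly onto PoolFormerConfig arguments. A minimal sketch, building a randomly initialized s24-sized backbone from the values above (no checkpoint is loaded; the official checkpoints on the Hub already ship with the matching configuration):
from transformers import PoolFormerConfig, PoolFormerModel
s24_config = PoolFormerConfig(depths=[4, 4, 12, 4], hidden_sizes=[64, 128, 320, 512])
model = PoolFormerModel(s24_config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")  # roughly the 21M listed above (the backbone has no classification head)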
This model was contributed by heytanay. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with PoolFormer.
Image Classification
PoolFormerForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
PoolFormerConfig
class transformers.PoolFormerConfig
(
num_channels = 3
patch_size = 16
stride = 16
pool_size = 3
mlp_ratio = 4.0
depths = [2, 2, 6, 2]
hidden_sizes = [64, 128, 320, 512]
patch_sizes = [7, 3, 3, 3]
strides = [4, 2, 2, 2]
padding = [2, 1, 1, 1]
num_encoder_blocks = 4
drop_path_rate = 0.0
hidden_act = 'gelu'
use_layer_scale = True
layer_scale_init_value = 1e-05
initializer_range = 0.02
**kwargs
)
Parameters
num_channels (int, optional, defaults to 3) —
The number of channels in the input image.
patch_size (int, optional, defaults to 16) —
The size of the input patch.
stride (int, optional, defaults to 16) —
The stride of the input patch.
pool_size (int, optional, defaults to 3) —
The size of the pooling window.
mlp_ratio (float, optional, defaults to 4.0) —
The ratio of the number of channels in the output of the MLP to the number of channels in the input.
depths (list, optional, defaults to [2, 2, 6, 2]) —
The depth of each encoder block.
hidden_sizes (list, optional, defaults to [64, 128, 320, 512]) —
The hidden sizes of each encoder block.
patch_sizes (list, optional, defaults to [7, 3, 3, 3]) —
The size of the input patch for each encoder block.
strides (list, optional, defaults to [4, 2, 2, 2]) —
The stride of the input patch for each encoder block.
padding (list, optional, defaults to [2, 1, 1, 1]) —
The padding of the input patch for each encoder block.
num_encoder_blocks (int, optional, defaults to 4) —
The number of encoder blocks.
drop_path_rate (float, optional, defaults to 0.0) —
The dropout rate for the stochastic depth (drop path) layers.
hidden_act (str, optional, defaults to "gelu") —
The activation function for the hidden layers.
use_layer_scale (bool, optional, defaults to True) —
Whether to use layer scale.
layer_scale_init_value (float, optional, defaults to 1e-5) —
The initial value for the layer scale.
initializer_range (float, optional, defaults to 0.02) —
The initializer range for the weights.
This is the configuration class to store the configuration of PoolFormerModel. It is used to instantiate a
PoolFormer model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the PoolFormer
sail/poolformer_s12 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import PoolFormerConfig, PoolFormerModel
# Initializing a PoolFormer sail/poolformer_s12 style configuration
configuration = PoolFormerConfig()
# Initializing a model (with random weights) from the sail/poolformer_s12 style configuration
model = PoolFormerModel(configuration)
# Accessing the model configuration
configuration = model.config
PoolFormerFeatureExtractor
class transformers.PoolFormerFeatureExtractor
(
*args
**kwargs
)
__call__
(
images
**kwargs
)
Preprocess an image or a batch of images.
PoolFormerImageProcessor
class transformers.PoolFormerImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = None
crop_pct: int = 0.9
resample: Resampling = <Resampling.BICUBIC: 3>
do_center_crop: bool = True
crop_size: typing.Dict[str, int] = None
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_rescale: bool = True
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by
do_resize in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 224}) —
Size of the image after resizing. Can be overridden by size in the preprocess method. If crop_pct is
unset:
size is {"height": h, "width": w}: the image is resized to (h, w).
size is {"shortest_edge": s}: the shortest edge of the image is resized to s whilst maintaining the
aspect ratio.
If crop_pct is set:
size is {"height": h, "width": w}: the image is resized to (int(floor(h/crop_pct)), int(floor(w/crop_pct))).
size is {"height": c, "width": c}: the shortest edge of the image is resized to int(floor(c/crop_pct))
whilst maintaining the aspect ratio.
size is {"shortest_edge": c}: the shortest edge of the image is resized to int(floor(c/crop_pct))
whilst maintaining the aspect ratio.
crop_pct (float, optional, defaults to 0.9) —
Percentage of the image to crop from the center. Can be overridden by crop_pct in the preprocess
method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) —
Resampling filter to use if resizing the image. Can be overridden by resample in the preprocess method.
do_center_crop (bool, optional, defaults to True) —
Whether to center crop the image. If the input size is smaller than crop_size along any edge, the image
is padded with 0’s and then center cropped. Can be overridden by do_center_crop in the preprocess
method.
crop_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) —
Size of the image after applying center crop. Only has an effect if do_center_crop is set to True. Can
be overridden by the crop_size parameter in the preprocess method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the
preprocess method.
do_normalize (bool, optional, defaults to True) —
Controls whether to normalize the image. Can be overridden by the do_normalize parameter in the
preprocess method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
Constructs a PoolFormer image processor.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: bool = None
size: typing.Dict[str, int] = None
crop_pct: int = None
resample: Resampling = None
do_center_crop: bool = None
crop_size: typing.Dict[str, int] = None
do_rescale: bool = None
rescale_factor: float = None
do_normalize: bool = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after applying resize.
crop_pct (float, optional, defaults to self.crop_pct) —
Percentage of the image to crop. Only has an effect if do_resize is set to True.
resample (int, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only
has an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the image after applying center crop.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values to the range [0, 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess an image or batch of images.
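A minimal sketch of the preprocessing pipeline described above, run on a random image; the sail/poolformer_s12 checkpoint name is taken from the examples below, and the printed shape assumes the default 224x224 crop:
import numpy as np
from PIL import Image
from transformers import PoolFormerImageProcessor
image_processor = PoolFormerImageProcessor.from_pretrained("sail/poolformer_s12")
image = Image.fromarray(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))
inputs = image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224]) with the default crop_size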
PoolFormerModel
class transformers.PoolFormerModel
(
config
)
Parameters
config (PoolFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare PoolFormer Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
PoolFormerImageProcessor.__call__() for details.
Returns
transformers.modeling_outputs.BaseModelOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PoolFormerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The PoolFormerModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, PoolFormerModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("sail/poolformer_s12")
model = PoolFormerModel.from_pretrained("sail/poolformer_s12")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 512, 7, 7]
PoolFormerForImageClassification
class transformers.PoolFormerForImageClassification
(
config
)
Parameters
config (PoolFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
PoolFormer Model transformer with an image classification head on top
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
PoolFormerImageProcessor.__call__() for details.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss). If
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PoolFormerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
The PoolFormerForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, PoolFormerForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("sail/poolformer_s12")
model = PoolFormerForImageClassification.from_pretrained("sail/poolformer_s12")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
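Passing integer class labels alongside the pixel values makes the forward pass also return the loss described for the labels argument above. A minimal sketch of that path; the label index is illustrative (281 corresponds to "tabby, tabby cat" in the standard ImageNet-1k mapping):
import torch
from datasets import load_dataset
from transformers import AutoImageProcessor, PoolFormerForImageClassification

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

image_processor = AutoImageProcessor.from_pretrained("sail/poolformer_s12")
model = PoolFormerForImageClassification.from_pretrained("sail/poolformer_s12")

inputs = image_processor(image, return_tensors="pt")
labels = torch.tensor([281])  # illustrative target class index
outputs = model(**inputs, labels=labels)  # cross-entropy loss since config.num_labels > 1
print(outputs.loss, outputs.logits.shape)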
PEGASUS-X
Overview
The PEGASUS-X model was proposed in Investigating Efficiently Extending Transformers for Long Input Summarization by Jason Phang, Yao Zhao and Peter J. Liu.
PEGASUS-X (PEGASUS eXtended) extends the PEGASUS models for long input summarization through additional long input pretraining and using staggered block-local attention with global tokens in the encoder.
The abstract from the paper is the following:
While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs continues to be a significant challenge. One such task is long input summarization, where inputs are longer than the maximum input context of most pretrained models. Through an extensive set of experiments, we investigate what model architectural changes and pretraining paradigms can most efficiently adapt a pretrained Transformer for long input summarization. We find that a staggered, block-local Transformer with global encoder tokens strikes a good balance of performance and efficiency, and that an additional pretraining phase on long sequences meaningfully improves downstream summarization performance. Based on our findings, we introduce PEGASUS-X, an extension of the PEGASUS model with additional long input pretraining to handle inputs of up to 16K tokens. PEGASUS-X achieves strong performance on long input summarization tasks comparable with much larger models while adding few additional parameters and not requiring model parallelism to train.
Tips:
PEGASUS-X uses the same tokenizer as PEGASUS.
This model was contributed by zphang. The original code can be found here.
Documentation resources
Translation task guide
Summarization task guide
PegasusXConfig
class transformers.PegasusXConfig
(
vocab_size = 96103
max_position_embeddings = 16384
encoder_layers = 16
encoder_ffn_dim = 4096
encoder_attention_heads = 16
decoder_layers = 16
decoder_ffn_dim = 4096
decoder_attention_heads = 16
encoder_layerdrop = 0.0
decoder_layerdrop = 0.0
use_cache = True
is_encoder_decoder = True
activation_function = 'gelu'
d_model = 1024
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
decoder_start_token_id = 0
scale_embedding = True
pad_token_id = 0
eos_token_id = 1
forced_eos_token_id = 1
num_global_tokens = 32
block_size = 512
stagger_local_blocks = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 96103) —
Vocabulary size of the PEGASUS-X model. Defines the number of different tokens that can be represented by
the input_ids passed when calling PegasusXModel.
d_model (int, optional, defaults to 1024) —
Dimension of the layers and the pooler layer.
encoder_layers (int, optional, defaults to 16) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 16) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 4096) —
Dimension of the “intermediate” (often named feed-forward) layer in the decoder.
encoder_ffn_dim (int, optional, defaults to 4096) —
Dimension of the “intermediate” (often named feed-forward) layer in the encoder.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
max_position_embeddings (int, optional, defaults to 16384) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
forced_eos_token_id (int, optional, defaults to 1) —
The id of the token to force as the last generated token when max_length is reached. Usually set to
eos_token_id.
num_global_tokens (int, optional, defaults to 32) —
Number of global tokens to use for the encoder.
block_size (int, optional, defaults to 512) —
Block size for encoder local attention. Sequence length should be an exact multiple of block size.
block_size must be a multiple of 2 if stagger_local_blocks is True.
stagger_local_blocks (bool, optional, defaults to True) —
Whether to stagger every other local attention block by half a block.
This is the configuration class to store the configuration of a PegasusXModel. It is used to instantiate a
PEGASUS-X model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the PEGASUS-X
google/pegasus-x-large architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import PegasusXConfig, PegasusXModel
# Initializing a PEGASUS google/pegasus-x-large style configuration
configuration = PegasusXConfig()
# Initializing a model (with random weights) from the google/pegasus-x-large style configuration
model = PegasusXModel(configuration)
# Accessing the model configuration
configuration = model.config
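The long-input attention settings documented above can also be overridden when building the configuration. A minimal sketch with purely illustrative values; recall that block_size must stay a multiple of 2 when stagger_local_blocks is True:
from transformers import PegasusXConfig, PegasusXForConditionalGeneration

# Illustrative overrides of the block-local attention settings (not the pretrained defaults)
config = PegasusXConfig(
    max_position_embeddings=8192,
    block_size=256,
    num_global_tokens=64,
    stagger_local_blocks=True,
)
model = PegasusXForConditionalGeneration(config)  # randomly initialized weights
print(config.block_size, config.num_global_tokens)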
PegasusXModel
class transformers.PegasusXModel
(
config: PegasusXConfig
)
Parameters
config (PegasusXConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare PEGASUS-X Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.Tensor] = None
decoder_attention_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
PEGASUS-X uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PegasusXConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The PegasusXModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, PegasusXModel
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-x-large")
model = PegasusXModel.from_pretrained("google/pegasus-x-large")
inputs = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt")
decoder_inputs = tokenizer("Studies show that", return_tensors="pt")
outputs = model(input_ids=inputs.input_ids, decoder_input_ids=decoder_inputs.input_ids)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 4, 1024]
PegasusXForConditionalGeneration
class transformers.PegasusXForConditionalGeneration
(
config: PegasusXConfig
)
Parameters
config (PegasusXConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The PEGASUS-X model for conditional generation (e.g. summarization).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.Tensor] = None
decoder_attention_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
PEGASUS-X uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PegasusXConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The PegasusXForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Summarization example:
from transformers import AutoTokenizer, PegasusXForConditionalGeneration
model = PegasusXForConditionalGeneration.from_pretrained("google/pegasus-x-base")
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-x-large")
ARTICLE_TO_SUMMARIZE = (
... "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
... "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were "
... "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
... )
inputs = tokenizer(ARTICLE_TO_SUMMARIZE, max_length=1024, return_tensors="pt")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"])
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"California's largest electricity provider has turned off power to hundreds of thousands of customers."
BARThez
Overview
The BARThez model was proposed in BARThez: a Skilled Pretrained French Sequence-to-Sequence Model by Moussa Kamal Eddine, Antoine J.-P. Tixier, and Michalis Vazirgiannis on October 23, 2020.
The abstract of the paper:
Inductive transfer learning, enabled by self-supervised learning, have taken the entire Natural Language Processing
(NLP) field by storm, with models such as BERT and BART setting new state of the art on countless natural language
understanding tasks. While there are some notable exceptions, most of the available models and research have been
conducted for the English language. In this work, we introduce BARThez, the first BART model for the French language
(to the best of our knowledge). BARThez was pretrained on a very large monolingual French corpus from past research
that we adapted to suit BART’s perturbation schemes. Unlike already existing BERT-based French language models such as
CamemBERT and FlauBERT, BARThez is particularly well-suited for generative tasks, since not only its encoder but also
its decoder is pretrained. In addition to discriminative tasks from the FLUE benchmark, we evaluate BARThez on a novel
summarization dataset, OrangeSum, that we release with this paper. We also continue the pretraining of an already
pretrained multilingual BART on BARThez’s corpus, and we show that the resulting model, which we call mBARTHez,
provides a significant boost over vanilla BARThez, and is on par with or outperforms CamemBERT and FlauBERT.
This model was contributed by moussakam. The Authors’ code can be found here.
Examples
BARThez can be fine-tuned on sequence-to-sequence tasks in a similar way as BART, check:
examples/pytorch/summarization/.
BarthezTokenizer
class transformers.BarthezTokenizer
(
vocab_file
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) —
Additional special tokens used by the tokenizer.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
sp_model (SentencePieceProcessor) —
The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Adapted from CamembertTokenizer and BartTokenizer. Construct a BARThez tokenizer. Based on
SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A BARThez sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
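A minimal sketch of both formats, assuming the moussakam/barthez checkpoint (any BARThez checkpoint with its SentencePiece vocabulary should behave the same way):
from transformers import BarthezTokenizer

tokenizer = BarthezTokenizer.from_pretrained("moussakam/barthez")  # assumed checkpoint name

ids_a = tokenizer("Bonjour le monde", add_special_tokens=False).input_ids
ids_b = tokenizer("Comment allez-vous ?", add_special_tokens=False).input_ids

single = tokenizer.build_inputs_with_special_tokens(ids_a)         # <s> A </s>
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)    # <s> A </s></s> B </s>
print(tokenizer.convert_ids_to_tokens(single))
print(tokenizer.convert_ids_to_tokens(pair))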
convert_tokens_to_string
(
tokens
)
Converts a sequence of tokens (strings) into a single string.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task.
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
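A minimal sketch, again assuming the moussakam/barthez checkpoint, where the returned mask flags <s> and </s> with 1:
from transformers import BarthezTokenizer

tokenizer = BarthezTokenizer.from_pretrained("moussakam/barthez")  # assumed checkpoint name
ids = tokenizer("Bonjour", add_special_tokens=True).input_ids
print(tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True))
# e.g. [1, 0, 1] — 1 for <s> and </s>, 0 for the regular token(s)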
BarthezTokenizerFast
class transformers.BarthezTokenizerFast
(
vocab_file = None
tokenizer_file = None
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) —
Additional special tokens used by the tokenizer.
Adapted from CamembertTokenizer and BartTokenizer. Construct a “fast” BARThez tokenizer. Based on
SentencePiece.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A BARThez sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task.
CLIPSeg
Overview
The CLIPSeg model was proposed in Image Segmentation Using Text and Image Prompts by Timo Lüddecke
and Alexander Ecker. CLIPSeg adds a minimal decoder on top of a frozen CLIP model for zero- and one-shot image segmentation.
The abstract from the paper is the following:
Image segmentation is usually addressed by training a model for a fixed set of object classes. Incorporating additional classes or more complex queries later is expensive as it requires re-training the model on a dataset that encompasses these expressions. Here we propose a system that can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text or an image. This approach enables us to create a unified model (trained once) for three common segmentation tasks, which come with distinct challenges: referring expression segmentation, zero-shot segmentation and one-shot segmentation. We build upon the CLIP model as a backbone which we extend with a transformer-based decoder that enables dense prediction. After training on an extended version of the PhraseCut dataset, our system generates a binary segmentation map for an image based on a free-text prompt or on an additional image expressing the query. We analyze different variants of the latter image-based prompts in detail. This novel hybrid input allows for dynamic adaptation not only to the three segmentation tasks mentioned above, but to any binary segmentation task where a text or image query can be formulated. Finally, we find our system to adapt well to generalized queries involving affordances or properties.
Tips:
CLIPSegForImageSegmentation adds a decoder on top of CLIPSegModel. The latter is identical to CLIPModel.
CLIPSegForImageSegmentation can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text
(provided to the model as input_ids) or an image (provided to the model as conditional_pixel_values). One can also provide custom
conditional embeddings (provided to the model as conditional_embeddings).
CLIPSeg overview. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
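A minimal sketch of the text-prompt path described in the tips above, using the CIDAS/clipseg-rd64-refined checkpoint referenced later on this page; the prompts and the printed shape are illustrative, and the model returns one segmentation logit map per (image, prompt) pair:
from PIL import Image
import requests
import torch
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

prompts = ["a cat", "a remote control"]
inputs = processor(text=prompts, images=[image] * len(prompts), return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.shape)  # e.g. torch.Size([2, 352, 352]), one low-resolution mask per prompt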
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIPSeg. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Image Segmentation
A notebook that illustrates zero-shot image segmentation with CLIPSeg.
CLIPSegConfig
class transformers.CLIPSegConfig
(
text_config = None
vision_config = None
projection_dim = 512
logit_scale_init_value = 2.6592
extract_layers = [3, 6, 9]
reduce_dim = 64
decoder_num_attention_heads = 4
decoder_attention_dropout = 0.0
decoder_hidden_act = 'quick_gelu'
decoder_intermediate_size = 2048
conditional_layer = 0
use_complex_transposed_convolution = False
**kwargs
)
Parameters
text_config (dict, optional) —
Dictionary of configuration options used to initialize CLIPSegTextConfig.
vision_config (dict, optional) —
Dictionary of configuration options used to initialize CLIPSegVisionConfig.
projection_dim (int, optional, defaults to 512) —
Dimensionality of text and vision projection layers.
logit_scale_init_value (float, optional, defaults to 2.6592) —
The initial value of the logit_scale parameter. The default is used as per the original CLIPSeg implementation.
extract_layers (List[int], optional, defaults to [3, 6, 9]) —
Layers to extract when forwarding the query image through the frozen visual backbone of CLIP.
reduce_dim (int, optional, defaults to 64) —
Dimensionality to reduce the CLIP vision embedding.
decoder_num_attention_heads (int, optional, defaults to 4) —
Number of attention heads in the decoder of CLIPSeg.
decoder_attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
decoder_hidden_act (str or function, optional, defaults to "quick_gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" `"quick_gelu" are supported.
decoder_intermediate_size (int, optional, defaults to 2048) —
Dimensionality of the “intermediate” (i.e., feed-forward) layers in the Transformer decoder.
conditional_layer (int, optional, defaults to 0) —
The layer of the Transformer encoder whose activations will be combined with the condition
embeddings using FiLM (Feature-wise Linear Modulation). If 0, the last layer is used.
use_complex_transposed_convolution (bool, optional, defaults to False) —
Whether to use a more complex transposed convolution in the decoder, enabling more fine-grained
segmentation.
kwargs (optional) —
Dictionary of keyword arguments.
CLIPSegConfig is the configuration class to store the configuration of a CLIPSegModel. It is used to
instantiate a CLIPSeg model according to the specified arguments, defining the text model and vision model configs.
Instantiating a configuration with the defaults will yield a similar configuration to that of the CLIPSeg
CIDAS/clipseg-rd64 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import CLIPSegConfig, CLIPSegModel
# Initializing a CLIPSegConfig with CIDAS/clipseg-rd64 style configuration
configuration = CLIPSegConfig()
# Initializing a CLIPSegModel (with random weights) from the CIDAS/clipseg-rd64 style configuration
model = CLIPSegModel(configuration)
# Accessing the model configuration
configuration = model.config
# We can also initialize a CLIPSegConfig from a CLIPSegTextConfig and a CLIPSegVisionConfig
# Initializing a CLIPSegText and CLIPSegVision configuration
config_text = CLIPSegTextConfig()
config_vision = CLIPSegVisionConfig()
config = CLIPSegConfig.from_text_vision_configs(config_text, config_vision)
from_text_vision_configs
(
text_config: CLIPSegTextConfig
vision_config: CLIPSegVisionConfig
**kwargs
)
→
CLIPSegConfig
Returns
CLIPSegConfig
An instance of a configuration object
Instantiate a CLIPSegConfig (or a derived class) from clipseg text model configuration and clipseg vision
model configuration.
CLIPSegTextConfig
class transformers.CLIPSegTextConfig
(
vocab_size = 49408
hidden_size = 512
intermediate_size = 2048
num_hidden_layers = 12
num_attention_heads = 8
max_position_embeddings = 77
hidden_act = 'quick_gelu'
layer_norm_eps = 1e-05
attention_dropout = 0.0
initializer_range = 0.02
initializer_factor = 1.0
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 49408) —
Vocabulary size of the CLIPSeg text model. Defines the number of different tokens that can be represented
by the input_ids passed when calling CLIPSegModel.
hidden_size (int, optional, defaults to 512) —
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 2048) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer encoder.
max_position_embeddings (int, optional, defaults to 77) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
hidden_act (str or function, optional, defaults to "quick_gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" `"quick_gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (float, optional, defaults to 1.0) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
This is the configuration class to store the configuration of a CLIPSegModel. It is used to instantiate a
CLIPSeg model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the CLIPSeg
CIDAS/clipseg-rd64 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import CLIPSegTextConfig, CLIPSegTextModel
# Initializing a CLIPSegTextConfig with CIDAS/clipseg-rd64 style configuration
configuration = CLIPSegTextConfig()
# Initializing a CLIPSegTextModel (with random weights) from the CIDAS/clipseg-rd64 style configuration
model = CLIPSegTextModel(configuration)
# Accessing the model configuration
configuration = model.config
CLIPSegVisionConfig
class transformers.CLIPSegVisionConfig
(
hidden_size = 768
intermediate_size = 3072
num_hidden_layers = 12
num_attention_heads = 12
num_channels = 3
image_size = 224
patch_size = 32
hidden_act = 'quick_gelu'
layer_norm_eps = 1e-05
attention_dropout = 0.0
initializer_range = 0.02
initializer_factor = 1.0
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 32) —
The size (resolution) of each patch.
hidden_act (str or function, optional, defaults to "quick_gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" `"quick_gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (float, optional, defaults to 1.0) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
This is the configuration class to store the configuration of a CLIPSegModel. It is used to instantiate a
CLIPSeg model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the CLIPSeg
CIDAS/clipseg-rd64 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import CLIPSegVisionConfig, CLIPSegVisionModel
# Initializing a CLIPSegVisionConfig with CIDAS/clipseg-rd64 style configuration
configuration = CLIPSegVisionConfig()
# Initializing a CLIPSegVisionModel (with random weights) from the CIDAS/clipseg-rd64 style configuration
model = CLIPSegVisionModel(configuration)
# Accessing the model configuration
configuration = model.config
CLIPSegProcessor
class transformers.CLIPSegProcessor
(
image_processor = None
tokenizer = None
**kwargs
)
Parameters
image_processor (ViTImageProcessor) —
The image processor is a required input.
tokenizer (CLIPTokenizerFast) —
The tokenizer is a required input.
Constructs a CLIPSeg processor which wraps a CLIPSeg image processor and a CLIP tokenizer into a single processor.
CLIPSegProcessor offers all the functionalities of ViTImageProcessor and CLIPTokenizerFast. See the
__call__() and decode() for more information.
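A minimal sketch of a combined call, using the CIDAS/clipseg-rd64-refined checkpoint referenced below; the processor tokenizes the text and preprocesses the image in one pass, returning the tensors the model expects (the prompt and URL are illustrative):
from PIL import Image
import requests
from transformers import CLIPSegProcessor

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["a photo of a cat"], images=image, return_tensors="pt", padding=True)
print(sorted(inputs.keys()))  # e.g. ['attention_mask', 'input_ids', 'pixel_values']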
batch_decode
(
*args
**kwargs
)
This method forwards all its arguments to CLIPTokenizerFast’s batch_decode(). Please
refer to the docstring of this method for more information.
decode
(
*args
**kwargs
)
This method forwards all its arguments to CLIPTokenizerFast’s decode(). Please refer to
the docstring of this method for more information.
CLIPSegModel
class transformers.CLIPSegModel
(
config: CLIPSegConfig
)
Parameters
config (CLIPSegConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
return_loss: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.clipseg.modeling_clipseg.CLIPSegOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.__call__() for details.
return_loss (bool, optional) —
Whether or not to return the contrastive loss.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.clipseg.modeling_clipseg.CLIPSegOutput or tuple(torch.FloatTensor)
A transformers.models.clipseg.modeling_clipseg.CLIPSegOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.clipseg.configuration_clipseg.CLIPSegConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores.
logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores.
text_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of CLIPSegTextModel.
image_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of CLIPSegVisionModel.
text_model_output (BaseModelOutputWithPooling) — The output of the CLIPSegTextModel.
vision_model_output (BaseModelOutputWithPooling) — The output of the CLIPSegVisionModel.
The CLIPSegModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, CLIPSegModel
processor = AutoProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegModel.from_pretrained("CIDAS/clipseg-rd64-refined")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
... )
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
get_text_features
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
text_features (torch.FloatTensor of shape (batch_size, output_dim))
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
text_features (torch.FloatTensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of CLIPSegTextModel.
The CLIPSegModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, CLIPSegModel
tokenizer = AutoTokenizer.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegModel.from_pretrained("CIDAS/clipseg-rd64-refined")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
get_image_features
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
image_features (torch.FloatTensor of shape (batch_size, output_dim))
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
image_features (torch.FloatTensor of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of CLIPSegVisionModel.
The CLIPSegModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, CLIPSegModel
processor = AutoProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegModel.from_pretrained("CIDAS/clipseg-rd64-refined")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
CLIPSegTextModel
class transformers.CLIPSegTextModel
(
config: CLIPSegTextConfig
)
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.clipseg.configuration_clipseg.CLIPSegTextConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CLIPSegTextModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
Copied
from transformers import AutoTokenizer, CLIPSegTextModel
tokenizer = AutoTokenizer.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegTextModel.from_pretrained("CIDAS/clipseg-rd64-refined")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled (EOS token) states
CLIPSegVisionModel
class transformers.CLIPSegVisionModel
(
config: CLIPSegVisionConfig
)
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.clipseg.configuration_clipseg.CLIPSegVisionConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CLIPSegVisionModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, CLIPSegVisionModel
processor = AutoProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegVisionModel.from_pretrained("CIDAS/clipseg-rd64-refined")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled CLS states
CLIPSegForImageSegmentation
class transformers.CLIPSegForImageSegmentation
(
config: CLIPSegConfig
)
Parameters
config (CLIPSegConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CLIPSeg model with a Transformer-based decoder on top for zero-shot and one-shot image segmentation.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.FloatTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
conditional_pixel_values: typing.Optional[torch.FloatTensor] = None
conditional_embeddings: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.clipseg.modeling_clipseg.CLIPSegImageSegmentationOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.__call__() for details.
return_loss (bool, optional) —
Whether or not to return the contrastive loss.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.models.clipseg.modeling_clipseg.CLIPSegImageSegmentationOutput or tuple(torch.FloatTensor)
A transformers.models.clipseg.modeling_clipseg.CLIPSegImageSegmentationOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.clipseg.configuration_clipseg.CLIPSegTextConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
…
vision_model_output (BaseModelOutputWithPooling) — The output of the CLIPSegVisionModel.
The CLIPSegForImageSegmentation forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoProcessor, CLIPSegForImageSegmentation
from PIL import Image
import requests
processor = AutoProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a cat", "a remote", "a blanket"]
inputs = processor(text=texts, images=[image] * len(texts), padding=True, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
print(logits.shape)
torch.Size([3, 352, 352])
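As a rough follow-up sketch (continuing the example above; the bilinear resize and the 0.5 threshold are arbitrary choices, not part of the model), the per-prompt logits can be turned into binary masks:
import torch
probs = torch.sigmoid(outputs.logits)  # (3, 352, 352): per-pixel probabilities, one map per prompt
# resize back to the original image resolution; image.size is (width, height)
probs = torch.nn.functional.interpolate(probs.unsqueeze(1), size=image.size[::-1], mode="bilinear")
masks = probs.squeeze(1) > 0.5  # boolean masks of shape (3, height, width)
print(masks.shape)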
LayoutXLM
Overview
LayoutXLM was proposed in LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha
Zhang, Furu Wei. It’s a multilingual extension of the LayoutLMv2 model trained
on 53 languages.
The abstract from the paper is the following:
Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually-rich document
understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. In
this paper, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to
bridge the language barriers for visually-rich document understanding. To accurately evaluate LayoutXLM, we also
introduce a multilingual form understanding benchmark dataset named XFUN, which includes form understanding samples in
7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese), and key-value pairs are manually labeled
for each language. Experiment results show that the LayoutXLM model has significantly outperformed the existing SOTA
cross-lingual pre-trained models on the XFUN dataset.
One can directly plug in the weights of LayoutXLM into a LayoutLMv2 model, like so:
from transformers import LayoutLMv2Model
model = LayoutLMv2Model.from_pretrained("microsoft/layoutxlm-base")
Note that LayoutXLM has its own tokenizer, based on
LayoutXLMTokenizer/LayoutXLMTokenizerFast. You can initialize it as
follows:
from transformers import LayoutXLMTokenizer
tokenizer = LayoutXLMTokenizer.from_pretrained("microsoft/layoutxlm-base")
Similar to LayoutLMv2, you can use LayoutXLMProcessor (which internally applies
LayoutLMv2ImageProcessor and
LayoutXLMTokenizer/LayoutXLMTokenizerFast in sequence) to prepare all
data for the model.
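For example (a minimal sketch; the full API is described in the LayoutXLMProcessor section below, and the checkpoint is assumed to ship both an image processor and a tokenizer configuration):
from transformers import LayoutXLMProcessor
processor = LayoutXLMProcessor.from_pretrained("microsoft/layoutxlm-base")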
As LayoutXLM’s architecture is equivalent to that of LayoutLMv2, one can refer to LayoutLMv2’s documentation page for all tips, code examples and notebooks.
This model was contributed by nielsr. The original code can be found here.
LayoutXLMTokenizer
class transformers.LayoutXLMTokenizer
(
vocab_file
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
cls_token_box = [0, 0, 0, 0]
sep_token_box = [1000, 1000, 1000, 1000]
pad_token_box = [0, 0, 0, 0]
pad_token_label = -100
only_label_first_subword = True
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
cls_token_box (List[int], optional, defaults to [0, 0, 0, 0]) —
The bounding box to use for the special [CLS] token.
sep_token_box (List[int], optional, defaults to [1000, 1000, 1000, 1000]) —
The bounding box to use for the special [SEP] token.
pad_token_box (List[int], optional, defaults to [0, 0, 0, 0]) —
The bounding box to use for the special [PAD] token.
pad_token_label (int, optional, defaults to -100) —
The label to use for padding tokens. Defaults to -100, which is the ignore_index of PyTorch’s
CrossEntropyLoss.
only_label_first_subword (bool, optional, defaults to True) —
Whether or not to only label the first subword, in case word labels are provided.
additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) —
Additional special tokens used by the tokenizer.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
sp_model (SentencePieceProcessor) —
The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Adapted from RobertaTokenizer and XLNetTokenizer. Based on
SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
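As an illustrative sketch of the sp_model_kwargs argument described above (the sampling values are arbitrary and only meant to show the plumbing, not a recommended setting):
from transformers import LayoutXLMTokenizer
# the kwargs are forwarded to the underlying SentencePieceProcessor
tokenizer = LayoutXLMTokenizer.from_pretrained(
    "microsoft/layoutxlm-base",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)
# tokenize() goes through SentencePiece, so repeated calls may yield different subword splits
print(tokenizer.tokenize("LayoutXLM is multilingual"))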
__call__
(
text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]]
text_pair: typing.Union[typing.List[str], typing.List[typing.List[str]], NoneType] = None
boxes: typing.Union[typing.List[typing.List[int]], typing.List[typing.List[typing.List[int]]]] = None
word_labels: typing.Union[typing.List[int], typing.List[typing.List[int]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
**kwargs
)
→
BatchEncoding
Parameters
text (str, List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence can be a string, a list of strings
(words of a single example or questions of a batch of examples) or a list of list of strings (batch of
words).
text_pair (List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence should be a list of strings
(pretokenized string).
boxes (List[List[int]], List[List[List[int]]]) —
Word-level bounding boxes. Each bounding box should be normalized to be on a 0-1000 scale.
word_labels (List[int], List[List[int]], optional) —
Word-level integer labels (for token classification tasks such as FUNSD, CORD).
add_special_tokens (bool, optional, defaults to True) —
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) —
If set to a number along with max_length, the overflowing tokens returned when
return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) —
Whether to return token type IDs. If left to the default, will return the token type IDs according to
the specific tokenizer’s default, defined by the return_outputs attribute.
What are token type IDs?
return_attention_mask (bool, optional) —
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_overflowing_tokens (bool, optional, defaults to False) —
Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch
of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead
of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) —
Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) —
Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using
Python's tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) —
Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) —
Whether or not to print more information and warnings.
**kwargs — passed to the self.tokenize() method
Returns
BatchEncoding
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
What are input IDs?
bbox — List of bounding boxes to be fed to a model.
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or
if “token_type_ids” is in self.model_input_names).
What are token type IDs?
attention_mask — List of indices specifying which tokens should be attended to by the model (when
return_attention_mask=True or if “attention_mask” is in self.model_input_names).
What are attention masks?
labels — List of labels to be fed to a model (when word_labels is specified).
overflowing_tokens — List of overflowing tokens sequences (when a max_length is specified and
return_overflowing_tokens=True).
num_truncated_tokens — Number of tokens truncated (when a max_length is specified and
return_overflowing_tokens=True).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying
regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length — The length of the inputs (when return_length=True).
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences with word-level normalized bounding boxes and optional labels.
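A minimal usage sketch (the words and the 0-1000 normalized boxes are made up for illustration):
from transformers import LayoutXLMTokenizer
tokenizer = LayoutXLMTokenizer.from_pretrained("microsoft/layoutxlm-base")
words = ["hello", "world"]
boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]  # one normalized box per word
encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
print(encoding.keys())  # typically input_ids, attention_mask and bbox (one box per token)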
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. An XLM-RoBERTa sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
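For illustration (a small self-contained sketch; the input strings are arbitrary):
from transformers import LayoutXLMTokenizer
tokenizer = LayoutXLMTokenizer.from_pretrained("microsoft/layoutxlm-base")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("world"))
print(tokenizer.build_inputs_with_special_tokens(ids_a))         # <s> X </s>
print(tokenizer.build_inputs_with_special_tokens(ids_a, ids_b))  # <s> A </s></s> B </s>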
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does
not make use of token type ids, therefore a list of zeros is returned.
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
LayoutXLMTokenizerFast
class transformers.LayoutXLMTokenizerFast
(
vocab_file = None
tokenizer_file = None
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
cls_token_box = [0, 0, 0, 0]
sep_token_box = [1000, 1000, 1000, 1000]
pad_token_box = [0, 0, 0, 0]
pad_token_label = -100
only_label_first_subword = True
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
cls_token_box (List[int], optional, defaults to [0, 0, 0, 0]) —
The bounding box to use for the special [CLS] token.
sep_token_box (List[int], optional, defaults to [1000, 1000, 1000, 1000]) —
The bounding box to use for the special [SEP] token.
pad_token_box (List[int], optional, defaults to [0, 0, 0, 0]) —
The bounding box to use for the special [PAD] token.
pad_token_label (int, optional, defaults to -100) —
The label to use for padding tokens. Defaults to -100, which is the ignore_index of PyTorch’s
CrossEntropyLoss.
only_label_first_subword (bool, optional, defaults to True) —
Whether or not to only label the first subword, in case word labels are provided.
additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) —
Additional special tokens used by the tokenizer.
Construct a “fast” LayoutXLM tokenizer (backed by HuggingFace’s tokenizers library). Adapted from
RobertaTokenizer and XLNetTokenizer. Based on
BPE.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
__call__
(
text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]]
text_pair: typing.Union[typing.List[str], typing.List[typing.List[str]], NoneType] = None
boxes: typing.Union[typing.List[typing.List[int]], typing.List[typing.List[typing.List[int]]]] = None
word_labels: typing.Union[typing.List[int], typing.List[typing.List[int]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
**kwargs
)
→
BatchEncoding
Parameters
text (str, List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence can be a string, a list of strings
(words of a single example or questions of a batch of examples) or a list of list of strings (batch of
words).
text_pair (List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence should be a list of strings
(pretokenized string).
boxes (List[List[int]], List[List[List[int]]]) —
Word-level bounding boxes. Each bounding box should be normalized to be on a 0-1000 scale.
word_labels (List[int], List[List[int]], optional) —
Word-level integer labels (for token classification tasks such as FUNSD, CORD).
add_special_tokens (bool, optional, defaults to True) —
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) —
If set to a number along with max_length, the overflowing tokens returned when
return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) —
Whether to return token type IDs. If left to the default, will return the token type IDs according to
the specific tokenizer’s default, defined by the return_outputs attribute.
What are token type IDs?
return_attention_mask (bool, optional) —
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_overflowing_tokens (bool, optional, defaults to False) —
Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch
of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead
of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) —
Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) —
Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using
Python's tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) —
Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) —
Whether or not to print more information and warnings.
**kwargs — passed to the self.tokenize() method
Returns
BatchEncoding
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
What are input IDs?
bbox — List of bounding boxes to be fed to a model.
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or
if “token_type_ids” is in self.model_input_names).
What are token type IDs?
attention_mask — List of indices specifying which tokens should be attended to by the model (when
return_attention_mask=True or if “attention_mask” is in self.model_input_names).
What are attention masks?
labels — List of labels to be fed to a model (when word_labels is specified).
overflowing_tokens — List of overflowing tokens sequences (when a max_length is specified and
return_overflowing_tokens=True).
num_truncated_tokens — Number of tokens truncated (when a max_length is specified and
return_overflowing_tokens=True).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying
regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length — The length of the inputs (when return_length=True).
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences with word-level normalized bounding boxes and optional labels.
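A short sketch of a fast-tokenizer-only feature mentioned above (the words and boxes are again made up), requesting the character offsets of each token via return_offsets_mapping:
from transformers import LayoutXLMTokenizerFast
tokenizer = LayoutXLMTokenizerFast.from_pretrained("microsoft/layoutxlm-base")
words = ["hello", "world"]
boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]
encoding = tokenizer(words, boxes=boxes, return_offsets_mapping=True)
# (char_start, char_end) per token; only available on fast tokenizers
print(encoding["offset_mapping"])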
LayoutXLMProcessor
class transformers.LayoutXLMProcessor
(
image_processor = None
tokenizer = None
**kwargs
)
Parameters
image_processor (LayoutLMv2ImageProcessor) —
An instance of LayoutLMv2ImageProcessor. The image processor is a required input.
tokenizer (LayoutXLMTokenizer or LayoutXLMTokenizerFast) —
An instance of LayoutXLMTokenizer or LayoutXLMTokenizerFast. The tokenizer is a required input.
Constructs a LayoutXLM processor which combines a LayoutLMv2 image processor and a LayoutXLM tokenizer into a single
processor.
LayoutXLMProcessor offers all the functionalities you need to prepare data for the model.
It first uses LayoutLMv2ImageProcessor to resize document images to a fixed size, and optionally applies OCR to
get words and normalized bounding boxes. These are then provided to LayoutXLMTokenizer or
LayoutXLMTokenizerFast, which turns the words and bounding boxes into token-level input_ids,
attention_mask, token_type_ids, bbox. Optionally, one can provide integer word_labels, which are turned
into token-level labels for token classification tasks (such as FUNSD, CORD).
__call__
(
images
text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None
text_pair: typing.Union[typing.List[str], typing.List[typing.List[str]], NoneType] = None
boxes: typing.Union[typing.List[typing.List[int]], typing.List[typing.List[typing.List[int]]]] = None
word_labels: typing.Union[typing.List[int], typing.List[typing.List[int]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
pad_to_multiple_of: typing.Optional[int] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
**kwargs
)
This method first forwards the images argument to LayoutLMv2ImageProcessor.__call__(). In case
LayoutLMv2ImageProcessor was initialized with apply_ocr set to True, it passes the obtained words and
bounding boxes along with the additional arguments to __call__() and returns the output,
together with the resized images. In case LayoutLMv2ImageProcessor was initialized with apply_ocr set to
False, it passes the words (text/text_pair) and boxes specified by the user along with the additional arguments to __call__() and returns the output, together with the resized images.
Please refer to the docstring of the above two methods for more information.
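A minimal end-to-end sketch (the document image path, words and boxes below are placeholders; apply_ocr=False lets you supply your own words and boxes and avoids the OCR dependency):
from PIL import Image
from transformers import LayoutLMv2ImageProcessor, LayoutXLMTokenizerFast, LayoutXLMProcessor
image_processor = LayoutLMv2ImageProcessor(apply_ocr=False)
tokenizer = LayoutXLMTokenizerFast.from_pretrained("microsoft/layoutxlm-base")
processor = LayoutXLMProcessor(image_processor, tokenizer)
image = Image.open("document.png").convert("RGB")  # placeholder path
words = ["hello", "world"]
boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]  # 0-1000 normalized, one per word
encoding = processor(image, words, boxes=boxes, return_tensors="pt")
print(encoding.keys())  # typically the resized image plus input_ids, attention_mask and bbox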
Mask2Former
Overview
The Mask2Former model was proposed in Masked-attention Mask Transformer for Universal Image Segmentation by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. Mask2Former is a unified framework for panoptic, instance and semantic segmentation and features significant performance and efficiency improvements over MaskFormer.
The abstract from the paper is the following:
Image segmentation groups pixels with different semantics, e.g., category or instance membership. Each choice
of semantics defines a task. While only the semantics of each task differ, current research focuses on designing specialized architectures for each task. We present Masked-attention Mask Transformer (Mask2Former), a new architecture capable of addressing any image segmentation task (panoptic, instance or semantic). Its key components include masked attention, which extracts localized features by constraining cross-attention within predicted mask regions. In addition to reducing the research effort by at least three times, it outperforms the best specialized architectures by a significant margin on four popular datasets. Most notably, Mask2Former sets a new state-of-the-art for panoptic segmentation (57.8 PQ on COCO), instance segmentation (50.1 AP on COCO) and semantic segmentation (57.7 mIoU on ADE20K).
Tips:
Mask2Former uses the same preprocessing and postprocessing steps as MaskFormer. Use Mask2FormerImageProcessor or AutoImageProcessor to prepare images and optional targets for the model.
To get the final segmentation, depending on the task, you can call post_process_semantic_segmentation(), post_process_instance_segmentation() or post_process_panoptic_segmentation(). All three tasks can be solved using Mask2FormerForUniversalSegmentation output; panoptic segmentation accepts an optional label_ids_to_fuse argument to fuse instances of the target object(s) (e.g. sky) together. A minimal post-processing sketch is shown below.
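The sketch below assumes a semantic segmentation checkpoint; the checkpoint name is only an example, substitute the one you actually use:
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
checkpoint = "facebook/mask2former-swin-large-ade-semantic"  # example checkpoint name
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Mask2FormerForUniversalSegmentation.from_pretrained(checkpoint)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# one (height, width) tensor of class ids per image, at the original resolution
semantic_map = image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
print(semantic_map.shape)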
Mask2Former architecture. Taken from the original paper.
This model was contributed by Shivalika Singh and Alara Dirik. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Mask2Former.
Demo notebooks regarding inference + fine-tuning Mask2Former on custom data can be found here.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
Mask2Former specific outputs
class transformers.models.mask2former.modeling_mask2former.Mask2FormerModelOutput
(
encoder_last_hidden_state: FloatTensor = None
pixel_decoder_last_hidden_state: FloatTensor = None
transformer_decoder_last_hidden_state: FloatTensor = None
encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
pixel_decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
transformer_decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
transformer_decoder_intermediate_states: typing.Tuple[torch.FloatTensor] = None
masks_queries_logits: typing.Tuple[torch.FloatTensor] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width), optional) —
Last hidden states (final feature map) of the last stage of the encoder model (backbone). Returned when
output_hidden_states=True is passed.
encoder_hidden_states (tuple(torch.FloatTensor), optional) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the encoder
model at the output of each stage. Returned when output_hidden_states=True is passed.
pixel_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width), optional) —
Last hidden states (final feature map) of the last stage of the pixel decoder model.
pixel_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the pixel
decoder model at the output of each stage. Returned when output_hidden_states=True is passed.
transformer_decoder_last_hidden_state (tuple(torch.FloatTensor)) —
Final output of the transformer decoder (batch_size, sequence_length, hidden_size).
transformer_decoder_hidden_states (tuple(torch.FloatTensor), optional) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the
transformer decoder at the output of each stage. Returned when output_hidden_states=True is passed.
transformer_decoder_intermediate_states (tuple(torch.FloatTensor) of shape (num_queries, 1, hidden_size)) —
Intermediate decoder activations, i.e. the output of each decoder layer, each of them gone through a
layernorm.
masks_queries_logits (tuple(torch.FloatTensor) of shape (batch_size, num_queries, height, width)) —
Mask Predictions from each layer in the transformer decoder.
attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed) —
Tuple of tuple(torch.FloatTensor) (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Self attentions weights from transformer decoder.
Class for outputs of Mask2FormerModel. This class returns all the needed hidden states to compute the logits.
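An illustrative sketch of obtaining this output from the base model (the checkpoint name is, again, only an example):
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerModel
checkpoint = "facebook/mask2former-swin-large-ade-semantic"  # example checkpoint name
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Mask2FormerModel.from_pretrained(checkpoint)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# final transformer decoder states, (batch_size, sequence_length, hidden_size) as described above
print(outputs.transformer_decoder_last_hidden_state.shape)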
class transformers.models.mask2former.modeling_mask2former.Mask2FormerForUniversalSegmentationOutput
(
loss: typing.Optional[torch.FloatTensor] = None
class_queries_logits: FloatTensor = None
masks_queries_logits: FloatTensor = None
auxiliary_logits: typing.Union[typing.List[typing.Dict[str, torch.FloatTensor]], NoneType] = None
encoder_last_hidden_state: FloatTensor = None
pixel_decoder_last_hidden_state: FloatTensor = None
transformer_decoder_last_hidden_state: FloatTensor = None
encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
pixel_decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
transformer_decoder_hidden_states: typing.Optional[torch.FloatTensor] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.Tensor, optional) —
The computed loss, returned when labels are present.
class_queries_logits (torch.FloatTensor) —
A tensor of shape (batch_size, num_queries, num_labels + 1) representing the proposed classes for each
query. Note the + 1 is needed because we incorporate the null class.
masks_queries_logits (torch.FloatTensor) —
A tensor of shape (batch_size, num_queries, height, width) representing the proposed masks for each
query.
auxiliary_logits (List[Dict(str, torch.FloatTensor)], optional) —
List of class and mask predictions from each layer of the transformer decoder.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Last hidden states (final feature map) of the last stage of the encoder model (backbone).
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the encoder
model at the output of each stage.
pixel_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Last hidden states (final feature map) of the last stage of the pixel decoder model.
pixel_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the pixel
decoder model at the output of each stage.
transformer_decoder_last_hidden_state (tuple(torch.FloatTensor)) —
Final output of the transformer decoder (batch_size, sequence_length, hidden_size).
transformer_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the
transformer decoder at the output of each stage.
attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tuple(torch.FloatTensor) (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Self- and cross-attention weights from the transformer decoder.
Class for outputs of Mask2FormerForUniversalSegmentation.
This output can be directly passed to post_process_semantic_segmentation() or
post_process_instance_segmentation() or
post_process_panoptic_segmentation() to compute final segmentation maps. Please see
Mask2FormerImageProcessor for details regarding usage.
Mask2FormerConfig
class transformers.Mask2FormerConfig
(
backbone_config: typing.Optional[typing.Dict] = None
feature_size: int = 256
mask_feature_size: int = 256
hidden_dim: int = 256
encoder_feedforward_dim: int = 1024
activation_function: str = 'relu'
encoder_layers: int = 6
decoder_layers: int = 10
num_attention_heads: int = 8
dropout: float = 0.0
dim_feedforward: int = 2048
pre_norm: bool = False
enforce_input_projection: bool = False
common_stride: int = 4
ignore_value: int = 255
num_queries: int = 100
no_object_weight: float = 0.1
class_weight: float = 2.0
mask_weight: float = 5.0
dice_weight: float = 5.0
train_num_points: int = 12544
oversample_ratio: float = 3.0
importance_sample_ratio: float = 0.75
init_std: float = 0.02
init_xavier_std: float = 1.0
use_auxiliary_loss: bool = True
feature_strides: typing.List[int] = [4, 8, 16, 32]
output_auxiliary_logits: bool = None
**kwargs
)
Parameters
backbone_config (PretrainedConfig or dict, optional, defaults to SwinConfig()) —
The configuration of the backbone model. If unset, the configuration corresponding to
swin-base-patch4-window12-384 will be used.
feature_size (int, optional, defaults to 256) —
The features (channels) of the resulting feature maps.
mask_feature_size (int, optional, defaults to 256) —
The masks’ feature size; this value will also be used to specify the Feature Pyramid Network features’
size.
hidden_dim (int, optional, defaults to 256) —
Dimensionality of the encoder layers.
encoder_feedforward_dim (int, optional, defaults to 1024) —
Dimension of feedforward network for deformable detr encoder used as part of pixel decoder.
encoder_layers (int, optional, defaults to 6) —
Number of layers in the deformable detr encoder used as part of pixel decoder.
decoder_layers (int, optional, defaults to 10) —
Number of layers in the Transformer decoder.
num_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer.
dropout (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings and encoder.
dim_feedforward (int, optional, defaults to 2048) —
Feature dimension in feedforward network for transformer decoder.
pre_norm (bool, optional, defaults to False) —
Whether to use pre-LayerNorm or not for transformer decoder.
enforce_input_projection (bool, optional, defaults to False) —
Whether to add an input projection 1x1 convolution even if the input channels and hidden dim are identical
in the Transformer decoder.
common_stride (int, optional, defaults to 4) —
Parameter used for determining number of FPN levels used as part of pixel decoder.
ignore_value (int, optional, defaults to 255) —
Category id to be ignored during training.
num_queries (int, optional, defaults to 100) —
Number of queries for the decoder.
no_object_weight (float, optional, defaults to 0.1) —
The weight to apply to the null (no object) class.
class_weight (float, optional, defaults to 2.0) —
The weight for the cross entropy loss.
mask_weight (float, optional, defaults to 5.0) —
The weight for the mask loss.
dice_weight (float, optional, defaults to 5.0) —
The weight for the dice loss.
train_num_points (int, optional, defaults to 12544) —
Number of points used for sampling during loss calculation.
oversample_ratio (float, optional, defaults to 3.0) —
Oversampling parameter used for calculating the number of sampled points.
importance_sample_ratio (float, optional, defaults to 0.75) —
Ratio of points that are sampled via importance sampling.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
init_xavier_std (float, optional, defaults to 1.0) —
The scaling factor used for the Xavier initialization gain in the HM Attention map module.
use_auxiliary_loss (bool, optional, defaults to True) —
If True, Mask2FormerForUniversalSegmentationOutput will contain the auxiliary losses computed using
the logits from each decoder’s stage.
feature_strides (List[int], optional, defaults to [4, 8, 16, 32]) —
Feature strides corresponding to features generated from backbone network.
output_auxiliary_logits (bool, optional) —
Should the model output its auxiliary_logits or not.
This is the configuration class to store the configuration of a Mask2FormerModel. It is used to instantiate a
Mask2Former model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Mask2Former
facebook/mask2former-swin-small-coco-instance
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Currently, Mask2Former only supports the Swin Transformer as backbone.
Examples:
from transformers import Mask2FormerConfig, Mask2FormerModel
# Initializing a Mask2Former facebook/mask2former-swin-small-coco-instance configuration
configuration = Mask2FormerConfig()
# Initializing a model (with random weights) from the facebook/mask2former-swin-small-coco-instance style configuration
model = Mask2FormerModel(configuration)
# Accessing the model configuration
configuration = model.config
from_backbone_config
(
backbone_config: PretrainedConfig
**kwargs
)
→
Mask2FormerConfig
Parameters
backbone_config (PretrainedConfig) —
The backbone configuration.
Returns
Mask2FormerConfig
An instance of a configuration object
Instantiate a Mask2FormerConfig (or a derived class) from a pre-trained backbone model configuration.
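For example (a minimal sketch; the Swin configuration below simply uses its default values for illustration):
from transformers import Mask2FormerConfig, SwinConfig
# build a Mask2Former configuration around an existing backbone configuration
backbone_config = SwinConfig()
config = Mask2FormerConfig.from_backbone_config(backbone_config)
print(type(config.backbone_config).__name__)  # SwinConfig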
to_dict
(
)
→
Dict[str, any]
Returns
Dict[str, any]
Dictionary of all the attributes that make up this configuration instance,
Serializes this instance to a Python dictionary. Override the default to_dict().
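A trivial sketch of the round trip:
from transformers import Mask2FormerConfig
config = Mask2FormerConfig()
config_dict = config.to_dict()  # nested configurations such as the backbone are serialized as well
print(config_dict["num_queries"])  # 100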
Mask2FormerModel
class transformers.Mask2FormerModel
(
config: Mask2FormerConfig
)
Parameters
config (Mask2FormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Mask2Former Model outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: Tensor
pixel_mask: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.mask2former.modeling_mask2former.Mask2FormerModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
AutoImageProcessor.preprocess for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_attentions (bool, optional) —
Whether or not to return the attention tensors of the transformer decoder layers.
return_dict (bool, optional) —
Whether or not to return a ~Mask2FormerModelOutput instead of a plain tuple.
Returns
transformers.models.mask2former.modeling_mask2former.Mask2FormerModelOutput or tuple(torch.FloatTensor)
A transformers.models.mask2former.modeling_mask2former.Mask2FormerModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Mask2FormerConfig) and inputs.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width), optional) — Last hidden states (final feature map) of the last stage of the encoder model (backbone). Returned when
output_hidden_states=True is passed.
encoder_hidden_states (tuple(torch.FloatTensor), optional) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the encoder
model at the output of each stage. Returned when output_hidden_states=True is passed.
pixel_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width), optional) — Last hidden states (final feature map) of the last stage of the pixel decoder model.
pixel_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the pixel
decoder model at the output of each stage. Returned when output_hidden_states=True is passed.
transformer_decoder_last_hidden_state (tuple(torch.FloatTensor)) — Final output of the transformer decoder (batch_size, sequence_length, hidden_size).
transformer_decoder_hidden_states (tuple(torch.FloatTensor), optional) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the
transformer decoder at the output of each stage. Returned when output_hidden_states=True is passed.
transformer_decoder_intermediate_states (tuple(torch.FloatTensor) of shape (num_queries, 1, hidden_size)) — Intermediate decoder activations, i.e. the output of each decoder layer, each of which has gone through a
layernorm.
masks_queries_logits (tuple(torch.FloatTensor) of shape (batch_size, num_queries, height, width))
Mask Predictions from each layer in the transformer decoder.
attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed) — Tuple of tuple(torch.FloatTensor) (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Self-attention weights from the transformer decoder.
Mask2FormerModelOutput
The Mask2FormerModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
import torch
from PIL import Image
import requests
from transformers import AutoImageProcessor, Mask2FormerModel
# load image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# load image preprocessor and Mask2FormerModel trained on COCO instance segmentation dataset
image_processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-coco-instance")
model = Mask2FormerModel.from_pretrained("facebook/mask2former-swin-small-coco-instance")
inputs = image_processor(image, return_tensors="pt")
# forward pass
with torch.no_grad():
... outputs = model(**inputs)
# model outputs last hidden states of shape (batch_size, num_queries, hidden_size)
print(outputs.transformer_decoder_last_hidden_state.shape)
torch.Size([1, 100, 256])
Mask2FormerForUniversalSegmentation
class transformers.Mask2FormerForUniversalSegmentation
(
config: Mask2FormerConfig
)
Parameters
config (Mask2FormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The Mask2Former Model with heads on top for instance/semantic/panoptic segmentation.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: Tensor
mask_labels: typing.Optional[typing.List[torch.Tensor]] = None
class_labels: typing.Optional[typing.List[torch.Tensor]] = None
pixel_mask: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
output_auxiliary_logits: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.mask2former.modeling_mask2former.Mask2FormerForUniversalSegmentationOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
AutoImageProcessor.preprocess for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_attentions (bool, optional) —
Whether or not to return the attention tensors of the transformer decoder layers.
return_dict (bool, optional) —
Whether or not to return a ~Mask2FormerForUniversalSegmentationOutput instead of a plain tuple.
mask_labels (List[torch.Tensor], optional) —
List of mask labels of shape (num_labels, height, width) to be fed to a model.
class_labels (List[torch.LongTensor], optional) —
List of target class labels of shape (num_labels,) to be fed to a model. They identify the
labels of mask_labels, e.g. the label of mask_labels[i][j] is class_labels[i][j].
Returns
transformers.models.mask2former.modeling_mask2former.Mask2FormerForUniversalSegmentationOutput or tuple(torch.FloatTensor)
A transformers.models.mask2former.modeling_mask2former.Mask2FormerForUniversalSegmentationOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Mask2FormerConfig) and inputs.
loss (torch.Tensor, optional) — The computed loss, returned when labels are present.
class_queries_logits (torch.FloatTensor) — A tensor of shape (batch_size, num_queries, num_labels + 1) representing the proposed classes for each
query. Note the + 1 is needed because we incorporate the null class.
masks_queries_logits (torch.FloatTensor) — A tensor of shape (batch_size, num_queries, height, width) representing the proposed masks for each
query.
auxiliary_logits (List[Dict(str, torch.FloatTensor)], optional) — List of class and mask predictions from each layer of the transformer decoder.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Last hidden states (final feature map) of the last stage of the encoder model (backbone).
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the encoder
model at the output of each stage.
pixel_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Last hidden states (final feature map) of the last stage of the pixel decoder model.
pixel_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the pixel
decoder model at the output of each stage.
transformer_decoder_last_hidden_state (tuple(torch.FloatTensor)) — Final output of the transformer decoder (batch_size, sequence_length, hidden_size).
transformer_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the
transformer decoder at the output of each stage.
attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tuple(torch.FloatTensor) (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Self- and cross-attention weights from the transformer decoder.
Mask2FormerForUniversalSegmentationOutput
The Mask2FormerForUniversalSegmentation forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Instance segmentation example:
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
from PIL import Image
import requests
import torch
# Load Mask2Former trained on COCO instance segmentation dataset
image_processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-coco-instance")
model = Mask2FormerForUniversalSegmentation.from_pretrained(
... "facebook/mask2former-swin-small-coco-instance"
... )
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
# Model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# Perform post-processing to get instance segmentation map
pred_instance_map = image_processor.post_process_instance_segmentation(
... outputs, target_sizes=[image.size[::-1]]
... )[0]["segmentation"]
print(pred_instance_map.shape)
torch.Size([480, 640])
Semantic segmentation example:
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
from PIL import Image
import requests
import torch
# Load Mask2Former trained on ADE20k semantic segmentation dataset
image_processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-ade-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-small-ade-semantic")
url = (
... "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
... )
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
# Model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# Perform post-processing to get semantic segmentation map
pred_semantic_map = image_processor.post_process_semantic_segmentation(
... outputs, target_sizes=[image.size[::-1]]
... )[0]
print(pred_semantic_map.shape)
torch.Size([512, 683])
Panoptic segmentation example:
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
from PIL import Image
import requests
import torch
# Load Mask2Former trained on CityScapes panoptic segmentation dataset
image_processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-cityscapes-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained(
... "facebook/mask2former-swin-small-cityscapes-panoptic"
... )
url = "https://cdn-media.huggingface.co/Inference-API/Sample-results-on-the-Cityscapes-dataset-The-above-images-show-how-our-method-can-handle.png"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
# Model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# Perform post-processing to get panoptic segmentation map
pred_panoptic_map = image_processor.post_process_panoptic_segmentation(
... outputs, target_sizes=[image.size[::-1]]
... )[0]["segmentation"]
print(pred_panoptic_map.shape)
torch.Size([338, 676])
Mask2FormerImageProcessor
class transformers.Mask2FormerImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = None
size_divisor: int = 32
resample: Resampling = <Resampling.BILINEAR: 2>
do_rescale: bool = True
rescale_factor: float = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float]] = None
image_std: typing.Union[float, typing.List[float]] = None
ignore_index: typing.Optional[int] = None
reduce_labels: bool = False
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the input to a certain size.
size (int, optional, defaults to 800) —
Resize the input to the given size. Only has an effect if do_resize is set to True. If size is a
sequence like (width, height), output size will be matched to this. If size is an int, smaller edge of
the image will be matched to this number, i.e., if height > width, then image will be rescaled to (size * height / width, size).
max_size (int, optional, defaults to 1333) —
The largest size an image dimension can have (otherwise it’s capped). Only has an effect if do_resize is
set to True.
resample (int, optional, defaults to PIL.Image.Resampling.BILINEAR) —
An optional resampling filter. This can be one of PIL.Image.Resampling.NEAREST,
PIL.Image.Resampling.BOX, PIL.Image.Resampling.BILINEAR, PIL.Image.Resampling.HAMMING,
PIL.Image.Resampling.BICUBIC or PIL.Image.Resampling.LANCZOS. Only has an effect if do_resize is set
to True.
size_divisor (int, optional, defaults to 32) —
Some backbones need images divisible by a certain number. If not passed, it defaults to the value used in
Swin Transformer.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the input to a certain scale.
rescale_factor (float, optional, defaults to 1/255) —
Rescale the input by the given factor. Only has an effect if do_rescale is set to True.
do_normalize (bool, optional, defaults to True) —
Whether or not to normalize the input with mean and standard deviation.
image_mean (float or List[float], optional, defaults to [0.485, 0.456, 0.406]) —
The sequence of means for each channel, to be used when normalizing images. Defaults to the ImageNet mean.
image_std (float or List[float], optional, defaults to [0.229, 0.224, 0.225]) —
The sequence of standard deviations for each channel, to be used when normalizing images. Defaults to the
ImageNet std.
ignore_index (int, optional) —
Label to be assigned to background pixels in segmentation maps. If provided, segmentation map pixels
denoted with 0 (background) will be replaced with ignore_index.
reduce_labels (bool, optional, defaults to False) —
Whether or not to decrement all label values of segmentation maps by 1. Usually used for datasets where 0
is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k).
The background label will be replaced by ignore_index.
Constructs a Mask2Former image processor. The image processor can be used to prepare image(s) and optional targets
for the model.
This image processor inherits from BaseImageProcessor which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
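A short sketch of constructing the processor directly and preparing a single image (the random array below is only a stand-in for real image data):
import numpy as np
from transformers import Mask2FormerImageProcessor
image_processor = Mask2FormerImageProcessor()  # defaults documented above
image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
inputs = image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # (1, 3, resized_height, resized_width)
print(inputs["pixel_mask"].shape)  # (1, resized_height, resized_width)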
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
segmentation_maps: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')], NoneType] = None
instance_id_to_semantic_id: typing.Union[typing.Dict[int, int], NoneType] = None
do_resize: typing.Optional[bool] = None
size: typing.Union[typing.Dict[str, int], NoneType] = None
size_divisor: typing.Optional[int] = None
resample: Resampling = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
ignore_index: typing.Optional[int] = None
reduce_labels: typing.Optional[bool] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
encode_inputs
(
pixel_values_list: typing.List[typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]]
segmentation_maps: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] = None
instance_id_to_semantic_id: typing.Union[typing.List[typing.Dict[int, int]], typing.Dict[int, int], NoneType] = None
ignore_index: typing.Optional[int] = None
reduce_labels: bool = False
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
)
→
BatchFeature
Parameters
pixel_values_list (List[ImageInput]) —
List of images (pixel values) to be padded. Each image should be a tensor of shape (channels, height, width).
segmentation_maps (ImageInput, optional) —
The corresponding semantic segmentation maps with the pixel-wise annotations.
(bool, optional, defaults to True):
Whether or not to pad images up to the largest image in a batch and create a pixel mask.
If left to the default, will return a pixel mask that is:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
instance_id_to_semantic_id (List[Dict[int, int]] or Dict[int, int], optional) —
A mapping between object instance ids and class ids. If passed, segmentation_maps is treated as an
instance segmentation map where each pixel represents an instance id. Can be provided as a single
dictionary with a global/dataset-level mapping or as a list of dictionaries (one per image), to map
instance ids in each image separately.
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of NumPy arrays. If set to 'pt', return PyTorch torch.Tensor
objects.
Returns
BatchFeature
A BatchFeature with the following fields:
pixel_values — Pixel values to be fed to a model.
pixel_mask — Pixel mask to be fed to a model (when =True or if pixel_mask is in
self.model_input_names).
mask_labels — Optional list of mask labels of shape (labels, height, width) to be fed to a model
(when annotations are provided).
class_labels — Optional list of class labels of shape (labels) to be fed to a model (when
annotations are provided). They identify the labels of mask_labels, e.g. the label of
mask_labels[i][j] is class_labels[i][j].
Pad images up to the largest image in a batch and create a corresponding pixel_mask.
Mask2Former addresses semantic segmentation with a mask classification paradigm, thus input segmentation maps
will be converted to lists of binary masks and their respective labels. For example, assuming
segmentation_maps = [[2,6,7,9]], the output will contain mask_labels = [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]] (four binary masks) and class_labels = [2,6,7,9], the labels for
each mask.
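A hedged sketch of that conversion, going through the processor's preprocessing entry point (which ultimately relies on encode_inputs); the image and segmentation map below are synthetic and only meant to show the resulting mask_labels / class_labels fields:
import numpy as np
from transformers import Mask2FormerImageProcessor
image_processor = Mask2FormerImageProcessor()
image = np.zeros((480, 640, 3), dtype=np.uint8)
segmentation_map = np.zeros((480, 640), dtype=np.uint8)
segmentation_map[:, :320] = 2  # class id 2 on the left half
segmentation_map[:, 320:] = 6  # class id 6 on the right half
inputs = image_processor(images=image, segmentation_maps=segmentation_map, return_tensors="pt")
print(inputs["class_labels"])  # e.g. [tensor([2, 6])], one class id per binary mask
print(inputs["mask_labels"][0].shape)  # (num_binary_masks, height, width)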
post_process_semantic_segmentation
(
outputs
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
)
→
List[torch.Tensor]
Parameters
outputs (Mask2FormerForUniversalSegmentation) —
Raw outputs of the model.
target_sizes (List[Tuple[int, int]], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction. If left to None, predictions will not be resized.
Returns
List[torch.Tensor]
A list of length batch_size, where each item is a semantic segmentation map of shape (height, width)
corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each
torch.Tensor corresponds to a semantic class id.
Converts the output of Mask2FormerForUniversalSegmentation into semantic segmentation maps. Only supports
PyTorch.
post_process_instance_segmentation
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
return_coco_annotation: typing.Optional[bool] = False
return_binary_maps: typing.Optional[bool] = False
)
→
List[Dict]
Parameters
outputs (Mask2FormerForUniversalSegmentation) —
Raw outputs of the model.
threshold (float, optional, defaults to 0.5) —
The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) —
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) —
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
target_sizes (List[Tuple], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction. If left to None, predictions will not be resized.
return_coco_annotation (bool, optional, defaults to False) —
If set to True, segmentation maps are returned in COCO run-length encoding (RLE) format.
return_binary_maps (bool, optional, defaults to False) —
If set to True, segmentation maps are returned as a concatenated tensor of binary segmentation maps
(one per detected instance).
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — A tensor of shape (height, width) where each pixel represents a segment_id or
List[List] run-length encoding (RLE) of the segmentation map if return_coco_annotation is set to
True. Set to None if no mask is found above threshold.
segments_info — A dictionary that contains additional information on each segment.
id — An integer representing the segment_id.
label_id — An integer representing the label / semantic class id corresponding to segment_id.
score — Prediction score of segment with segment_id.
Converts the output of Mask2FormerForUniversalSegmentationOutput into instance segmentation predictions.
Only supports PyTorch.
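The sketch below mirrors the instance segmentation example earlier on this page and shows what the returned dictionaries look like; the threshold value is only illustrative:
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
image_processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-coco-instance")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-small-coco-instance")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
result = image_processor.post_process_instance_segmentation(
... outputs, threshold=0.5, target_sizes=[image.size[::-1]]
... )[0]
print(result["segmentation"].shape)  # (height, width) map of segment ids (None if nothing passes the threshold)
for segment in result["segments_info"]:
... print(segment["id"], segment["label_id"], segment["score"])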
post_process_panoptic_segmentation
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
label_ids_to_fuse: typing.Optional[typing.Set[int]] = None
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
)
→
List[Dict]
Parameters
outputs (Mask2FormerForUniversalSegmentationOutput) —
The outputs from Mask2FormerForUniversalSegmentation.
threshold (float, optional, defaults to 0.5) —
The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) —
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) —
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
label_ids_to_fuse (Set[int], optional) —
The labels in this set will have all their instances fused together. For instance, we could say
there can only be one sky in an image, but several persons, so the label ID for sky would be in that
set, but not the one for person.
target_sizes (List[Tuple], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction in the batch. If left to None, predictions will not be
resized.
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — a tensor of shape (height, width) where each pixel represents a segment_id, set
to None if no mask is found above threshold. If target_sizes is specified, segmentation is resized
to the corresponding target_sizes entry.
segments_info — A dictionary that contains additional information on each segment.
id — an integer representing the segment_id.
label_id — An integer representing the label / semantic class id corresponding to segment_id.
was_fused — a boolean, True if label_id was in label_ids_to_fuse, False otherwise.
Multiple instances of the same class / label were fused and assigned a single segment_id.
score — Prediction score of segment with segment_id.
Converts the output of Mask2FormerForUniversalSegmentationOutput into image panoptic segmentation
predictions. Only supports PyTorch.
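As a hedged sketch of label_ids_to_fuse (reusing the panoptic example above; the "sky" lookup is illustrative and the fallback id depends on the checkpoint's id2label mapping):
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
image_processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-cityscapes-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-small-cityscapes-panoptic")
url = "https://cdn-media.huggingface.co/Inference-API/Sample-results-on-the-Cityscapes-dataset-The-above-images-show-how-our-method-can-handle.png"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
# fuse every instance of the chosen "stuff" class into a single segment
sky_id = model.config.label2id.get("sky", 10)  # hypothetical lookup; adapt to the checkpoint's label names
result = image_processor.post_process_panoptic_segmentation(
... outputs, label_ids_to_fuse={sky_id}, target_sizes=[image.size[::-1]]
... )[0]
print(result["segmentation"].shape)
print([(s["id"], s["label_id"], s["was_fused"]) for s in result["segments_info"]])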
RegNet
Overview
The RegNet model was proposed in Designing Network Design Spaces by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.
The abstract from the paper is the following:
In this work, we present a new network design paradigm. Our goal is to help advance the understanding of network design and discover design principles that generalize across settings. Instead of focusing on designing individual network instances, we design network design spaces that parametrize populations of networks. The overall process is analogous to classic manual design of networks, but elevated to the design space level. Using our methodology we explore the structure aspect of network design and arrive at a low-dimensional design space consisting of simple, regular networks that we call RegNet. The core insight of the RegNet parametrization is surprisingly simple: widths and depths of good networks can be explained by a quantized linear function. We analyze the RegNet design space and arrive at interesting findings that do not match the current practice of network design. The RegNet design space provides simple and fast networks that work well across a wide range of flop regimes. Under comparable training settings and flops, the RegNet models outperform the popular EfficientNet models while being up to 5x faster on GPUs.
Tips:
One can use AutoImageProcessor to prepare images for the model.
The huge 10B model from Self-supervised Pretraining of Visual Features in the Wild, trained on one billion Instagram images, is available on the hub.
This model was contributed by Francesco. The TensorFlow version of the model
was contributed by sayakpaul and ariG23498.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with RegNet.
Image Classification
RegNetForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
RegNetConfig
class transformers.RegNetConfig
(
num_channels = 3
embedding_size = 32
hidden_sizes = [128, 192, 512, 1088]
depths = [2, 6, 12, 2]
groups_width = 64
layer_type = 'y'
hidden_act = 'relu'
**kwargs
)
Parameters
num_channels (int, optional, defaults to 3) —
The number of input channels.
embedding_size (int, optional, defaults to 32) —
Dimensionality (hidden size) for the embedding layer.
hidden_sizes (List[int], optional, defaults to [128, 192, 512, 1088]) —
Dimensionality (hidden size) at each stage.
depths (List[int], optional, defaults to [2, 6, 12, 2]) —
Depth (number of layers) for each stage.
layer_type (str, optional, defaults to "y") —
The layer to use; it can be either "x" or "y". An x layer is a ResNet's BottleNeck layer with reduction fixed to 1, while a y layer is an x layer with Squeeze and Excitation. Please refer to the
paper for a detailed explanation of how these layers were constructed.
hidden_act (str, optional, defaults to "relu") —
The non-linear activation function in each block. If string, "gelu", "relu", "selu" and "gelu_new"
are supported.
downsample_in_first_stage (bool, optional, defaults to False) —
If True, the first stage will downsample the inputs using a stride of 2.
This is the configuration class to store the configuration of a RegNetModel. It is used to instantiate a RegNet
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the RegNet
facebook/regnet-y-040 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import RegNetConfig, RegNetModel
# Initializing a RegNet regnet-y-40 style configuration
configuration = RegNetConfig()
# Initializing a model from the regnet-y-40 style configuration
model = RegNetModel(configuration)
# Accessing the model configuration
configuration = model.config
RegNetModel
class transformers.RegNetModel
(
config
)
Parameters
config (RegNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare RegNet model outputting raw features without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: Tensor
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ConvNextImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RegNetConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The RegNetModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, RegNetModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/regnet-y-040")
model = RegNetModel.from_pretrained("facebook/regnet-y-040")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 1088, 7, 7]
RegNetForImageClassification
class transformers.RegNetForImageClassification
(
config
)
Parameters
config (RegNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RegNet Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ConvNextImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RegNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
The RegNetForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, RegNetForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/regnet-y-040")
model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-040")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
TFRegNetModel
class transformers.TFRegNetModel
(
*args
**kwargs
)
Parameters
This model is a TensorFlow
tf.keras.layers.Layer (https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer) sub-class. Use it as a
regular TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and
behavior.
config (RegNetConfig): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare RegNet model outputting raw features without any specific head on top.
call
(
pixel_values: Tensor
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
training = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndNoAttention or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ConvNextImageProcessor.__call__ for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndNoAttention or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndNoAttention or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RegNetConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for
the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The TFRegNetModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, TFRegNetModel
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/regnet-y-040")
model = TFRegNetModel.from_pretrained("facebook/regnet-y-040")
inputs = image_processor(image, return_tensors="tf")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 1088, 7, 7]
TFRegNetForImageClassification
class transformers.TFRegNetForImageClassification
(
*args
**kwargs
)
Parameters
This model is a TensorFlow
tf.keras.layers.Layer (https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer) sub-class. Use it as a
regular TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and
behavior.
config (RegNetConfig): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RegNet Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
call
(
pixel_values: Tensor = None
labels: Tensor = None
output_hidden_states: bool = None
return_dict: bool = None
training = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ConvNextImageProcessor.__call__ for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RegNetConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRegNetForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, TFRegNetForImageClassification
import tensorflow as tf
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/regnet-y-040")
model = TFRegNetForImageClassification.from_pretrained("facebook/regnet-y-040")
inputs = image_processor(image, return_tensors="tf")
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = int(tf.math.argmax(logits, axis=-1))
print(model.config.id2label[predicted_label])
tabby, tabby cat
FlaxRegNetModel
class transformers.FlaxRegNetModel
<
source
>
(
config: RegNetConfig
input_shape = (1, 224, 224, 3)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (RegNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The bare RegNet model outputting raw features without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
pixel_values
params: dict = None
train: bool = False
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RegNetConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRegNetPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, FlaxRegNetModel
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/regnet-y-040")
model = FlaxRegNetModel.from_pretrained("facebook/regnet-y-040")
inputs = image_processor(images=image, return_tensors="np")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
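To take advantage of the JIT compilation mentioned above, the forward pass can be wrapped in a pure function and compiled with jax.jit. The following is a minimal sketch reusing the same checkpoint and preprocessing as the example above:
import jax
import jax.numpy as jnp
import requests
from PIL import Image
from transformers import AutoImageProcessor, FlaxRegNetModel

# A minimal sketch of the JIT support listed above: compile the whole forward pass once
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/regnet-y-040")
model = FlaxRegNetModel.from_pretrained("facebook/regnet-y-040")
inputs = image_processor(images=image, return_tensors="np")

@jax.jit
def encode(pixel_values):
    # the model call is traceable, so later calls with the same shape reuse the compiled version
    return model(pixel_values=pixel_values).last_hidden_state

features = encode(jnp.asarray(inputs["pixel_values"]))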
FlaxRegNetForImageClassification
class transformers.FlaxRegNetForImageClassification
<
source
>
(
config: RegNetConfig
input_shape = (1, 224, 224, 3)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (RegNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
RegNet Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
pixel_values
params: dict = None
train: bool = False
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxImageClassifierOutputWithNoAttention or tuple(jnp.ndarray)
Returns
transformers.modeling_flax_outputs.FlaxImageClassifierOutputWithNoAttention or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxImageClassifierOutputWithNoAttention or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RegNetConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings, if the model has an embedding layer, + one
for the output of each stage) of shape (batch_size, num_channels, height, width).
Hidden-states (also called feature maps) of the model at the output of each stage.
The FlaxRegNetPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, FlaxRegNetForImageClassification
from PIL import Image
import jax
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/regnet-y-040")
model = FlaxRegNetForImageClassification.from_pretrained("facebook/regnet-y-040")
inputs = image_processor(images=image, return_tensors="np")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = jax.numpy.argmax(logits, axis=-1)
print("Predicted class:", model.config.id2label[predicted_class_idx.item()])
MobileBERT
Overview
The MobileBERT model was proposed in MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny
Zhou. It’s a bidirectional transformer based on the BERT model, which is compressed and accelerated using several
approaches.
The abstract from the paper is the following:
Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained models with hundreds
of millions of parameters. However, these models suffer from heavy model sizes and high latency such that they cannot
be deployed to resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating
the popular BERT model. Like the original BERT, MobileBERT is task-agnostic, that is, it can be generically applied to
various downstream NLP tasks via simple fine-tuning. Basically, MobileBERT is a thin version of BERT_LARGE, while
equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks.
To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERT_LARGE
model. Then, we conduct knowledge transfer from this teacher to MobileBERT. Empirical studies show that MobileBERT is
4.3x smaller and 5.5x faster than BERT_BASE while achieving competitive results on well-known benchmarks. On the
natural language inference tasks of GLUE, MobileBERT achieves a GLUE score of 77.7 (0.6 lower than BERT_BASE), and 62 ms
latency on a Pixel 4 phone. On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a dev F1 score of
90.0/79.2 (1.5/2.1 higher than BERT_BASE).
Tips:
MobileBERT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather
than the left.
MobileBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore
efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained
with a causal language modeling (CLM) objective are better in that regard.
This model was contributed by vshampor. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
MobileBertConfig
class transformers.MobileBertConfig
<
source
>
(
vocab_size = 30522
hidden_size = 512
num_hidden_layers = 24
num_attention_heads = 4
intermediate_size = 512
hidden_act = 'relu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
embedding_size = 128
trigram_input = True
use_bottleneck = True
intra_bottleneck_size = 128
use_bottleneck_attention = False
key_query_shared_bottleneck = True
num_feedforward_networks = 4
normalization_type = 'no_norm'
classifier_activation = True
classifier_dropout = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the MobileBERT model. Defines the number of different tokens that can be represented by
the input_ids passed when calling MobileBertModel or TFMobileBertModel.
hidden_size (int, optional, defaults to 512) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 24) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 4) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 512) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "relu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling MobileBertModel or
TFMobileBertModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
pad_token_id (int, optional, defaults to 0) —
The ID of the token in the word embedding to use as padding.
embedding_size (int, optional, defaults to 128) —
The dimension of the word embedding vectors.
trigram_input (bool, optional, defaults to True) —
Whether to use a convolution of trigrams as input.
use_bottleneck (bool, optional, defaults to True) —
Whether to use bottleneck in BERT.
intra_bottleneck_size (int, optional, defaults to 128) —
Size of bottleneck layer output.
use_bottleneck_attention (bool, optional, defaults to False) —
Whether to use attention inputs from the bottleneck transformation.
key_query_shared_bottleneck (bool, optional, defaults to True) —
Whether to use the same linear transformation for query and key in the bottleneck.
num_feedforward_networks (int, optional, defaults to 4) —
Number of FFNs in a block.
normalization_type (str, optional, defaults to "no_norm") —
The normalization type in MobileBERT.
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
This is the configuration class to store the configuration of a MobileBertModel or a TFMobileBertModel. It
is used to instantiate a MobileBERT model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the MobileBERT
google/mobilebert-uncased architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import MobileBertConfig, MobileBertModel
# Initializing a MobileBERT configuration
configuration = MobileBertConfig()
# Initializing a model (with random weights) from the configuration above
model = MobileBertModel(configuration)
# Accessing the model configuration
configuration = model.config
Attributes: pretrained_config_archive_map (Dict[str, str]): A dictionary containing all the available pre-trained
checkpoints.
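Since every architecture knob listed above is a constructor argument, the defaults can also be overridden to define a custom-sized, randomly initialized MobileBERT. A minimal sketch with illustrative values:
from transformers import MobileBertConfig, MobileBertModel

# A minimal sketch (illustrative values): override a few defaults to define a smaller variant
small_config = MobileBertConfig(num_hidden_layers=12, intra_bottleneck_size=64)
small_model = MobileBertModel(small_config)  # randomly initialized with the custom architecture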
MobileBertTokenizer
class transformers.MobileBertTokenizer
<
source
>
(
vocab_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (bool, optional, defaults to True) —
Whether or not to do basic tokenization before WordPiece.
never_split (Iterable, optional) —
Collection of tokens which will never be split during tokenization. Only has an effect when
do_basic_tokenize=True
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original MobileBERT).
Construct a MobileBERT tokenizer. Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A MobileBERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
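For illustration, encoding a single sentence and a sentence pair with this tokenizer produces the layouts above. A minimal sketch (the checkpoint is the one used elsewhere on this page):
from transformers import MobileBertTokenizer

# A minimal sketch of the special-token layout described above
tokenizer = MobileBertTokenizer.from_pretrained("google/mobilebert-uncased")
single = tokenizer("hello world")["input_ids"]
pair = tokenizer("hello", "world")["input_ids"]
print(tokenizer.convert_ids_to_tokens(single))  # ['[CLS]', 'hello', 'world', '[SEP]']
print(tokenizer.convert_ids_to_tokens(pair))    # ['[CLS]', 'hello', '[SEP]', 'world', '[SEP]']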
convert_tokens_to_string
<
source
>
(
tokens
)
Converts a sequence of tokens (strings) into a single string.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A MobileBERT
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
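This mask is what ends up in token_type_ids when a sentence pair is encoded; a minimal sketch:
from transformers import MobileBertTokenizer

# A minimal sketch: 0s cover the first segment (including [CLS] and the first [SEP]),
# 1s cover the second segment (including the final [SEP]).
tokenizer = MobileBertTokenizer.from_pretrained("google/mobilebert-uncased")
encoding = tokenizer("How old are you?", "I am six years old")
print(encoding["token_type_ids"])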
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
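A minimal sketch of the returned mask for a list of IDs that has no special tokens yet:
from transformers import MobileBertTokenizer

# A minimal sketch: 1 marks the positions where special tokens sit once they are added
tokenizer = MobileBertTokenizer.from_pretrained("google/mobilebert-uncased")
ids = tokenizer.encode("hello world", add_special_tokens=False)
print(tokenizer.get_special_tokens_mask(ids))  # [1, 0, 0, 1] -> [CLS], hello, world, [SEP]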
MobileBertTokenizerFast
class transformers.MobileBertTokenizerFast
<
source
>
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
clean_text (bool, optional, defaults to True) —
Whether or not to clean the text before tokenization by removing any control characters and replacing all
whitespaces by the classic one.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original MobileBERT).
wordpieces_prefix (str, optional, defaults to "##") —
The prefix for subwords.
Construct a “fast” MobileBERT tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
<
source
>
(
token_ids_0
token_ids_1 = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A MobileBERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A MobileBERT
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
MobileBert specific outputs
class transformers.models.mobilebert.modeling_mobilebert.MobileBertForPreTrainingOutput
<
source
>
(
loss: typing.Optional[torch.FloatTensor] = None
prediction_logits: FloatTensor = None
seq_relationship_logits: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) —
Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) —
Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of MobileBertForPreTraining.
class transformers.models.mobilebert.modeling_tf_mobilebert.TFMobileBertForPreTrainingOutput
<
source
>
(
loss: tf.Tensor | None = None
prediction_logits: tf.Tensor = None
seq_relationship_logits: tf.Tensor = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
)
Parameters
prediction_logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (tf.Tensor of shape (batch_size, 2)) —
Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of TFMobileBertForPreTraining.
MobileBertModel
class transformers.MobileBertModel
<
source
>
(
config
add_pooling_layer = True
)
Parameters
config (MobileBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MobileBert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
The original MobileBERT paper is available at https://arxiv.org/pdf/2004.02984.pdf.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileBertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MobileBertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MobileBertModel
import torch
tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = MobileBertModel.from_pretrained("google/mobilebert-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
MobileBertForPreTraining
class transformers.MobileBertForPreTraining
<
source
>
(
config
)
Parameters
config (MobileBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileBert Model with two heads on top as done during the pretraining: a masked language modeling head and a
next sentence prediction (classification) head.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
next_sentence_label: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.mobilebert.modeling_mobilebert.MobileBertForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
next_sentence_label (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair
(see input_ids docstring) Indices should be in [0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
Returns
transformers.models.mobilebert.modeling_mobilebert.MobileBertForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.mobilebert.modeling_mobilebert.MobileBertForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileBertConfig) and inputs.
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MobileBertForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, MobileBertForPreTraining
import torch
tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = MobileBertForPreTraining.from_pretrained("google/mobilebert-uncased")
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)
# Batch size 1
outputs = model(input_ids)
prediction_logits = outputs.prediction_logits
seq_relationship_logits = outputs.seq_relationship_logits
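To also obtain the combined pretraining loss described in the parameters above, pass both labels and next_sentence_label. A minimal sketch with purely illustrative label values:
import torch
from transformers import AutoTokenizer, MobileBertForPreTraining

# A minimal sketch (illustrative labels): the returned loss is the sum of the masked
# language modeling loss and the next sentence prediction loss.
tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = MobileBertForPreTraining.from_pretrained("google/mobilebert-uncased")
input_ids = tokenizer("Hello, my dog is cute", return_tensors="pt").input_ids
labels = input_ids.clone()                   # MLM targets; here every position is scored, for illustration
next_sentence_label = torch.LongTensor([0])  # 0 = sequence B continues sequence A
outputs = model(input_ids, labels=labels, next_sentence_label=next_sentence_label)
loss = outputs.loss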
MobileBertForMaskedLM
class transformers.MobileBertForMaskedLM
<
source
>
(
config
)
Parameters
config (MobileBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileBert Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MobileBertForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MobileBertForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = MobileBertForMaskedLM.from_pretrained("google/mobilebert-uncased")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
tokenizer.decode(predicted_token_id)
'paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(outputs.loss.item(), 2)
0.57
MobileBertForNextSentencePrediction
class transformers.MobileBertForNextSentencePrediction
<
source
>
(
config
)
Parameters
config (MobileBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileBert Model with a next sentence prediction (classification) head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair
(see input_ids docstring) Indices should be in [0, 1].
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
Returns
transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.NextSentencePredictorOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when next_sentence_label is provided) — Next sequence prediction (classification) loss.
logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MobileBertForNextSentencePrediction forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, MobileBertForNextSentencePrediction
import torch
tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = MobileBertForNextSentencePrediction.from_pretrained("google/mobilebert-uncased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
next_sentence = "The sky is blue due to the shorter wavelength of blue light."
encoding = tokenizer(prompt, next_sentence, return_tensors="pt")
outputs = model(**encoding, labels=torch.LongTensor([1]))
loss = outputs.loss
logits = outputs.logits
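Continuing the example above, a minimal sketch of turning the two logits into a decision (index 0 = continuation, index 1 = random sentence):
# A minimal sketch: the argmax over the two logits gives the next-sentence decision
is_random = logits.argmax(dim=-1).item() == 1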
MobileBertForSequenceClassification
class transformers.MobileBertForSequenceClassification
<
source
>
(
config
)
Parameters
config (MobileBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MobileBertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, MobileBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("lordtt13/emo-mobilebert")
model = MobileBertForSequenceClassification.from_pretrained("lordtt13/emo-mobilebert")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
'others'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = MobileBertForSequenceClassification.from_pretrained("lordtt13/emo-mobilebert", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
4.72
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, MobileBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("lordtt13/emo-mobilebert")
model = MobileBertForSequenceClassification.from_pretrained("lordtt13/emo-mobilebert", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = MobileBertForSequenceClassification.from_pretrained(
... "lordtt13/emo-mobilebert", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
MobileBertForMultipleChoice
class transformers.MobileBertForMultipleChoice
( config )
Parameters
config (MobileBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileBert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MobileBertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MobileBertForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = MobileBertForMultipleChoice.from_pretrained("google/mobilebert-uncased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
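Since the classification head is untrained in the snippet above, the prediction is not meaningful yet, but once fine-tuned you would typically read off the model's pick by taking the argmax over the choice dimension. A minimal continuation of the example, using only standard PyTorch tensor operations:
# logits has shape (batch_size, num_choices); the highest score is the predicted choice
predicted_choice = logits.argmax(dim=-1).item()
predicted_choice  # 0 -> choice0, 1 -> choice1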
MobileBertForTokenClassification
class transformers.MobileBertForTokenClassification
( config )
Parameters
config (MobileBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileBert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MobileBertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MobileBertForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("mrm8488/mobilebert-finetuned-ner")
model = MobileBertForTokenClassification.from_pretrained("mrm8488/mobilebert-finetuned-ner")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word.
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
predicted_tokens_classes
['I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O', 'I-LOC', 'O', 'I-LOC', 'I-LOC']
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
0.03
MobileBertForQuestionAnswering
class transformers.MobileBertForQuestionAnswering
( config )
Parameters
config (MobileBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, start_positions: typing.Optional[torch.Tensor] = None, end_positions: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MobileBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MobileBertForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("csarron/mobilebert-uncased-squad-v2")
model = MobileBertForQuestionAnswering.from_pretrained("csarron/mobilebert-uncased-squad-v2")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
'a nice puppet'
# target is "nice puppet"
target_start_index = torch.tensor([12])
target_end_index = torch.tensor([13])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
round(loss.item(), 2)
3.98
TFMobileBertModel
class transformers.TFMobileBertModel
( *args, **kwargs )
Parameters
config (MobileBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MobileBert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
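As a concrete illustration of these three call formats with this model, the following sketch (assuming the google/mobilebert-uncased checkpoint used in the examples below) passes the same encoding in each of the three ways; all three calls should produce the same outputs:
from transformers import AutoTokenizer, TFMobileBertModel

tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = TFMobileBertModel.from_pretrained("google/mobilebert-uncased")
encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. all inputs as keyword arguments (PyTorch-style)
out_kwargs = model(input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"])
# 2. a list of input Tensors, in the order given in the docstring
out_list = model([encoding["input_ids"], encoding["attention_mask"]])
# 3. a dictionary mapping input names to Tensors
out_dict = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})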
call
( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MobileBertConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you're often better off
averaging or pooling the sequence of hidden-states for the whole input sequence.
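If you do need a single vector per sequence, a common alternative is masked mean pooling over last_hidden_state. The sketch below is a generic recipe under that assumption, not a library helper (the function name mean_pool is ours):
import tensorflow as tf

def mean_pool(last_hidden_state, attention_mask):
    # last_hidden_state: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
    mask = tf.cast(attention_mask, last_hidden_state.dtype)[:, :, tf.newaxis]
    summed = tf.reduce_sum(last_hidden_state * mask, axis=1)
    counts = tf.maximum(tf.reduce_sum(mask, axis=1), 1e-9)
    return summed / counts  # (batch, hidden), averaged over non-padding tokens only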
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFMobileBertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFMobileBertModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = TFMobileBertModel.from_pretrained("google/mobilebert-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFMobileBertForPreTraining
class transformers.TFMobileBertForPreTraining
( *args, **kwargs )
Parameters
config (MobileBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileBert Model with two heads on top as done during the pretraining: a masked language modeling head and a
next sentence prediction (classification) head.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, next_sentence_label: np.ndarray | tf.Tensor | None = None, training: Optional[bool] = False ) → transformers.models.mobilebert.modeling_tf_mobilebert.TFMobileBertForPreTrainingOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.mobilebert.modeling_tf_mobilebert.TFMobileBertForPreTrainingOutput or tuple(tf.Tensor)
A transformers.models.mobilebert.modeling_tf_mobilebert.TFMobileBertForPreTrainingOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MobileBertConfig) and inputs.
prediction_logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (tf.Tensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFMobileBertForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
import tensorflow as tf
from transformers import AutoTokenizer, TFMobileBertForPreTraining
tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = TFMobileBertForPreTraining.from_pretrained("google/mobilebert-uncased")
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
outputs = model(input_ids)
prediction_scores, seq_relationship_scores = outputs[:2]
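For orientation, the two tensors follow the shapes described in the return section above; a minimal continuation that only inspects them (the comments restate the return documentation rather than add new behaviour):
# prediction_scores: (batch_size, sequence_length, vocab_size) -- MLM logits per position
# seq_relationship_scores: (batch_size, 2) -- True/False continuation logits
predicted_token_ids = tf.math.argmax(prediction_scores, axis=-1)
predicted_token_ids.shape, seq_relationship_scores.shape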
TFMobileBertForMaskedLM
class transformers.TFMobileBertForMaskedLM
( *args, **kwargs )
Parameters
config (MobileBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileBert Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels.
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MobileBertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFMobileBertForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFMobileBertForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = TFMobileBertForMaskedLM.from_pretrained("google/mobilebert-uncased")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
tokenizer.decode(predicted_token_id)
'paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-[MASK] tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(float(outputs.loss), 2)
0.57
TFMobileBertForNextSentencePrediction
class transformers.TFMobileBertForNextSentencePrediction
( *args, **kwargs )
Parameters
config (MobileBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileBert Model with a next sentence prediction (classification) head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, next_sentence_label: np.ndarray | tf.Tensor | None = None, training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFNextSentencePredictorOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFNextSentencePredictorOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFNextSentencePredictorOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MobileBertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when next_sentence_label is provided) — Next sentence prediction loss.
logits (tf.Tensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFMobileBertForNextSentencePrediction forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
import tensorflow as tf
from transformers import AutoTokenizer, TFMobileBertForNextSentencePrediction
tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = TFMobileBertForNextSentencePrediction.from_pretrained("google/mobilebert-uncased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
next_sentence = "The sky is blue due to the shorter wavelength of blue light."
encoding = tokenizer(prompt, next_sentence, return_tensors="tf")
logits = model(encoding["input_ids"], token_type_ids=encoding["token_type_ids"])[0]
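To turn these logits into a decision you can compare the two scores. In the analogous BERT next-sentence examples, index 0 scores "sentence B follows sentence A" and index 1 scores "sentence B is random"; the continuation below assumes the same convention applies to this checkpoint:
# the unrelated "sky is blue" sentence should get the higher "random" score (index 1)
assert logits[0][0] < logits[0][1]
predicted_label = int(tf.math.argmax(logits, axis=-1)[0])  # expected: 1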
TFMobileBertForSequenceClassification
class transformers.TFMobileBertForSequenceClassification
( *args, **kwargs )
Parameters
config (MobileBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
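To illustrate the model.fit() point above: the dict returned by the tokenizer can be handed to Keras directly. The snippet below is a sketch, not part of the official documentation; the toy batch, labels and learning rate are made up, and it relies on the model falling back to its built-in loss when compile() is called without a loss argument:
import tensorflow as tf
from transformers import AutoTokenizer, TFMobileBertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = TFMobileBertForSequenceClassification.from_pretrained("google/mobilebert-uncased", num_labels=2)

# The tokenizer output is a dict-like BatchEncoding; Keras accepts it as the first positional argument.
batch = dict(tokenizer(["great movie", "terrible movie"], padding=True, return_tensors="tf"))
labels = tf.constant([1, 0])

model.compile(optimizer=tf.keras.optimizers.Adam(3e-5))  # no loss argument: the model's internal loss is used
model.fit(batch, labels, epochs=1)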
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode, the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode, the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode, the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MobileBertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFMobileBertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFMobileBertForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("vumichien/emo-mobilebert")
model = TFMobileBertForSequenceClassification.from_pretrained("vumichien/emo-mobilebert")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
model.config.id2label[predicted_class_id]
'others'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFMobileBertForSequenceClassification.from_pretrained("vumichien/emo-mobilebert", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
round(float(loss), 2)
4.72
TFMobileBertForMultipleChoice
class transformers.TFMobileBertForMultipleChoice
(
*args
**kwargs
)
Parameters
config (MobileBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileBert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode, the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode, the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode, the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1]
where num_choices is the size of the second dimension of the input tensors. (See input_ids above.)
Returns
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MobileBertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFMobileBertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFMobileBertForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = TFMobileBertForMultipleChoice.from_pretrained("google/mobilebert-uncased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
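As a short follow-up (not part of the original example), the highest-scoring choice can be read off the logits:
predicted_choice = int(tf.math.argmax(logits, axis=-1)[0])  # 0 -> choice0, 1 -> choice1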
TFMobileBertForTokenClassification
class transformers.TFMobileBertForTokenClassification
(
*args
**kwargs
)
Parameters
config (MobileBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileBert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode, the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode, the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode, the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MobileBertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFMobileBertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFMobileBertForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("vumichien/mobilebert-finetuned-ner")
model = TFMobileBertForTokenClassification.from_pretrained("vumichien/mobilebert-finetuned-ner")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
predicted_tokens_classes
['I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O', 'I-LOC', 'O', 'I-LOC', 'I-LOC']
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
round(float(loss), 2)
0.03
TFMobileBertForQuestionAnswering
class transformers.TFMobileBertForQuestionAnswering
(
*args
**kwargs
)
Parameters
config (MobileBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
start_positions: np.ndarray | tf.Tensor | None = None
end_positions: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode, the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode, the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode, the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MobileBertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFMobileBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFMobileBertForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("vumichien/mobilebert-uncased-squad-v2")
model = TFMobileBertForQuestionAnswering.from_pretrained("vumichien/mobilebert-uncased-squad-v2")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
'a nice puppet'
# target is "nice puppet"
target_start_index = tf.constant([12])
target_end_index = tf.constant([13])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
round(float(loss), 2)
3.98
X-MOD
Overview
The X-MOD model was proposed in Lifting the Curse of Multilinguality by Pre-training Modular Transformers by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe.
X-MOD extends multilingual masked language models like XLM-R to include language-specific modular components (language adapters) during pre-training. For fine-tuning, the language adapters in each transformer layer are frozen.
The abstract from the paper is the following:
Multilingual pre-trained models are known to suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages. We address this issue by introducing language-specific modules, which allows us to grow the total capacity of the model, while keeping the total number of trainable parameters per language constant. In contrast with prior work that learns language-specific components post-hoc, we pre-train the modules of our Cross-lingual Modular (X-MOD) models from the start. Our experiments on natural language inference, named entity recognition and question answering show that our approach not only mitigates the negative interference between languages, but also enables positive transfer, resulting in improved monolingual and cross-lingual performance. Furthermore, our approach enables adding languages post-hoc with no measurable drop in performance, no longer limiting the model usage to the set of pre-trained languages.
Tips:
X-MOD is similar to XLM-R, but one difference is that the input language needs to be specified so that the correct language adapter can be activated.
The main models – base and large – have adapters for 81 languages.
This model was contributed by jvamvas.
The original code can be found here and the original documentation is found here.
Adapter Usage
Input language
There are two ways to specify the input language:
By setting a default language before using the model:
from transformers import XmodModel
model = XmodModel.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")
By explicitly passing the index of the language adapter for each sample:
import torch
input_ids = torch.tensor(
[
[0, 581, 10269, 83, 99942, 136, 60742, 23, 70, 80583, 18276, 2],
[0, 1310, 49083, 443, 269, 71, 5486, 165, 60429, 660, 23, 2],
]
)
lang_ids = torch.LongTensor(
[
0, # en_XX
8, # de_DE
]
)
output = model(input_ids, lang_ids=lang_ids)
Fine-tuning
The paper recommends that the embedding layer and the language adapters be frozen during fine-tuning. A method for doing this is provided:
model.freeze_embeddings_and_language_adapters()
# Fine-tune the model ...
Cross-lingual transfer
After fine-tuning, zero-shot cross-lingual transfer can be tested by activating the language adapter of the target language:
model.set_default_language("de_DE")
# Evaluate the model on German examples ...
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
XmodConfig
class transformers.XmodConfig
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
position_embedding_type = 'absolute'
use_cache = True
classifier_dropout = None
pre_norm = False
adapter_reduction_factor = 2
adapter_layer_norm = False
adapter_reuse_layer_norm = True
ln_before_adapter = True
languages = ('en_XX',)
default_language = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the X-MOD model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling XmodModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling XmodModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
pre_norm (bool, optional, defaults to False) —
Whether to apply layer normalization before each block.
adapter_reduction_factor (int or float, optional, defaults to 2) —
The factor by which the dimensionality of the adapter is reduced relative to hidden_size.
adapter_layer_norm (bool, optional, defaults to False) —
Whether to apply a new layer normalization before the adapter modules (shared across all adapters).
adapter_reuse_layer_norm (bool, optional, defaults to True) —
Whether to reuse the second layer normalization and apply it before the adapter modules as well.
ln_before_adapter (bool, optional, defaults to True) —
Whether to apply the layer normalization before the residual connection around the adapter module.
languages (Iterable[str], optional, defaults to ["en_XX"]) —
An iterable of language codes for which adapter modules should be initialized.
default_language (str, optional) —
Language code of a default language. It will be assumed that the input is in this language if no language
codes are explicitly passed to the forward method.
This is the configuration class to store the configuration of an XmodModel. It is used to instantiate an X-MOD
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the
facebook/xmod-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import XmodConfig, XmodModel
# Initializing an X-MOD facebook/xmod-base style configuration
configuration = XmodConfig()
# Initializing a model (with random weights) from the facebook/xmod-base style configuration
model = XmodModel(configuration)
# Accessing the model configuration
configuration = model.config
XmodModel
class transformers.XmodModel
(
config
add_pooling_layer = True
)
Parameters
config (XmodConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare X-MOD Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention Is
All You Need (https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and
add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass.
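As a rough sketch of that configuration (the checkpoint name and the use of AutoConfig are illustrative assumptions, not taken from this page):
from transformers import AutoConfig, XmodModel

config = AutoConfig.from_pretrained("facebook/xmod-base")
config.is_decoder = True
config.add_cross_attention = True  # required when the model is the decoder of a Seq2Seq setup
decoder = XmodModel.from_pretrained("facebook/xmod-base", config=config)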
forward
(
input_ids: typing.Optional[torch.Tensor] = None
lang_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
lang_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of the language adapters that should be activated for each sample, respectively. Default: the index
that corresponds to self.config.default_language.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
The XmodModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
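A minimal usage sketch, assuming the facebook/xmod-base checkpoint and the set_default_language() call shown in the Adapter Usage section above:
import torch
from transformers import AutoTokenizer, XmodModel

tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
model = XmodModel.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)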
XmodForCausalLM
class transformers.XmodForCausalLM
(
config
)
Parameters
config (XmodConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
X-MOD Model with a language modeling head on top for CLM fine-tuning.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
lang_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Tuple[typing.Tuple[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
lang_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of the language adapters that should be activated for each sample, respectively. Default: the index
that corresponds to self.config.default_language.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are
ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
The XmodForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
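A minimal sketch, assuming the facebook/xmod-base checkpoint with is_decoder enabled so that the language-modeling head is used for next-token prediction (the example inputs are illustrative):
import torch
from transformers import AutoConfig, AutoTokenizer, XmodForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
config = AutoConfig.from_pretrained("facebook/xmod-base")
config.is_decoder = True
model = XmodForCausalLM.from_pretrained("facebook/xmod-base", config=config)
model.set_default_language("en_XX")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs.input_ids)  # labels are shifted internally for next-token prediction
loss, prediction_logits = outputs.loss, outputs.logits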
XmodForMaskedLM
class transformers.XmodForMaskedLM
(
config
)
Parameters
config (XmodConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
X-MOD Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
lang_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
lang_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of the language adapters that should be activated for each sample, respectively. Default: the index
that corresponds to self.config.default_language.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
kwargs (Dict[str, any], optional, defaults to {}) —
Used to hide legacy arguments that have been deprecated.
The XmodForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
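Example:
A minimal illustrative sketch of masked-token prediction (the facebook/xmod-base checkpoint and the set_default_language() helper are assumptions based on the X-MOD overview):
from transformers import AutoTokenizer, XmodForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
model = XmodForMaskedLM.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")  # assumed adapter-selection helper, see the X-MOD overview
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# locate the masked position and take its most likely token
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))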
XmodForSequenceClassification
class transformers.XmodForSequenceClassification
(
config
)
Parameters
config (XmodConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
X-MOD Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
lang_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
lang_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of the language adapters that should be activated for each sample, respectively. Default: the index
that corresponds to self.config.default_language.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
The XmodForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
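Example:
A minimal illustrative sketch (the checkpoint and set_default_language() call are assumptions from the X-MOD overview; the classification head below is freshly initialized, so its prediction is only meaningful after fine-tuning):
from transformers import AutoTokenizer, XmodForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
model = XmodForSequenceClassification.from_pretrained("facebook/xmod-base", num_labels=2)
model.set_default_language("en_XX")  # assumed adapter-selection helper, see the X-MOD overview
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)
predicted_class_id = logits.argmax(dim=-1).item()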
XmodForMultipleChoice
class transformers.XmodForMultipleChoice
(
config
)
Parameters
config (XmodConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
X-MOD Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
lang_ids: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
lang_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of the language adapters that should be activated for each sample, respectively. Default: the index
that corresponds to self.config.default_language.
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
The XmodForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
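Example:
A minimal illustrative sketch (the checkpoint and set_default_language() call are assumptions from the X-MOD overview; the multiple-choice head is untrained here, so the scores are illustrative only):
from transformers import AutoTokenizer, XmodForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
model = XmodForMultipleChoice.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")  # assumed adapter-selection helper, see the X-MOD overview
prompt = "France is a country in"
choice0, choice1 = "Europe.", "the Pacific Ocean."
# pair the prompt with every choice, then add the num_choices dimension expected by the model
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()})
logits = outputs.logits  # shape: (batch_size, num_choices)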
XmodForTokenClassification
class transformers.XmodForTokenClassification
(
config
)
Parameters
config (XmodConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
X-MOD Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
lang_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
lang_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of the language adapters that should be activated for each sample, respectively. Default: the index
that corresponds to self.config.default_language.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
The XmodForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
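Example:
A minimal illustrative sketch (the checkpoint and set_default_language() call are assumptions from the X-MOD overview; num_labels and the classification head are untrained placeholders):
from transformers import AutoTokenizer, XmodForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
model = XmodForTokenClassification.from_pretrained("facebook/xmod-base", num_labels=9)
model.set_default_language("en_XX")  # assumed adapter-selection helper, see the X-MOD overview
inputs = tokenizer("HuggingFace is based in New York City", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, sequence_length, num_labels)
predicted_token_classes = logits.argmax(dim=-1)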
XmodForQuestionAnswering
class transformers.XmodForQuestionAnswering
(
config
)
Parameters
config (XmodConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
X-MOD Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
lang_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
lang_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of the language adapters that should be activated for each sample, respectively. Default: the index
that corresponds to self.config.default_language.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
The XmodForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
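Example:
A minimal illustrative sketch (the checkpoint and set_default_language() call are assumptions from the X-MOD overview; the span head is untrained here, so the extracted span is illustrative only):
from transformers import AutoTokenizer, XmodForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/xmod-base")
model = XmodForQuestionAnswering.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")  # assumed adapter-selection helper, see the X-MOD overview
question, context = "Where do I live?", "My name is Tim and I live in Berlin."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# most likely start and end of the answer span
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs.input_ids[0, start : end + 1])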
MGP-STR
Overview
The MGP-STR model was proposed in Multi-Granularity Prediction for Scene Text Recognition by Peng Wang, Cheng Da, and Cong Yao. MGP-STR is a conceptually simple yet powerful vision Scene Text Recognition (STR) model, which is built upon the Vision Transformer (ViT). To integrate linguistic knowledge, a Multi-Granularity Prediction (MGP) strategy is proposed to inject information from the language modality into the model in an implicit way.
The abstract from the paper is the following:
Scene text recognition (STR) has been an active research topic in computer vision for years. To tackle this challenging problem, numerous innovative methods have been successively proposed and incorporating linguistic knowledge into STR models has recently become a prominent trend. In this work, we first draw inspiration from the recent progress in Vision Transformer (ViT) to construct a conceptually simple yet powerful vision STR model, which is built upon ViT and outperforms previous state-of-the-art models for scene text recognition, including both pure vision models and language-augmented methods. To integrate linguistic knowledge, we further propose a Multi-Granularity Prediction strategy to inject information from the language modality into the model in an implicit way, i.e. , subword representations (BPE and WordPiece) widely-used in NLP are introduced into the output space, in addition to the conventional character level representation, while no independent language model (LM) is adopted. The resultant algorithm (termed MGP-STR) is able to push the performance envelop of STR to an even higher level. Specifically, it achieves an average recognition accuracy of 93.35% on standard benchmarks.
MGP-STR architecture. Taken from the original paper.
Tips:
MGP-STR is trained on two synthetic datasets, MJSynth (MJ) and SynthText (ST, http://www.robots.ox.ac.uk/~vgg/data/scenetext/), without fine-tuning on other datasets. It achieves state-of-the-art results on six standard Latin scene text benchmarks, including 3 regular text datasets (IC13, SVT, IIIT) and 3 irregular ones (IC15, SVTP, CUTE).
This model was contributed by yuekun. The original code can be found here.
Inference
MgpstrModel accepts images as input and generates three types of predictions, which represent textual information at different granularities.
The three types of predictions are fused to give the final prediction result.
The ViTImageProcessor class is responsible for preprocessing the input image and
MgpstrTokenizer decodes the generated character tokens to the target string. The
MgpstrProcessor wraps ViTImageProcessor and MgpstrTokenizer
into a single instance to both extract the input features and decode the predicted token ids.
Step-by-step Optical Character Recognition (OCR)
from transformers import MgpstrProcessor, MgpstrForSceneTextRecognition
import requests
from PIL import Image
processor = MgpstrProcessor.from_pretrained('alibaba-damo/mgp-str-base')
model = MgpstrForSceneTextRecognition.from_pretrained('alibaba-damo/mgp-str-base')
# load image from the IIIT-5k dataset
url = "https://i.postimg.cc/ZKwLg2Gw/367-14.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
outputs = model(pixel_values)
generated_text = processor.batch_decode(outputs.logits)['generated_text']
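The fused prediction can be printed directly; for the sample IIIT-5k image above it matches the output shown in the MgpstrForSceneTextRecognition example further down this page:
print(generated_text)  # expected for the sample image: ['ticket']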
MgpstrConfig
class transformers.MgpstrConfig
(
image_size = [32, 128]
patch_size = 4
num_channels = 3
max_token_length = 27
num_character_labels = 38
num_bpe_labels = 50257
num_wordpiece_labels = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
mlp_ratio = 4.0
qkv_bias = True
distilled = False
layer_norm_eps = 1e-05
drop_rate = 0.0
attn_drop_rate = 0.0
drop_path_rate = 0.0
output_a3_attentions = False
initializer_range = 0.02
**kwargs
)
Parameters
image_size (List[int], optional, defaults to [32, 128]) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 4) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
max_token_length (int, optional, defaults to 27) —
The max number of output tokens.
num_character_labels (int, optional, defaults to 38) —
The number of classes for the character head.
num_bpe_labels (int, optional, defaults to 50257) —
The number of classes for the bpe head.
num_wordpiece_labels (int, optional, defaults to 30522) —
The number of classes for the wordpiece head.
hidden_size (int, optional, defaults to 768) —
The embedding dimension.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
mlp_ratio (float, optional, defaults to 4.0) —
The ratio of mlp hidden dim to embedding dim.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
distilled (bool, optional, defaults to False) —
Whether the model includes a distillation token and head as in DeiT models.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
drop_rate (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings and encoder.
attn_drop_rate (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
drop_path_rate (float, optional, defaults to 0.0) —
The stochastic depth rate.
output_a3_attentions (bool, optional, defaults to False) —
Whether or not the model should return A^3 module attentions.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
This is the configuration class to store the configuration of an MgpstrModel. It is used to instantiate an
MGP-STR model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the MGP-STR
alibaba-damo/mgp-str-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import MgpstrConfig, MgpstrForSceneTextRecognition
# Initializing a Mgpstr mgp-str-base style configuration
configuration = MgpstrConfig()
# Initializing a model (with random weights) from the mgp-str-base style configuration
model = MgpstrForSceneTextRecognition(configuration)
# Accessing the model configuration
configuration = model.config
MgpstrTokenizer
class transformers.MgpstrTokenizer
(
vocab_file
unk_token = '[GO]'
bos_token = '[GO]'
eos_token = '[s]'
pad_token = '[GO]'
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
unk_token (str, optional, defaults to "[GO]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (str, optional, defaults to "[GO]") —
The beginning of sequence token.
eos_token (str, optional, defaults to "[s]") —
The end of sequence token.
pad_token (str or tokenizers.AddedToken, optional, defaults to "[GO]") —
A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by
attention mechanisms or loss computation.
Construct a MGP-STR char tokenizer.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
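Example:
A short illustrative sketch (it assumes the alibaba-damo/mgp-str-base checkpoint used elsewhere on this page also ships the character vocabulary):
from transformers import MgpstrTokenizer
tokenizer = MgpstrTokenizer.from_pretrained("alibaba-damo/mgp-str-base")
encoding = tokenizer("ticket")
print(encoding["input_ids"])  # one id per character of "ticket"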
MgpstrProcessor
class transformers.MgpstrProcessor
(
image_processor = None
tokenizer = None
**kwargs
)
Parameters
image_processor (ViTImageProcessor) —
An instance of ViTImageProcessor. The image processor is a required input.
tokenizer (MgpstrTokenizer) —
The tokenizer is a required input.
Constructs an MGP-STR processor which wraps an image processor and an MGP-STR tokenizer into a single processor.
MgpstrProcessor offers all the functionalities of ViTImageProcessor and MgpstrTokenizer. See
__call__() and batch_decode() for more information.
__call__
(
text = None
images = None
return_tensors = None
**kwargs
)
When used in normal mode, this method forwards all its arguments to ViTImageProcessor’s
__call__() and returns its output. This method also forwards the text and kwargs
arguments to MgpstrTokenizer’s __call__() if text is not None to encode the text. Please
refer to the docstring of the above methods for more information.
batch_decode
(
sequences
)
→
Dict[str, any]
Parameters
sequences (torch.Tensor) —
List of tokenized input ids.
Returns
Dict[str, any]
Dictionary of all the outputs of the decoded results.
generated_text (List[str]): The final results after fusion of char, bpe, and wp.
scores (List[float]): The final scores after fusion of char, bpe, and wp.
char_preds (List[str]): The list of character decoded sentences.
bpe_preds (List[str]): The list of bpe decoded sentences.
wp_preds (List[str]): The list of wp decoded sentences.
Convert a list of lists of token ids into a list of strings by calling decode.
This method forwards all its arguments to PreTrainedTokenizer’s batch_decode(). Please
refer to the docstring of this method for more information.
MgpstrModel
class transformers.MgpstrModel
(
config: MgpstrConfig
)
Parameters
config (MgpstrConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MGP-STR Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values
output_attentions = None
output_hidden_states = None
return_dict = None
)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.__call__()
for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
The MgpstrModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
MgpstrForSceneTextRecognition
class transformers.MgpstrForSceneTextRecognition
(
config: MgpstrConfig
)
Parameters
config (MgpstrConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MGP-STR Model transformer with three classification heads on top (three A^3 modules and three linear layers on top
of the transformer encoder output) for scene text recognition (STR).
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values
output_attentions = None
output_a3_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.mgp_str.modeling_mgp_str.MgpstrModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.__call__()
for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
output_a3_attentions (bool, optional) —
Whether or not to return the attentions tensors of a3 modules. See a3_attentions under returned tensors
for more detail.
Returns
transformers.models.mgp_str.modeling_mgp_str.MgpstrModelOutput or tuple(torch.FloatTensor)
A transformers.models.mgp_str.modeling_mgp_str.MgpstrModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.mgp_str.configuration_mgp_str.MgpstrConfig'>) and inputs.
logits (tuple(torch.FloatTensor) of shape (batch_size, config.num_character_labels)) — Tuple of torch.FloatTensor (one for the output of character of shape (batch_size, config.max_token_length, config.num_character_labels), + one for the output of bpe of shape (batch_size, config.max_token_length, config.num_bpe_labels), + one for the output of wordpiece of shape (batch_size, config.max_token_length, config.num_wordpiece_labels)) .
Classification scores (before SoftMax) of character, bpe and wordpiece.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, config.max_token_length, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
a3_attentions (tuple(torch.FloatTensor), optional, returned when output_a3_attentions=True is passed or when config.output_a3_attentions=True) — Tuple of torch.FloatTensor (one for the attention of character, + one for the attention of bpe, + one for the attention of wordpiece) of shape (batch_size, config.max_token_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MgpstrForSceneTextRecognition forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import MgpstrProcessor, MgpstrForSceneTextRecognition
import requests
from PIL import Image
# load image from the IIIT-5k dataset
url = "https://i.postimg.cc/ZKwLg2Gw/367-14.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
processor = MgpstrProcessor.from_pretrained("alibaba-damo/mgp-str-base")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
model = MgpstrForSceneTextRecognition.from_pretrained("alibaba-damo/mgp-str-base")
# inference
outputs = model(pixel_values)
out_strs = processor.batch_decode(outputs.logits)
out_strs["generated_text"]
'["ticket"]'
Nezha
Overview
The Nezha model was proposed in NEZHA: Neural Contextualized Representation for Chinese Language Understanding by Junqiu Wei et al.
The abstract from the paper is the following:
The pre-trained language models have achieved great successes in various natural language understanding (NLU) tasks
due to its capacity to capture the deep contextualized information in text by pre-training on large-scale corpora.
In this technical report, we present our practice of pre-training language models named NEZHA (NEural contextualiZed
representation for CHinese lAnguage understanding) on Chinese corpora and finetuning for the Chinese NLU tasks.
The current version of NEZHA is based on BERT with a collection of proven improvements, which include Functional
Relative Positional Encoding as an effective positional encoding scheme, Whole Word Masking strategy,
Mixed Precision Training and the LAMB Optimizer in training the models. The experimental results show that NEZHA
achieves the state-of-the-art performances when finetuned on several representative Chinese tasks, including
named entity recognition (People’s Daily NER), sentence matching (LCQMC), Chinese sentiment classification (ChnSenti)
and natural language inference (XNLI).
This model was contributed by sijunhe. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
NezhaConfig
class transformers.NezhaConfig
(
vocab_size = 21128
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
max_relative_position = 64
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
classifier_dropout = 0.1
pad_token_id = 0
bos_token_id = 2
eos_token_id = 3
use_cache = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 21128) —
Vocabulary size of the NEZHA model. Defines the different tokens that can be represented by the
inputs_ids passed to the forward method of NezhaModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
The dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to “gelu”) —
The non-linear activation function (function or string) in the encoder and pooler.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
(e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed into NezhaModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
classifier_dropout (float, optional, defaults to 0.1) —
The dropout ratio for attached classifiers.
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
This is the configuration class to store the configuration of a NezhaModel. It is used to instantiate a Nezha
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Nezha
sijunhe/nezha-cn-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import NezhaConfig, NezhaModel
# Initializing a Nezha configuration
configuration = NezhaConfig()
# Initializing a model (with random weights) from the Nezha-base style configuration
model = NezhaModel(configuration)
# Accessing the model configuration
configuration = model.config
NezhaModel
class transformers.NezhaModel
(
config
add_pooling_layer = True
)
Parameters
config (NezhaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Nezha Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder, the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and
add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass.
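For instance, a decoder-style configuration could be sketched as follows (an illustrative sketch only, reusing the checkpoint from the examples below, not an official recipe):
from transformers import NezhaConfig, NezhaModel
config = NezhaConfig.from_pretrained("sijunhe/nezha-cn-base", is_decoder=True, add_cross_attention=True)
model = NezhaModel.from_pretrained("sijunhe/nezha-cn-base", config=config)
# the forward pass now also accepts encoder_hidden_states (and optionally encoder_attention_mask)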
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NezhaConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The NezhaModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, NezhaModel
import torch
tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")
model = NezhaModel.from_pretrained("sijunhe/nezha-cn-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
NezhaForPreTraining
class transformers.NezhaForPreTraining
(
config
)
Parameters
config (NezhaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Nezha Model with two heads on top as done during the pretraining: a masked language modeling head and a next sentence prediction (classification) head.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
next_sentence_label: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.nezha.modeling_nezha.NezhaForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional):
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked);
the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
next_sentence_label (torch.LongTensor of shape (batch_size,), optional):
Labels for computing the next sequence prediction (classification) loss. Input should be a sequence
pair (see input_ids docstring). Indices should be in [0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
kwargs (Dict[str, any], optional, defaults to {}):
Used to hide legacy arguments that have been deprecated.
Returns
transformers.models.nezha.modeling_nezha.NezhaForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.nezha.modeling_nezha.NezhaForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NezhaConfig) and inputs.
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The NezhaForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, NezhaForPreTraining
import torch
tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")
model = NezhaForPreTraining.from_pretrained("sijunhe/nezha-cn-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.prediction_logits
seq_relationship_logits = outputs.seq_relationship_logits
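The snippet above only runs inference. To obtain the combined pretraining loss, you can also pass labels and next_sentence_label as described in the parameters above; the following is a minimal sketch that reuses model and inputs from the example, with placeholder label tensors that are illustrative only, not real pretraining targets.
# Minimal sketch with placeholder targets, reusing `model` and `inputs` from above.
mlm_labels = inputs.input_ids.clone()        # toy MLM targets: predict every input token
next_sentence_label = torch.LongTensor([0])  # pretend the (single) pair is a true continuation
outputs = model(**inputs, labels=mlm_labels, next_sentence_label=next_sentence_label)
loss = outputs.loss  # sum of the masked LM loss and the next sentence prediction loss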
NezhaForMaskedLM
class transformers.NezhaForMaskedLM
(
config
)
Parameters
config (NezhaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Nezha Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NezhaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The NezhaForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, NezhaForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")
model = NezhaForMaskedLM.from_pretrained("sijunhe/nezha-cn-base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
NezhaForNextSentencePrediction
class transformers.NezhaForNextSentencePrediction
(
config
)
Parameters
config (NezhaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Nezha Model with a next sentence prediction (classification) head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair
(see input_ids docstring). Indices should be in [0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
Returns
transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.NextSentencePredictorOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NezhaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when next_sentence_label is provided) — Next sequence prediction (classification) loss.
logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The NezhaForNextSentencePrediction forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, NezhaForNextSentencePrediction
import torch
tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")
model = NezhaForNextSentencePrediction.from_pretrained("sijunhe/nezha-cn-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
next_sentence = "The sky is blue due to the shorter wavelength of blue light."
encoding = tokenizer(prompt, next_sentence, return_tensors="pt")
outputs = model(**encoding, labels=torch.LongTensor([1]))
logits = outputs.logits
assert logits[0, 0] < logits[0, 1] # next sentence was random
NezhaForSequenceClassification
class transformers.NezhaForSequenceClassification
(
config
)
Parameters
config (NezhaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Nezha Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NezhaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The NezhaForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, NezhaForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")
model = NezhaForSequenceClassification.from_pretrained("sijunhe/nezha-cn-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = NezhaForSequenceClassification.from_pretrained("sijunhe/nezha-cn-base", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, NezhaForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")
model = NezhaForSequenceClassification.from_pretrained("sijunhe/nezha-cn-base", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = NezhaForSequenceClassification.from_pretrained(
    "sijunhe/nezha-cn-base", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
NezhaForMultipleChoice
class transformers.NezhaForMultipleChoice
(
config
)
Parameters
config (NezhaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Nezha Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NezhaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The NezhaForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, NezhaForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")
model = NezhaForMultipleChoice.from_pretrained("sijunhe/nezha-cn-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
NezhaForTokenClassification
class transformers.NezhaForTokenClassification
(
config
)
Parameters
config (NezhaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Nezha Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NezhaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The NezhaForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, NezhaForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")
model = NezhaForTokenClassification.from_pretrained("sijunhe/nezha-cn-base")
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
NezhaForQuestionAnswering
class transformers.NezhaForQuestionAnswering
(
config
)
Parameters
config (NezhaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Nezha Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NezhaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The NezhaForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, NezhaForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")
model = NezhaForQuestionAnswering.from_pretrained("sijunhe/nezha-cn-base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
Video Vision Transformer (ViViT)
Overview
The Vivit model was proposed in ViViT: A Video Vision Transformer by Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid.
The paper proposes one of the first successful sets of pure-transformer based models for video understanding.
The abstract from the paper is the following:
We present pure-transformer based models for video classification, drawing upon the recent success of such models in image classification. Our model extracts spatio-temporal tokens from the input video, which are then encoded by a series of transformer layers. In order to handle the long sequences of tokens encountered in video, we propose several, efficient variants of our model which factorise the spatial- and temporal-dimensions of the input. Although transformer-based models are known to only be effective when large training datasets are available, we show how we can effectively regularise the model during training and leverage pretrained image models to be able to train on comparatively small datasets. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple video classification benchmarks including Kinetics 400 and 600, Epic Kitchens, Something-Something v2 and Moments in Time, outperforming prior methods based on deep 3D convolutional networks.
This model was contributed by jegormeister. The original code (written in JAX) can be found here.
VivitConfig
class transformers.VivitConfig
(
image_size = 224
num_frames = 32
tubelet_size = [2, 16, 16]
num_channels = 3
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu_fast'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
initializer_range = 0.02
layer_norm_eps = 1e-06
qkv_bias = True
**kwargs
)
Parameters
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
num_frames (int, optional, defaults to 32) —
The number of frames in each video.
tubelet_size (List[int], optional, defaults to [2, 16, 16]) —
The size (resolution) of each tubelet.
num_channels (int, optional, defaults to 3) —
The number of input channels.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu_fast") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu", "gelu_fast" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-06) —
The epsilon used by the layer normalization layers.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
This is the configuration class to store the configuration of a VivitModel. It is used to instantiate a ViViT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the ViViT
google/vivit-b-16x2-kinetics400 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import VivitConfig, VivitModel
# Initializing a ViViT google/vivit-b-16x2-kinetics400 style configuration
configuration = VivitConfig()
# Initializing a model (with random weights) from the google/vivit-b-16x2-kinetics400 style configuration
model = VivitModel(configuration)
# Accessing the model configuration
configuration = model.config
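Beyond the defaults, individual configuration fields can be overridden when the configuration is built from scratch. The following sketch uses arbitrary illustrative values that are not taken from the original documentation.
from transformers import VivitConfig, VivitModel
# Illustrative values only: a shorter clip and a smaller encoder
custom_configuration = VivitConfig(
    num_frames=16,             # frames per video clip
    tubelet_size=[2, 32, 32],  # (temporal, height, width) size of each spatio-temporal patch
    num_hidden_layers=6,       # fewer Transformer layers, e.g. for quick experiments
)
custom_model = VivitModel(custom_configuration)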
VivitImageProcessor
class transformers.VivitImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_center_crop: bool = True
crop_size: typing.Dict[str, int] = None
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
offset: bool = True
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the
do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 256}) —
Size of the output image after resizing. The shortest edge of the image will be resized to
size["shortest_edge"] while maintaining the aspect ratio of the original image. Can be overridden by
size in the preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the
preprocess method.
do_center_crop (bool, optional, defaults to True) —
Whether to center crop the image to the specified crop_size. Can be overridden by the do_center_crop
parameter in the preprocess method.
crop_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) —
Size of the image after applying the center crop. Can be overridden by the crop_size parameter in the
preprocess method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Defines the scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter
in the preprocess method.
offset (bool, optional, defaults to True) —
Whether to scale the image in both negative and positive directions. Can be overridden by the offset parameter in
the preprocess method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
Constructs a Vivit image processor.
preprocess
(
videos: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: bool = None
size: typing.Dict[str, int] = None
resample: Resampling = None
do_center_crop: bool = None
crop_size: typing.Dict[str, int] = None
do_rescale: bool = None
rescale_factor: float = None
offset: bool = None
do_normalize: bool = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
videos (ImageInput) —
Video frames to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after applying resize.
resample (PILImageResampling, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only
has an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the image after applying the center crop.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values to [-1, 1] if offset is True, or to [0, 1] otherwise.
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
offset (bool, optional, defaults to self.offset) —
Whether to scale the image in both negative and positive directions.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: Use the inferred channel dimension format of the input image.
Preprocess an image or batch of images.
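The reference above only lists the arguments; as a minimal sketch (not part of the original documentation), preprocess can be called on a list of decoded frames, here simulated with random NumPy arrays standing in for a real video.
import numpy as np
from transformers import VivitImageProcessor
image_processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")
# Placeholder input: 32 random RGB frames standing in for a decoded video clip
video = [np.random.randint(0, 256, size=(360, 640, 3), dtype=np.uint8) for _ in range(32)]
inputs = image_processor.preprocess(video, return_tensors="pt")
print(inputs["pixel_values"].shape)  # e.g. torch.Size([1, 32, 3, 224, 224]) with the default resize/crop settings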
VivitModel
class transformers.VivitModel
(
config
add_pooling_layer = True
)
Parameters
config (VivitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ViViT Transformer model outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values = None
head_mask = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using VivitImageProcessor. See
VivitImageProcessor.preprocess() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (VivitConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The VivitModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
import av
import numpy as np
from transformers import VivitImageProcessor, VivitModel
from huggingface_hub import hf_hub_download
np.random.seed(0)
def read_video_pyav(container, indices):
    '''
    Decode the video with PyAV decoder.
    Args:
        container (`av.container.input.InputContainer`): PyAV container.
        indices (`List[int]`): List of frame indices to decode.
    Returns:
        result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
    '''
    frames = []
    container.seek(0)
    start_index = indices[0]
    end_index = indices[-1]
    for i, frame in enumerate(container.decode(video=0)):
        if i > end_index:
            break
        if i >= start_index and i in indices:
            frames.append(frame)
    return np.stack([x.to_ndarray(format="rgb24") for x in frames])

def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    converted_len = int(clip_len * frame_sample_rate)
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
    return indices

# video clip consists of 300 frames (10 seconds at 30 FPS)
file_path = hf_hub_download(
    repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
)
container = av.open(file_path)

# sample 32 frames and decode them with the PyAV helper defined above
indices = sample_frame_indices(clip_len=32, frame_sample_rate=1, seg_len=container.streams.video[0].frames)
video = read_video_pyav(container=container, indices=indices)
image_processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")
model = VivitModel.from_pretrained("google/vivit-b-16x2-kinetics400")
# prepare video for the model
inputs = image_processor(list(video), return_tensors="pt")
# forward pass
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 3137, 768]
VivitForVideoClassification
class transformers.VivitForVideoClassification
(
config
)
Parameters
config (VivitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ViViT Transformer model with a video classification head on top (a linear layer on top of the final hidden state of the
[CLS] token) e.g. for Kinetics-400.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values = None
head_mask = None
labels = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using VivitImageProcessor. See
VivitImageProcessor.preprocess() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (VivitConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states
(also called feature maps) of the model at the output of each stage.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The VivitForVideoClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
import av
import numpy as np
import torch
from transformers import VivitImageProcessor, VivitForVideoClassification
from huggingface_hub import hf_hub_download
np.random.seed(0)
def read_video_pyav(container, indices):
... '''
... Decode the video with PyAV decoder.
... Args:
... container (`av.container.input.InputContainer`): PyAV container.
... indices (`List[int]`): List of frame indices to decode.
... Returns:
... result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
... '''
... frames = []
... container.seek(0)
... start_index = indices[0]
... end_index = indices[-1]
... for i, frame in enumerate(container.decode(video=0)):
... if i > end_index:
... break
... if i >= start_index and i in indices:
... frames.append(frame)
... return np.stack([x.to_ndarray(format="rgb24") for x in frames])
def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
... converted_len = int(clip_len * frame_sample_rate)
... end_idx = np.random.randint(converted_len, seg_len)
... start_idx = end_idx - converted_len
... indices = np.linspace(start_idx, end_idx, num=clip_len)
... indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
... return indices
# video clip consists of 300 frames (10 seconds at 30 FPS)
file_path = hf_hub_download(
... repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
... )
container = av.open(file_path)
# sample 32 frames
indices = sample_frame_indices(clip_len=32, frame_sample_rate=1, seg_len=container.streams.video[0].frames)
video = read_video_pyav(container=container, indices=indices)
image_processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")
model = VivitForVideoClassification.from_pretrained("google/vivit-b-16x2-kinetics400")
inputs = image_processor(list(video), return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
... logits = outputs.logits
# model predicts one of the 400 Kinetics-400 classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
eating spaghetti
T5v1.1
Overview
T5v1.1 was released in the google-research/text-to-text-transfer-transformer
repository by Colin Raffel et al. It’s an improved version of the original T5 model.
One can directly plug in the weights of T5v1.1 into a T5 model, like so:
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")
T5 Version 1.1 includes the following improvements compared to the original T5 model:
GEGLU activation in the feed-forward hidden layer, rather than ReLU. See this paper. A rough sketch of this gated feed-forward follows this list.
Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
Pre-trained on C4 only without mixing in the downstream tasks.
No parameter sharing between the embedding and classifier layer.
“xl” and “xxl” replace “3B” and “11B”. The model shapes are a bit different - larger d_model and smaller
num_heads and d_ff.
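As a rough sketch of the gated-GELU (GEGLU) feed-forward mentioned in the list above (the class and attribute names here are illustrative, not the library's internal modules):
import torch
import torch.nn as nn

class GatedGeluFeedForward(nn.Module):
    # illustrative GEGLU-style feed-forward: GELU(x W_0) * (x W_1), projected back to d_model
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.wi_0 = nn.Linear(d_model, d_ff, bias=False)
        self.wi_1 = nn.Linear(d_model, d_ff, bias=False)
        self.wo = nn.Linear(d_ff, d_model, bias=False)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return self.wo(self.act(self.wi_0(hidden_states)) * self.wi_1(hidden_states))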
Note: T5 Version 1.1 was only pre-trained on C4, excluding any supervised
training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5
model. Since t5v1.1 was pre-trained without supervision, there’s no real advantage to using a task prefix during single-task
fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.
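For instance, a minimal single-task fine-tuning step could look like the following sketch (the example text is illustrative and no task prefix is used):
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-base")
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")

# no task prefix for single-task fine-tuning of T5v1.1
input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").input_ids
labels = tokenizer("photosynthesis", return_tensors="pt").input_ids
loss = model(input_ids=input_ids, labels=labels).loss
loss.backward()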
Google has released the following variants:
google/t5-v1_1-small
google/t5-v1_1-base
google/t5-v1_1-large
google/t5-v1_1-xl
google/t5-v1_1-xxl.
One can refer to T5’s documentation page for all tips, code examples and notebooks.
This model was contributed by patrickvonplaten. The original code can be
found here.
QDQBERT
Overview
The QDQBERT model can be referenced in Integer Quantization for Deep Learning Inference: Principles and Empirical
Evaluation by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius
Micikevicius.
The abstract from the paper is the following:
Quantization techniques can reduce the size of Deep Neural Networks and improve inference latency and throughput by
taking advantage of high throughput integer instructions. In this paper we review the mathematical aspects of
quantization parameters and evaluate their choices on a wide range of neural network models for different application
domains, including vision, speech, and language. We focus on quantization techniques that are amenable to acceleration
by processors with high-throughput integer math pipelines. We also present a workflow for 8-bit quantization that is
able to maintain accuracy within 1% of the floating-point baseline on all networks studied, including models that are
more difficult to quantize, such as MobileNets and BERT-large.
Tips:
The QDQBERT model adds fake quantization operations (pairs of QuantizeLinear/DequantizeLinear ops) to (i) linear layer
inputs and weights, (ii) matmul inputs, and (iii) residual add inputs in the BERT model.
QDQBERT requires the Pytorch Quantization Toolkit. To install it: pip install pytorch-quantization --extra-index-url https://pypi.ngc.nvidia.com
The QDQBERT model can be loaded from any checkpoint of a HuggingFace BERT model (for example bert-base-uncased) and
used to perform Quantization Aware Training/Post Training Quantization.
A complete example of using the QDQBERT model to perform Quantization Aware Training and Post Training Quantization for
the SQuAD task can be found at transformers/examples/research_projects/quantization-qdqbert/.
This model was contributed by shangz.
Set default quantizers
The QDQBERT model adds fake quantization operations (pairs of QuantizeLinear/DequantizeLinear ops) to BERT via
TensorQuantizer in the Pytorch Quantization Toolkit. TensorQuantizer is the module
for quantizing tensors, with QuantDescriptor defining how the tensor should be quantized. Refer to the Pytorch
Quantization Toolkit user guide for more details.
Before creating the QDQBERT model, one has to set the default QuantDescriptor defining the default tensor quantizers.
Example:
import pytorch_quantization.nn as quant_nn
from pytorch_quantization.tensor_quant import QuantDescriptor
# The default tensor quantizer is set to use Max calibration method
input_desc = QuantDescriptor(num_bits=8, calib_method="max")
# The default tensor quantizer is set to be per-channel quantization for weights
weight_desc = QuantDescriptor(num_bits=8, axis=((0,)))
quant_nn.QuantLinear.set_default_quant_desc_input(input_desc)
quant_nn.QuantLinear.set_default_quant_desc_weight(weight_desc)
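With the default quantizers set, a QDQBERT model can then be created from any HuggingFace BERT checkpoint, as noted in the tips above; a minimal sketch (the checkpoint and classification head are just examples):
from transformers import QDQBertForSequenceClassification

# fake quantization ops are inserted when the model is built; weights come from the BERT checkpoint
model = QDQBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)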
Calibration
Calibration is the process of passing data samples to the quantizer and deciding the best scaling factors for the
tensors. After setting up the tensor quantizers, one can use the following example to calibrate the model:
# Find the TensorQuantizer and enable calibration
for name, module in model.named_modules():
... if name.endswith("_input_quantizer"):
... module.enable_calib()
... module.disable_quant() # Use full precision data to calibrate
# Feeding data samples
model(x)
# ...
# Finalize calibration
for name, module in model.named_modules():
... if name.endswith("_input_quantizer"):
... module.load_calib_amax()
... module.enable_quant()
# If running on GPU, call .cuda() again because new tensors will be created by the calibration process
model.cuda()
# Keep running the quantized model
# ...
Export to ONNX
The goal of exporting to ONNX is to deploy inference with TensorRT. Fake
quantization will be broken into a pair of QuantizeLinear/DequantizeLinear ONNX ops. After setting the static member of
TensorQuantizer to use Pytorch’s own fake quantization functions, the fake-quantized model can be exported to ONNX by
following the instructions in torch.onnx. Example:
import torch
from pytorch_quantization.nn import TensorQuantizer
TensorQuantizer.use_fb_fake_quant = True
# Load the calibrated model
...
# ONNX export
torch.onnx.export(...)
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
QDQBertConfig
class transformers.QDQBertConfig
<
source
>
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
use_cache = True
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the QDQBERT model. Defines the number of different tokens that can be represented by the
input_ids passed when calling QDQBertModel.
hidden_size (int, optional, defaults to 768) —
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling QDQBertModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
This is the configuration class to store the configuration of a QDQBertModel. It is used to instantiate an
QDQBERT model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the BERT
bert-base-uncased architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import QDQBertModel, QDQBertConfig
# Initializing a QDQBERT bert-base-uncased style configuration
configuration = QDQBertConfig()
# Initializing a model from the bert-base-uncased style configuration
model = QDQBertModel(configuration)
# Accessing the model configuration
configuration = model.config
QDQBertModel
class transformers.QDQBertModel
<
source
>
(
config
add_pooling_layer: bool = True
)
Parameters
config (QDQBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare QDQBERT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and
add_cross_attention set to True; an encoder_hidden_states input is then expected in the forward pass.
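As a minimal sketch of this decoder setup (the weights are randomly initialized here, purely for illustration):
from transformers import QDQBertConfig, QDQBertModel

config = QDQBertConfig(is_decoder=True, add_cross_attention=True)
decoder = QDQBertModel(config)  # its forward pass then expects encoder_hidden_states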
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (QDQBertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The QDQBertModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, QDQBertModel
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = QDQBertModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
QDQBertLMHeadModel
class transformers.QDQBertLMHeadModel
<
source
>
(
config
)
Parameters
config (QDQBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
QDQBERT Model with a language modeling head on top for CLM fine-tuning.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.LongTensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are
ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (QDQBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The QDQBertLMHeadModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, QDQBertLMHeadModel, QDQBertConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
config = QDQBertConfig.from_pretrained("bert-base-cased")
config.is_decoder = True
model = QDQBertLMHeadModel.from_pretrained("bert-base-cased", config=config)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
QDQBertForMaskedLM
class transformers.QDQBertForMaskedLM
<
source
>
(
config
)
Parameters
config (QDQBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
QDQBERT Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (QDQBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The QDQBertForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, QDQBertForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = QDQBertForMaskedLM.from_pretrained("bert-base-uncased")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
QDQBertForSequenceClassification
class transformers.QDQBertForSequenceClassification
<
source
>
(
config
)
Parameters
config (QDQBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (QDQBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The QDQBertForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, QDQBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = QDQBertForSequenceClassification.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = QDQBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, QDQBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = QDQBertForSequenceClassification.from_pretrained("bert-base-uncased", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = QDQBertForSequenceClassification.from_pretrained(
... "bert-base-uncased", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
QDQBertForNextSentencePrediction
class transformers.QDQBertForNextSentencePrediction
<
source
>
(
config
)
Parameters
config (QDQBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a next sentence prediction (classification) head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair
(see input_ids docstring). Indices should be in [0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
Returns
transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.NextSentencePredictorOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (QDQBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when next_sentence_label is provided) — Next sequence prediction (classification) loss.
logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The QDQBertForNextSentencePrediction forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, QDQBertForNextSentencePrediction
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = QDQBertForNextSentencePrediction.from_pretrained("bert-base-uncased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
next_sentence = "The sky is blue due to the shorter wavelength of blue light."
encoding = tokenizer(prompt, next_sentence, return_tensors="pt")
outputs = model(**encoding, labels=torch.LongTensor([1]))
logits = outputs.logits
assert logits[0, 0] < logits[0, 1] # next sentence was random
QDQBertForMultipleChoice
class transformers.QDQBertForMultipleChoice
<
source
>
(
config
)
Parameters
config (QDQBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (QDQBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The QDQBertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, QDQBertForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = QDQBertForMultipleChoice.from_pretrained("bert-base-uncased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
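The classifier head in this example is still untrained (as noted above), but for completeness the predicted choice can be read off the logits, whose second dimension indexes the choices:
# logits has shape (batch_size, num_choices); the argmax over the last dimension is the predicted choice
predicted_choice = logits.argmax(dim=-1).item()  # 0 -> choice0, 1 -> choice1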
QDQBertForTokenClassification
class transformers.QDQBertForTokenClassification
<
source
>
(
config
)
Parameters
config (QDQBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
QDQBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (QDQBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The QDQBertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, QDQBertForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = QDQBertForTokenClassification.from_pretrained("bert-base-uncased")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
QDQBertForQuestionAnswering
class transformers.QDQBertForQuestionAnswering
<
source
>
(
config
)
Parameters
config (QDQBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
QDQBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (QDQBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The QDQBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, QDQBertForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = QDQBertForQuestionAnswering.from_pretrained("bert-base-uncased")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
LayoutLMv3
Overview
The LayoutLMv3 model was proposed in LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
LayoutLMv3 simplifies LayoutLMv2 by using patch embeddings (as in ViT) instead of leveraging a CNN backbone, and pre-trains the model on 3 objectives: masked language modeling (MLM), masked image modeling (MIM)
and word-patch alignment (WPA).
The abstract from the paper is the following:
Self-supervised pre-training techniques have achieved remarkable progress in Document AI. Most multimodal pre-trained models use a masked language modeling objective to learn bidirectional representations on the text modality, but they differ in pre-training objectives for the image modality. This discrepancy adds difficulty to multimodal representation learning. In this paper, we propose LayoutLMv3 to pre-train multimodal Transformers for Document AI with unified text and image masking. Additionally, LayoutLMv3 is pre-trained with a word-patch alignment objective to learn cross-modal alignment by predicting whether the corresponding image patch of a text word is masked. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model for both text-centric and image-centric Document AI tasks. Experimental results show that LayoutLMv3 achieves state-of-the-art performance not only in text-centric tasks, including form understanding, receipt understanding, and document visual question answering, but also in image-centric tasks such as document image classification and document layout analysis.
Tips:
In terms of data processing, LayoutLMv3 is identical to its predecessor LayoutLMv2, except that:
images need to be resized and normalized with channels in regular RGB format, whereas LayoutLMv2 normalizes the images internally and expects the channels in BGR format;
text is tokenized using byte-pair encoding (BPE), as opposed to WordPiece.
Due to these differences in data preprocessing, one can use LayoutLMv3Processor, which internally combines a LayoutLMv3ImageProcessor (for the image modality) and a LayoutLMv3Tokenizer/LayoutLMv3TokenizerFast (for the text modality) to prepare all data for the model (a short usage sketch is shown after these tips).
Regarding the usage of LayoutLMv3Processor, we refer to the usage guide of its predecessor.
Demo notebooks for LayoutLMv3 can be found here.
Demo scripts can be found here.
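As a minimal sketch of this preprocessing flow (assuming Tesseract is installed, since the image processor applies OCR by default, and using a hypothetical local document image):
from PIL import Image
from transformers import LayoutLMv3Processor

processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base")  # apply_ocr=True by default

image = Image.open("document.png").convert("RGB")  # hypothetical document image
encoding = processor(image, return_tensors="pt")
# encoding contains input_ids, attention_mask, bbox and pixel_values ready for LayoutLMv3Model
print(encoding.keys())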
LayoutLMv3 architecture. Taken from the original paper.
This model was contributed by nielsr. The TensorFlow version of this model was added by chriskoo, tokec, and lre. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLMv3. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
LayoutLMv3 is nearly identical to LayoutLMv2, so we’ve also included LayoutLMv2 resources you can adapt for LayoutLMv3 tasks. For these notebooks, take care to use LayoutLMv2Processor instead when preparing data for the model!
Text Classification
LayoutLMv2ForSequenceClassification is supported by this notebook.
Text classification task guide
Token Classification
LayoutLMv3ForTokenClassification is supported by this example script and notebook.
A notebook for how to perform inference with LayoutLMv2ForTokenClassification and a notebook for how to perform inference when no labels are available with LayoutLMv2ForTokenClassification.
A notebook for how to finetune LayoutLMv2ForTokenClassification with the 🤗 Trainer.
Token classification task guide
Question Answering
LayoutLMv2ForQuestionAnswering is supported by this notebook.
Question answering task guide
Document question answering
Document question answering task guide
LayoutLMv3Config
class transformers.LayoutLMv3Config
<
source
>
(
vocab_size = 50265
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-05
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
max_2d_position_embeddings = 1024
coordinate_size = 128
shape_size = 128
has_relative_attention_bias = True
rel_pos_bins = 32
max_rel_pos = 128
rel_2d_pos_bins = 64
max_rel_2d_pos = 256
has_spatial_attention_bias = True
text_embed = True
visual_embed = True
input_size = 224
num_channels = 3
patch_size = 16
classifier_dropout = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50265) —
Vocabulary size of the LayoutLMv3 model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling LayoutLMv3Model.
hidden_size (int, optional, defaults to 768) —
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling LayoutLMv3Model.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
max_2d_position_embeddings (int, optional, defaults to 1024) —
The maximum value that the 2D position embedding might ever be used with. Typically set this to something
large just in case (e.g., 1024).
coordinate_size (int, optional, defaults to 128) —
Dimension of the coordinate embeddings.
shape_size (int, optional, defaults to 128) —
Dimension of the width and height embeddings.
has_relative_attention_bias (bool, optional, defaults to True) —
Whether or not to use a relative attention bias in the self-attention mechanism.
rel_pos_bins (int, optional, defaults to 32) —
The number of relative position bins to be used in the self-attention mechanism.
max_rel_pos (int, optional, defaults to 128) —
The maximum number of relative positions to be used in the self-attention mechanism.
max_rel_2d_pos (int, optional, defaults to 256) —
The maximum number of relative 2D positions in the self-attention mechanism.
rel_2d_pos_bins (int, optional, defaults to 64) —
The number of 2D relative position bins in the self-attention mechanism.
has_spatial_attention_bias (bool, optional, defaults to True) —
Whether or not to use a spatial attention bias in the self-attention mechanism.
visual_embed (bool, optional, defaults to True) —
Whether or not to add patch embeddings.
input_size (int, optional, defaults to 224) —
The size (resolution) of the images.
num_channels (int, optional, defaults to 3) —
The number of channels of the images.
patch_size (int, optional, defaults to 16) —
The size (resolution) of the patches.
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
This is the configuration class to store the configuration of a LayoutLMv3Model. It is used to instantiate a
LayoutLMv3 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the LayoutLMv3
microsoft/layoutlmv3-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import LayoutLMv3Config, LayoutLMv3Model
# Initializing a LayoutLMv3 microsoft/layoutlmv3-base style configuration
configuration = LayoutLMv3Config()
# Initializing a model (with random weights) from the microsoft/layoutlmv3-base style configuration
model = LayoutLMv3Model(configuration)
# Accessing the model configuration
configuration = model.config
LayoutLMv3FeatureExtractor
class transformers.LayoutLMv3FeatureExtractor
<
source
>
(
*args
**kwargs
)
__call__
<
source
>
(
images
**kwargs
)
Preprocess an image or a batch of images.
LayoutLMv3ImageProcessor
class transformers.LayoutLMv3ImageProcessor
<
source
>
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_rescale: bool = True
rescale_value: float = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.Iterable[float]] = None
image_std: typing.Union[float, typing.Iterable[float]] = None
apply_ocr: bool = True
ocr_lang: typing.Optional[str] = None
tesseract_config: typing.Optional[str] = ''
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to (size["height"], size["width"]). Can be
overridden by do_resize in preprocess.
size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) —
Size of the image after resizing. Can be overridden by size in preprocess.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image. Can be overridden by resample in preprocess.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image’s pixel values by the specified rescale_value. Can be overridden by
do_rescale in preprocess.
rescale_factor (float, optional, defaults to 1 / 255) —
Value by which the image’s pixel values are rescaled. Can be overridden by rescale_factor in
preprocess.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (Iterable[float] or float, optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (Iterable[float] or float, optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
apply_ocr (bool, optional, defaults to True) —
Whether to apply the Tesseract OCR engine to get words + normalized bounding boxes. Can be overridden by
the apply_ocr parameter in the preprocess method.
ocr_lang (str, optional) —
The language, specified by its ISO code, to be used by the Tesseract OCR engine. By default, English is
used. Can be overridden by the ocr_lang parameter in the preprocess method.
tesseract_config (str, optional) —
Any additional custom configuration flags that are forwarded to the config parameter when calling
Tesseract. For example: '--psm 6'. Can be overridden by the tesseract_config parameter in the
preprocess method.
Constructs a LayoutLMv3 image processor.
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: bool = None
size: typing.Dict[str, int] = None
resample = None
do_rescale: bool = None
rescale_factor: float = None
do_normalize: bool = None
image_mean: typing.Union[float, typing.Iterable[float]] = None
image_std: typing.Union[float, typing.Iterable[float]] = None
apply_ocr: bool = None
ocr_lang: typing.Optional[str] = None
tesseract_config: typing.Optional[str] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Desired size of the output image after applying resize.
resample (int, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the PILImageResampling filters.
Only has an effect if do_resize is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image pixel values between [0, 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to apply to the image pixel values. Only has an effect if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or Iterable[float], optional, defaults to self.image_mean) —
Mean values to be used for normalization. Only has an effect if do_normalize is set to True.
image_std (float or Iterable[float], optional, defaults to self.image_std) —
Standard deviation values to be used for normalization. Only has an effect if do_normalize is set to
True.
apply_ocr (bool, optional, defaults to self.apply_ocr) —
Whether to apply the Tesseract OCR engine to get words + normalized bounding boxes.
ocr_lang (str, optional, defaults to self.ocr_lang) —
The language, specified by its ISO code, to be used by the Tesseract OCR engine. By default, English is
used.
tesseract_config (str, optional, defaults to self.tesseract_config) —
Any additional custom configuration flags that are forwarded to the config parameter when calling
Tesseract.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess an image or batch of images.
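A minimal usage sketch, assuming a local PIL image; with apply_ocr=False the image processor only resizes, rescales and normalizes, so no Tesseract installation is required:
from PIL import Image
from transformers import LayoutLMv3ImageProcessor

image_processor = LayoutLMv3ImageProcessor(apply_ocr=False)  # skip OCR, only image transforms
image = Image.open("document.png").convert("RGB")  # hypothetical local image
features = image_processor.preprocess(image, return_tensors="pt")
print(features["pixel_values"].shape)  # (1, 3, 224, 224) with the default size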
LayoutLMv3Tokenizer
class transformers.LayoutLMv3Tokenizer
<
source
>
(
vocab_file
merges_file
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = True
cls_token_box = [0, 0, 0, 0]
sep_token_box = [0, 0, 0, 0]
pad_token_box = [0, 0, 0, 0]
pad_token_label = -100
only_label_first_subword = True
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to True) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The RoBERTa tokenizer detects the beginning of words by the preceding space.)
cls_token_box (List[int], optional, defaults to [0, 0, 0, 0]) —
The bounding box to use for the special [CLS] token.
sep_token_box (List[int], optional, defaults to [0, 0, 0, 0]) —
The bounding box to use for the special [SEP] token.
pad_token_box (List[int], optional, defaults to [0, 0, 0, 0]) —
The bounding box to use for the special [PAD] token.
pad_token_label (int, optional, defaults to -100) —
The label to use for padding tokens. Defaults to -100, which is the ignore_index of PyTorch’s
CrossEntropyLoss.
only_label_first_subword (bool, optional, defaults to True) —
Whether or not to only label the first subword, in case word labels are provided.
Construct a LayoutLMv3 tokenizer. Based on the RoBERTa tokenizer (Byte-Pair Encoding or BPE).
LayoutLMv3Tokenizer can be used to turn words, word-level bounding boxes and optional word labels to
token-level input_ids, attention_mask, token_type_ids, bbox, and optional labels (for token
classification).
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
LayoutLMv3Tokenizer runs end-to-end tokenization based on byte-level Byte-Pair Encoding. It also turns the
word-level bounding boxes into token-level bounding boxes.
__call__
<
source
>
(
text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]]
text_pair: typing.Union[typing.List[str], typing.List[typing.List[str]], NoneType] = None
boxes: typing.Union[typing.List[typing.List[int]], typing.List[typing.List[typing.List[int]]]] = None
word_labels: typing.Union[typing.List[int], typing.List[typing.List[int]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
**kwargs
)
Parameters
text (str, List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence can be a string, a list of strings
(words of a single example or questions of a batch of examples) or a list of list of strings (batch of
words).
text_pair (List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence should be a list of strings
(pretokenized string).
boxes (List[List[int]], List[List[List[int]]]) —
Word-level bounding boxes. Each bounding box should be normalized to be on a 0-1000 scale.
word_labels (List[int], List[List[int]], optional) —
Word-level integer labels (for token classification tasks such as FUNSD, CORD).
add_special_tokens (bool, optional, defaults to True) —
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) —
If set to a number along with max_length, the overflowing tokens returned when
return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences with word-level normalized bounding boxes and optional labels.
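A short sketch of this call, using made-up words and 0-1000 normalized boxes purely for illustration:
from transformers import LayoutLMv3Tokenizer

tokenizer = LayoutLMv3Tokenizer.from_pretrained("microsoft/layoutlmv3-base")
words = ["Hello", "world"]  # hypothetical word-level input
boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]  # one normalized box per word
encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
print(encoding["input_ids"])
print(encoding["bbox"])  # token-level boxes, including boxes for the special tokens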
save_vocabulary
<
source
>
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
LayoutLMv3TokenizerFast
class transformers.LayoutLMv3TokenizerFast
<
source
>
(
vocab_file = None
merges_file = None
tokenizer_file = None
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = True
trim_offsets = True
cls_token_box = [0, 0, 0, 0]
sep_token_box = [0, 0, 0, 0]
pad_token_box = [0, 0, 0, 0]
pad_token_label = -100
only_label_first_subword = True
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to True) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The RoBERTa tokenizer detects the beginning of words by the preceding space.)
trim_offsets (bool, optional, defaults to True) —
Whether the post processing step should trim offsets to avoid including whitespaces.
cls_token_box (List[int], optional, defaults to [0, 0, 0, 0]) —
The bounding box to use for the special [CLS] token.
sep_token_box (List[int], optional, defaults to [0, 0, 0, 0]) —
The bounding box to use for the special [SEP] token.
pad_token_box (List[int], optional, defaults to [0, 0, 0, 0]) —
The bounding box to use for the special [PAD] token.
pad_token_label (int, optional, defaults to -100) —
The label to use for padding tokens. Defaults to -100, which is the ignore_index of PyTorch’s
CrossEntropyLoss.
only_label_first_subword (bool, optional, defaults to True) —
Whether or not to only label the first subword, in case word labels are provided.
Construct a “fast” LayoutLMv3 tokenizer (backed by HuggingFace’s tokenizers library). Based on BPE.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
__call__
<
source
>
(
text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]]
text_pair: typing.Union[typing.List[str], typing.List[typing.List[str]], NoneType] = None
boxes: typing.Union[typing.List[typing.List[int]], typing.List[typing.List[typing.List[int]]]] = None
word_labels: typing.Union[typing.List[int], typing.List[typing.List[int]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
**kwargs
)
Parameters
text (str, List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence can be a string, a list of strings
(words of a single example or questions of a batch of examples) or a list of list of strings (batch of
words).
text_pair (List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence should be a list of strings
(pretokenized string).
boxes (List[List[int]], List[List[List[int]]]) —
Word-level bounding boxes. Each bounding box should be normalized to be on a 0-1000 scale.
word_labels (List[int], List[List[int]], optional) —
Word-level integer labels (for token classification tasks such as FUNSD, CORD).
add_special_tokens (bool, optional, defaults to True) —
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) —
If set to a number along with max_length, the overflowing tokens returned when
return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences with word-level normalized bounding boxes and optional labels.
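The fast tokenizer is called the same way; the sketch below (with made-up inputs) additionally requests the offset mapping, which is only available with the Rust-backed fast tokenizer:
from transformers import LayoutLMv3TokenizerFast

tokenizer = LayoutLMv3TokenizerFast.from_pretrained("microsoft/layoutlmv3-base")
words = ["Hello", "world"]  # hypothetical word-level input
boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]  # one normalized box per word
encoding = tokenizer(words, boxes=boxes, return_offsets_mapping=True, return_tensors="pt")
print(encoding["offset_mapping"])  # character offsets of each token within its word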
LayoutLMv3Processor
class transformers.LayoutLMv3Processor
<
source
>
(
image_processor = None
tokenizer = None
**kwargs
)
Parameters
image_processor (LayoutLMv3ImageProcessor) —
An instance of LayoutLMv3ImageProcessor. The image processor is a required input.
tokenizer (LayoutLMv3Tokenizer or LayoutLMv3TokenizerFast) —
An instance of LayoutLMv3Tokenizer or LayoutLMv3TokenizerFast. The tokenizer is a required input.
Constructs a LayoutLMv3 processor which combines a LayoutLMv3 image processor and a LayoutLMv3 tokenizer into a
single processor.
LayoutLMv3Processor offers all the functionalities you need to prepare data for the model.
It first uses LayoutLMv3ImageProcessor to resize and normalize document images, and optionally applies OCR to
get words and normalized bounding boxes. These are then provided to LayoutLMv3Tokenizer or
LayoutLMv3TokenizerFast, which turns the words and bounding boxes into token-level input_ids,
attention_mask, token_type_ids, bbox. Optionally, one can provide integer word_labels, which are turned
into token-level labels for token classification tasks (such as FUNSD, CORD).
__call__
(
images
text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None
text_pair: typing.Union[typing.List[str], typing.List[typing.List[str]], NoneType] = None
boxes: typing.Union[typing.List[typing.List[int]], typing.List[typing.List[typing.List[int]]]] = None
word_labels: typing.Union[typing.List[int], typing.List[typing.List[int]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
pad_to_multiple_of: typing.Optional[int] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
**kwargs
)
This method first forwards the images argument to LayoutLMv3ImageProcessor.__call__(). In case
LayoutLMv3ImageProcessor was initialized with apply_ocr set to True, it passes the obtained words and
bounding boxes along with the additional arguments to LayoutLMv3Tokenizer.__call__() and returns the output,
together with resized and normalized pixel_values. In case LayoutLMv3ImageProcessor was initialized with
apply_ocr set to False, it passes the words (text/text_pair) and boxes specified by the user along
with the additional arguments to LayoutLMv3Tokenizer.__call__() and returns the output, together with
resized and normalized pixel_values.
Please refer to the docstring of the above two methods for more information.
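As a quick sketch of the first mode (apply_ocr=True, the default), only the document image needs to be passed and the image processor runs OCR itself. This assumes pytesseract and the Tesseract binary are installed; the dataset is the same one used in the examples below.
from datasets import load_dataset
from transformers import LayoutLMv3Processor

# apply_ocr defaults to True: Tesseract extracts the words and boxes from the image
processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base")
dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train")
image = dataset[0]["image"]
encoding = processor(image, return_tensors="pt")
print(encoding.keys())  # dict_keys including input_ids, attention_mask, bbox and pixel_values
The second mode (apply_ocr=False), where you supply your own words and boxes, is shown in the model examples further below.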
LayoutLMv3Model
class transformers.LayoutLMv3Model
(
config
)
Parameters
config (LayoutLMv3Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare LayoutLMv3 Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
bbox: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, token_sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (torch.LongTensor of shape (batch_size, token_sequence_length, 4), optional) —
Bounding boxes of each input sequence token. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner of the bounding box, and (x1,
y1) represents the position of the lower right corner.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Batch of document images. Each image is divided into patches of shape (num_channels, config.patch_size, config.patch_size) and the total number of patches (= patch_sequence_length) equals (height / config.patch_size) * (width / config.patch_size). For example, a 224 x 224 image with config.patch_size = 16 yields 14 * 14 = 196 patches.
attention_mask (torch.FloatTensor of shape (batch_size, token_sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, token_sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, token_sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, token_sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LayoutLMv3Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LayoutLMv3Model forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoProcessor, AutoModel
from datasets import load_dataset
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = AutoModel.from_pretrained("microsoft/layoutlmv3-base")
dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train")
example = dataset[0]
image = example["image"]
words = example["tokens"]
boxes = example["bboxes"]
encoding = processor(image, words, boxes=boxes, return_tensors="pt")
outputs = model(**encoding)
last_hidden_states = outputs.last_hidden_state
LayoutLMv3ForSequenceClassification
class transformers.LayoutLMv3ForSequenceClassification
(
config
)
Parameters
config (LayoutLMv3Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLMv3 Model with a sequence classification head on top (a linear layer on top of the final hidden state of the
[CLS] token) e.g. for document image classification tasks such as the
RVL-CDIP dataset.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
bbox: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.LongTensor] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (torch.LongTensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence token. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner of the bounding box, and (x1,
y1) represents the position of the lower right corner.
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Batch of document images. Each image is divided into patches of shape (num_channels, config.patch_size, config.patch_size) and the total number of patches (= patch_sequence_length) equals (height / config.patch_size) * (width / config.patch_size).
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LayoutLMv3Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LayoutLMv3ForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoProcessor, AutoModelForSequenceClassification
from datasets import load_dataset
import torch
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = AutoModelForSequenceClassification.from_pretrained("microsoft/layoutlmv3-base")
dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train")
example = dataset[0]
image = example["image"]
words = example["tokens"]
boxes = example["bboxes"]
encoding = processor(image, words, boxes=boxes, return_tensors="pt")
sequence_label = torch.tensor([1])
outputs = model(**encoding, labels=sequence_label)
loss = outputs.loss
logits = outputs.logits
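As a usage note (not part of the original example): to turn the logits into a label, take the argmax and look it up in the model's label mapping. With the untuned base checkpoint the classification head is randomly initialized, so the prediction is not meaningful until the model is fine-tuned.
predicted_class_idx = logits.argmax(-1).item()
print(model.config.id2label[predicted_class_idx])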
LayoutLMv3ForTokenClassification
class transformers.LayoutLMv3ForTokenClassification
(
config
)
Parameters
config (LayoutLMv3Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLMv3 Model with a token classification head on top (a linear layer on top of the final hidden states) e.g.
for sequence labeling (information extraction) tasks such as FUNSD,
SROIE, CORD and
Kleister-NDA.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
bbox: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
pixel_values: typing.Optional[torch.LongTensor] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (torch.LongTensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence token. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner of the bounding box, and (x1,
y1) represents the position of the lower right corner.
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Batch of document images. Each image is divided into patches of shape (num_channels, config.patch_size, config.patch_size) and the total number of patches (= patch_sequence_length) equals (height / config.patch_size) * (width / config.patch_size).
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LayoutLMv3Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LayoutLMv3ForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoProcessor, AutoModelForTokenClassification
from datasets import load_dataset
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = AutoModelForTokenClassification.from_pretrained("microsoft/layoutlmv3-base", num_labels=7)
dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train")
example = dataset[0]
image = example["image"]
words = example["tokens"]
boxes = example["bboxes"]
word_labels = example["ner_tags"]
encoding = processor(image, words, boxes=boxes, word_labels=word_labels, return_tensors="pt")
outputs = model(**encoding)
loss = outputs.loss
logits = outputs.logits
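As a follow-up sketch: token-level predictions are obtained with an argmax over the label dimension, and positions labeled -100 (special tokens and, by default, non-initial subwords) are usually ignored when evaluating.
predictions = logits.argmax(-1)  # shape (batch_size, sequence_length)
labels = encoding["labels"]
# keep only positions that carry a real word label
valid_predictions = [int(p) for p, l in zip(predictions[0].tolist(), labels[0].tolist()) if l != -100]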
LayoutLMv3ForQuestionAnswering
class transformers.LayoutLMv3ForQuestionAnswering
(
config
)
Parameters
config (LayoutLMv3Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLMv3 Model with a span classification head on top for extractive question-answering tasks such as
DocVQA (a linear layer on top of the text part of the hidden-states output to
compute span start logits and span end logits).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
bbox: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.LongTensor] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (torch.LongTensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence token. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner of the bounding box, and (x1,
y1) represents the position of the lower right corner.
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Batch of document images. Each image is divided into patches of shape (num_channels, config.patch_size, config.patch_size) and the total number of patches (= patch_sequence_length) equals (height / config.patch_size) * (width / config.patch_size).
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LayoutLMv3Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LayoutLMv3ForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoProcessor, AutoModelForQuestionAnswering
from datasets import load_dataset
import torch
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = AutoModelForQuestionAnswering.from_pretrained("microsoft/layoutlmv3-base")
dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train")
example = dataset[0]
image = example["image"]
question = "what's his name?"
words = example["tokens"]
boxes = example["bboxes"]
encoding = processor(image, question, words, boxes=boxes, return_tensors="pt")
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**encoding, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
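As a follow-up sketch: the predicted span can be decoded from the argmax of the start and end logits. With the untuned base checkpoint the QA head is randomly initialized, so the decoded text is not meaningful until fine-tuning.
predicted_start = int(outputs.start_logits[0].argmax())
predicted_end = int(outputs.end_logits[0].argmax())
# if predicted_end < predicted_start the slice is empty
predicted_answer = processor.tokenizer.decode(encoding["input_ids"][0, predicted_start : predicted_end + 1])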
TFLayoutLMv3Model
class transformers.TFLayoutLMv3Model
(
*args
**kwargs
)
Parameters
config (LayoutLMv3Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare LayoutLMv3 Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
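For example, with an encoding produced by the processor (as in the example further below), the same inputs can be passed either as keyword arguments or gathered in a dict. A minimal sketch:
from datasets import load_dataset
from transformers import AutoProcessor, TFAutoModel

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = TFAutoModel.from_pretrained("microsoft/layoutlmv3-base")
example = load_dataset("nielsr/funsd-layoutlmv3", split="train")[0]
encoding = processor(example["image"], example["tokens"], boxes=example["bboxes"], return_tensors="tf")

# 1) all inputs as keyword arguments (PyTorch-style)
outputs = model(**encoding)
# 2) all inputs gathered in a dict passed as the first positional argument (Keras-style)
outputs = model(dict(encoding))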
call
(
input_ids: tf.Tensor | None = None
bbox: tf.Tensor | None = None
attention_mask: tf.Tensor | None = None
token_type_ids: tf.Tensor | None = None
position_ids: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
pixel_values: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (Numpy array or tf.Tensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence token. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner of the bounding box, and (x1,
y1) represents the position of the lower right corner.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Batch of document images. Each image is divided into patches of shape (num_channels, config.patch_size, config.patch_size) and the total number of patches (= patch_sequence_length) equals (height / config.patch_size) * (width / config.patch_size).
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
What are position IDs?
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LayoutLMv3Config) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFLayoutLMv3Model forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoProcessor, TFAutoModel
from datasets import load_dataset
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = TFAutoModel.from_pretrained("microsoft/layoutlmv3-base")
dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train")
example = dataset[0]
image = example["image"]
words = example["tokens"]
boxes = example["bboxes"]
encoding = processor(image, words, boxes=boxes, return_tensors="tf")
outputs = model(**encoding)
last_hidden_states = outputs.last_hidden_state
TFLayoutLMv3ForSequenceClassification
class transformers.TFLayoutLMv3ForSequenceClassification
(
*args
**kwargs
)
Parameters
config (LayoutLMv3Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLMv3 Model with a sequence classification head on top (a linear layer on top of the final hidden state of the
[CLS] token) e.g. for document image classification tasks such as the
RVL-CDIP dataset.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: tf.Tensor | None = None
attention_mask: tf.Tensor | None = None
token_type_ids: tf.Tensor | None = None
position_ids: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
labels: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
bbox: tf.Tensor | None = None
pixel_values: tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (Numpy array or tf.Tensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence token. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner of the bounding box, and (x1,
y1) represents the position of the lower right corner.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Batch of document images. Each image is divided into patches of shape (num_channels, config.patch_size, config.patch_size) and the total number of patches (= patch_sequence_length) equals (height / config.patch_size) * (width / config.patch_size).
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
What are position IDs?
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LayoutLMv3Config) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFLayoutLMv3ForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoProcessor, TFAutoModelForSequenceClassification
from datasets import load_dataset
import tensorflow as tf
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = TFAutoModelForSequenceClassification.from_pretrained("microsoft/layoutlmv3-base")
dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train")
example = dataset[0]
image = example["image"]
words = example["tokens"]
boxes = example["bboxes"]
encoding = processor(image, words, boxes=boxes, return_tensors="tf")
sequence_label = tf.convert_to_tensor([1])
outputs = model(**encoding, labels=sequence_label)
loss = outputs.loss
logits = outputs.logits
TFLayoutLMv3ForTokenClassification
class transformers.TFLayoutLMv3ForTokenClassification
(
*args
**kwargs
)
Parameters
config (LayoutLMv3Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLMv3 Model with a token classification head on top (a linear layer on top of the final hidden states) e.g.
for sequence labeling (information extraction) tasks such as FUNSD,
SROIE, CORD and
Kleister-NDA.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: tf.Tensor | None = None
bbox: tf.Tensor | None = None
attention_mask: tf.Tensor | None = None
token_type_ids: tf.Tensor | None = None
position_ids: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
labels: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
pixel_values: tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (Numpy array or tf.Tensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence token. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner of the bounding box, and (x1,
y1) represents the position of the lower right corner.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Batch of document images. Each image is divided into patches of shape (num_channels, config.patch_size, config.patch_size) and the total number of patches (= patch_sequence_length) equals (height / config.patch_size) * (width / config.patch_size).
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
What are position IDs?
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LayoutLMv3Config) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFLayoutLMv3ForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoProcessor, TFAutoModelForTokenClassification
from datasets import load_dataset
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = TFAutoModelForTokenClassification.from_pretrained("microsoft/layoutlmv3-base", num_labels=7)
dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train")
example = dataset[0]
image = example["image"]
words = example["tokens"]
boxes = example["bboxes"]
word_labels = example["ner_tags"]
encoding = processor(image, words, boxes=boxes, word_labels=word_labels, return_tensors="tf")
outputs = model(**encoding)
loss = outputs.loss
logits = outputs.logits
TFLayoutLMv3ForQuestionAnswering
class transformers.TFLayoutLMv3ForQuestionAnswering
(
*args
**kwargs
)
Parameters
config (LayoutLMv3Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLMv3 Model with a span classification head on top for extractive question-answering tasks such as
DocVQA (a linear layer on top of the text part of the hidden-states output to
compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: tf.Tensor | None = None
attention_mask: tf.Tensor | None = None
token_type_ids: tf.Tensor | None = None
position_ids: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
start_positions: tf.Tensor | None = None
end_positions: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
bbox: tf.Tensor | None = None
pixel_values: tf.Tensor | None = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (Numpy array or tf.Tensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence tokens. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1,
y1) represents the position of the lower right corner.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Batch of document images. Each image is divided into patches of shape (num_channels, config.patch_size, config.patch_size) and the total number of patches (= patch_sequence_length) equals ((height / config.patch_size) * (width / config.patch_size)).
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
Note that sequence_length = token_sequence_length + patch_sequence_length + 1 where 1 is for [CLS]
token. See pixel_values for patch_sequence_length.
What are position IDs?
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LayoutLMv3Config) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFLayoutLMv3ForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoProcessor, TFAutoModelForQuestionAnswering
from datasets import load_dataset
import tensorflow as tf
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = TFAutoModelForQuestionAnswering.from_pretrained("microsoft/layoutlmv3-base")
dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train")
example = dataset[0]
image = example["image"]
question = "what's his name?"
words = example["tokens"]
boxes = example["bboxes"]
encoding = processor(image, question, words, boxes=boxes, return_tensors="tf")
start_positions = tf.convert_to_tensor([1])
end_positions = tf.convert_to_tensor([3])
outputs = model(**encoding, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
GLPN
This is a recently introduced model, so the API hasn't been tested extensively. There may be some bugs or slight
breaking changes needed to fix it in the future. If you see something strange, file a GitHub issue.
Overview
The GLPN model was proposed in Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
GLPN combines SegFormer’s hierarchical mix-Transformer with a lightweight decoder for monocular depth estimation. The proposed decoder shows better performance than the previously proposed decoders, with considerably
less computational complexity.
The abstract from the paper is the following:
Depth estimation from a single image is an important task that can be applied to various fields in computer vision, and has grown rapidly with the development of convolutional neural networks. In this paper, we propose a novel structure and training strategy for monocular depth estimation to further improve the prediction accuracy of the network. We deploy a hierarchical transformer encoder to capture and convey the global context, and design a lightweight yet powerful decoder to generate an estimated depth map while considering local connectivity. By constructing connected paths between multi-scale local features and the global decoding stream with our proposed selective feature fusion module, the network can integrate both representations and recover fine details. In addition, the proposed decoder shows better performance than the previously proposed decoders, with considerably less computational complexity. Furthermore, we improve the depth-specific augmentation method by utilizing an important observation in depth estimation to enhance the model. Our network achieves state-of-the-art performance over the challenging depth dataset NYU Depth V2. Extensive experiments have been conducted to validate and show the effectiveness of the proposed approach. Finally, our model shows better generalisation ability and robustness than other comparative models.
Tips:
One can use GLPNImageProcessor to prepare images for the model.
Figure: Summary of the approach. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GLPN.
Demo notebooks for GLPNForDepthEstimation can be found here.
Monocular depth estimation task guide
GLPNConfig
class transformers.GLPNConfig
(
num_channels = 3
num_encoder_blocks = 4
depths = [2, 2, 2, 2]
sr_ratios = [8, 4, 2, 1]
hidden_sizes = [32, 64, 160, 256]
patch_sizes = [7, 3, 3, 3]
strides = [4, 2, 2, 2]
num_attention_heads = [1, 2, 5, 8]
mlp_ratios = [4, 4, 4, 4]
hidden_act = 'gelu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
initializer_range = 0.02
drop_path_rate = 0.1
layer_norm_eps = 1e-06
decoder_hidden_size = 64
max_depth = 10
head_in_index = -1
**kwargs
)
Parameters
num_channels (int, optional, defaults to 3) —
The number of input channels.
num_encoder_blocks (int, optional, defaults to 4) —
The number of encoder blocks (i.e. stages in the Mix Transformer encoder).
depths (List[int], optional, defaults to [2, 2, 2, 2]) —
The number of layers in each encoder block.
sr_ratios (List[int], optional, defaults to [8, 4, 2, 1]) —
Sequence reduction ratios in each encoder block.
hidden_sizes (List[int], optional, defaults to [32, 64, 160, 256]) —
Dimension of each of the encoder blocks.
patch_sizes (List[int], optional, defaults to [7, 3, 3, 3]) —
Patch size before each encoder block.
strides (List[int], optional, defaults to [4, 2, 2, 2]) —
Stride before each encoder block.
num_attention_heads (List[int], optional, defaults to [1, 2, 5, 8]) —
Number of attention heads for each attention layer in each block of the Transformer encoder.
mlp_ratios (List[int], optional, defaults to [4, 4, 4, 4]) —
Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the
encoder blocks.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
drop_path_rate (float, optional, defaults to 0.1) —
The dropout probability for stochastic depth, used in the blocks of the Transformer encoder.
layer_norm_eps (float, optional, defaults to 1e-6) —
The epsilon used by the layer normalization layers.
decoder_hidden_size (int, optional, defaults to 64) —
The dimension of the decoder.
max_depth (int, optional, defaults to 10) —
The maximum depth of the decoder.
head_in_index (int, optional, defaults to -1) —
The index of the features to use in the head.
This is the configuration class to store the configuration of a GLPNModel. It is used to instantiate a GLPN
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the GLPN
vinvino02/glpn-kitti architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import GLPNModel, GLPNConfig
# Initializing a GLPN vinvino02/glpn-kitti style configuration
configuration = GLPNConfig()
# Initializing a model from the vinvino02/glpn-kitti style configuration
model = GLPNModel(configuration)
# Accessing the model configuration
configuration = model.config
GLPNFeatureExtractor
class transformers.GLPNFeatureExtractor
(
*args
**kwargs
)
__call__
(
images
**kwargs
)
Preprocess an image or a batch of images.
GLPNImageProcessor
class transformers.GLPNImageProcessor
(
do_resize: bool = True
size_divisor: int = 32
resample = <Resampling.BILINEAR: 2>
do_rescale: bool = True
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions, rounding them down to the closest multiple of
size_divisor. Can be overridden by do_resize in preprocess.
size_divisor (int, optional, defaults to 32) —
When do_resize is True, images are resized so their height and width are rounded down to the closest
multiple of size_divisor. Can be overridden by size_divisor in preprocess.
resample (PIL.Image resampling filter, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image. Can be overridden by resample in preprocess.
do_rescale (bool, optional, defaults to True) —
Whether or not to apply the scaling factor (to make pixel values floats between 0. and 1.). Can be
overridden by do_rescale in preprocess.
Constructs a GLPN image processor.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), transformers.utils.generic.TensorType, typing.List[ForwardRef('PIL.Image.Image')], typing.List[transformers.utils.generic.TensorType]]
do_resize: typing.Optional[bool] = None
size_divisor: typing.Optional[int] = None
resample = None
do_rescale: typing.Optional[bool] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (PIL.Image.Image or TensorType or List[np.ndarray] or List[TensorType]) —
The image or images to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the input such that the (height, width) dimensions are a multiple of size_divisor.
size_divisor (int, optional, defaults to self.size_divisor) —
When do_resize is True, images are resized so their height and width are rounded down to the
closest multiple of size_divisor.
resample (PIL.Image resampling filter, optional, defaults to self.resample) —
PIL.Image resampling filter to use if resizing the image e.g. PILImageResampling.BILINEAR. Only has
an effect if do_resize is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether or not to apply the scaling factor (to make pixel values floats between 0. and 1.).
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
None: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess the given images.
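As a quick, non-authoritative sketch of the workflow (the COCO image URL is only an illustrative input, reused from the depth-estimation example below):
from transformers import GLPNImageProcessor
from PIL import Image
import requests
# load an example RGB image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# defaults: round height/width down to a multiple of 32 and rescale pixel values to [0, 1]
image_processor = GLPNImageProcessor()
inputs = image_processor(images=image, return_tensors="pt")
inputs["pixel_values"].shape  # (1, 3, height, width), both spatial dims multiples of 32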
GLPNModel
class transformers.GLPNModel
(
config
)
Parameters
config (GLPNConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare GLPN encoder (Mix-Transformer) outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: FloatTensor
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See GLPNImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GLPNConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GLPNModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, GLPNModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("vinvino02/glpn-kitti")
model = GLPNModel.from_pretrained("vinvino02/glpn-kitti")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 512, 15, 20]
GLPNForDepthEstimation
class transformers.GLPNForDepthEstimation
(
config
)
Parameters
config (GLPNConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
GLPN Model transformer with a lightweight depth estimation head on top e.g. for KITTI, NYUv2.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: FloatTensor
labels: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.DepthEstimatorOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See GLPNImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.FloatTensor of shape (batch_size, height, width), optional) —
Ground truth depth estimation maps for computing the loss.
Returns
transformers.modeling_outputs.DepthEstimatorOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.DepthEstimatorOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GLPNConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
predicted_depth (torch.FloatTensor of shape (batch_size, height, width)) — Predicted depth for each pixel.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GLPNForDepthEstimation forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, GLPNForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("vinvino02/glpn-kitti")
model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-kitti")
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
... predicted_depth = outputs.predicted_depth
# interpolate to original size
prediction = torch.nn.functional.interpolate(
... predicted_depth.unsqueeze(1),
... size=image.size[::-1],
... mode="bicubic",
... align_corners=False,
... )
# visualize the prediction
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
RoBERTa
Overview
The RoBERTa model was proposed in RoBERTa: A Robustly Optimized BERT Pretraining Approach by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer
Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. It is based on Google’s BERT model released in 2018.
It builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining objective and training with
much larger mini-batches and learning rates.
The abstract from the paper is the following:
Language model pretraining has led to significant performance gains but careful comparison between different
approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes,
and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication
study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and
training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every
model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results
highlight the importance of previously overlooked design choices, and raise questions about the source of recently
reported improvements. We release our models and code.
Tips:
This implementation is the same as BertModel with a tiny embeddings tweak as well as a setup
for Roberta pretrained models.
RoBERTa has the same architecture as BERT, but uses a byte-level BPE as a tokenizer (same as GPT-2) and uses a
different pretraining scheme.
RoBERTa doesn't have token_type_ids, so you don't need to indicate which token belongs to which segment. Just
separate your segments with the separation token tokenizer.sep_token (or </s>); see the short sketch after these tips.
Same as BERT with better pretraining tricks:
dynamic masking: tokens are masked differently at each epoch, whereas BERT does it once and for all
sentences are packed together to reach 512 tokens (so the sentences can be in an order that may span several documents)
train with larger batches
use BPE with bytes as a subunit and not characters (because of unicode characters)
CamemBERT is a wrapper around RoBERTa. Refer to this page for usage examples.
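As a minimal sketch of the token_type_ids tip above (assuming the standard roberta-base checkpoint), encoding a segment pair only inserts the separator tokens; no segment indices need to be passed to the model:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoding = tokenizer("First segment", "Second segment")
tokenizer.decode(encoding["input_ids"])
# roughly: '<s>First segment</s></s>Second segment</s>'
tokenizer.sep_token
# '</s>'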
This model was contributed by julien-c. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with RoBERTa. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Text Classification
A blog on Getting Started with Sentiment Analysis on Twitter using RoBERTa and the Inference API.
A blog on Opinion Classification with Kili and Hugging Face AutoTrain using RoBERTa.
A notebook on how to finetune RoBERTa for sentiment analysis. 🌎
RobertaForSequenceClassification is supported by this example script and notebook.
TFRobertaForSequenceClassification is supported by this example script and notebook.
FlaxRobertaForSequenceClassification is supported by this example script and notebook.
Text classification task guide
Token Classification
RobertaForTokenClassification is supported by this example script and notebook.
TFRobertaForTokenClassification is supported by this example script and notebook.
FlaxRobertaForTokenClassification is supported by this example script.
Token classification chapter of the 🤗 Hugging Face Course.
Token classification task guide
Fill-Mask
A blog on How to train a new language model from scratch using Transformers and Tokenizers with RoBERTa.
RobertaForMaskedLM is supported by this example script and notebook.
TFRobertaForMaskedLM is supported by this example script and notebook.
FlaxRobertaForMaskedLM is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling task guide
Question Answering
A blog on Accelerated Inference with Optimum and Transformers Pipelines with RoBERTa for question answering.
RobertaForQuestionAnswering is supported by this example script and notebook.
TFRobertaForQuestionAnswering is supported by this example script and notebook.
FlaxRobertaForQuestionAnswering is supported by this example script.
Question answering chapter of the 🤗 Hugging Face Course.
Question answering task guide
Multiple choice
RobertaForMultipleChoice is supported by this example script and notebook.
TFRobertaForMultipleChoice is supported by this example script and notebook.
Multiple choice task guide
RobertaConfig
class transformers.RobertaConfig
(
vocab_size = 50265
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
position_embedding_type = 'absolute'
use_cache = True
classifier_dropout = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50265) —
Vocabulary size of the RoBERTa model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling RobertaModel or TFRobertaModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling RobertaModel or TFRobertaModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
This is the configuration class to store the configuration of a RobertaModel or a TFRobertaModel. It is
used to instantiate a RoBERTa model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the RoBERTa
roberta-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import RobertaConfig, RobertaModel
# Initializing a RoBERTa configuration
configuration = RobertaConfig()
# Initializing a model (with random weights) from the configuration
model = RobertaModel(configuration)
# Accessing the model configuration
configuration = model.config
RobertaTokenizer
class transformers.RobertaTokenizer
(
vocab_file
merges_file
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The RoBERTa tokenizer detects the beginning of words by the preceding space.)
Constructs a RoBERTa tokenizer, derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without space) or not:
from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).
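As a short sketch of the workaround mentioned above, instantiating the tokenizer with add_prefix_space=True makes a sentence-initial word tokenize as if it were preceded by a space:
from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained("roberta-base", add_prefix_space=True)
tokenizer("Hello world")["input_ids"]
# should match the " Hello world" encoding above: [0, 20920, 232, 2]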
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A RoBERTa sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
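A minimal sketch of calling this method directly (the raw token IDs obtained with add_special_tokens=False are only illustrative):
from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
ids_a = tokenizer("Hello", add_special_tokens=False)["input_ids"]
ids_b = tokenizer("world", add_special_tokens=False)["input_ids"]
tokenizer.build_inputs_with_special_tokens(ids_a)  # <s> A </s>
tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)  # <s> A </s></s> B </s>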
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
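For illustration, a minimal sketch on a short sequence that does not yet contain special tokens:
from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
ids = tokenizer("Hello world", add_special_tokens=False)["input_ids"]
tokenizer.get_special_tokens_mask(ids)
# expected: [1, 0, 0, 1] -- the positions where <s> and </s> would be added are marked with 1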
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. RoBERTa does not
make use of token type ids, therefore a list of zeros is returned.
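A minimal sketch showing that the returned mask is all zeros even for a sequence pair:
from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
ids_a = tokenizer("Hello", add_special_tokens=False)["input_ids"]
ids_b = tokenizer("world", add_special_tokens=False)["input_ids"]
tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
# a list of zeros, one entry per position of the built <s> A </s></s> B </s> sequence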
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
RobertaTokenizerFast
class transformers.RobertaTokenizerFast
(
vocab_file = None
merges_file = None
tokenizer_file = None
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = False
trim_offsets = True
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The RoBERTa tokenizer detects the beginning of words by the preceding space.)
trim_offsets (bool, optional, defaults to True) —
Whether the post processing step should trim offsets to avoid including whitespaces.
Construct a “fast” RoBERTa tokenizer (backed by HuggingFace’s tokenizers library), derived from the GPT-2
tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without space) or not:
from transformers import RobertaTokenizerFast
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0
token_ids_1 = None
)
RobertaModel
class transformers.RobertaModel
(
config
add_pooling_layer = True
)
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare RoBERTa Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention Is
All You Need (https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and
add_cross_attention set to True; an encoder_hidden_states input is then expected in the forward pass.
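A minimal, randomly initialized sketch of the decoder setup described above (the input IDs and the 5-step encoder sequence are arbitrary placeholders):
from transformers import RobertaConfig, RobertaModel
import torch
config = RobertaConfig(is_decoder=True, add_cross_attention=True)
decoder = RobertaModel(config)
input_ids = torch.tensor([[0, 31414, 232, 2]])  # decoder input tokens
encoder_hidden_states = torch.randn(1, 5, config.hidden_size)  # stand-in for an encoder output
outputs = decoder(input_ids=input_ids, encoder_hidden_states=encoder_hidden_states)
outputs.last_hidden_state.shape  # (1, 4, hidden_size)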
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with a type_vocab_size parameter with value
>= 2. All values in this tensor should always be < type_vocab_size.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and optionally, if
config.is_encoder_decoder=True, 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The RobertaModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RobertaModel
import torch
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
RobertaForCausalLM
class transformers.RobertaForCausalLM
(
config
)
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa Model with a language modeling head on top for CLM fine-tuning.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
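This head is meant to be used with a causal (left-to-right) attention mask, so the configuration should set is_decoder=True; a minimal usage sketch with the roberta-base checkpoint:
from transformers import AutoConfig, AutoTokenizer, RobertaForCausalLM
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
config = AutoConfig.from_pretrained("roberta-base")
config.is_decoder = True  # enables the causal attention mask
model = RobertaForCausalLM.from_pretrained("roberta-base", config=config)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits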
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Tuple[typing.Tuple[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with a type_vocab_size parameter of value >= 2. All the values in this tensor should always be < type_vocab_size.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next-token prediction). Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attention weights after the attention softmax, used to compute the weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key and value states of the self-attention layers, and of the cross-attention layers if the model is used in an encoder-decoder setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The RobertaForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RobertaForCausalLM, AutoConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
config = AutoConfig.from_pretrained("roberta-base")
config.is_decoder = True
model = RobertaForCausalLM.from_pretrained("roberta-base", config=config)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
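The past_key_values input documented above enables incremental decoding: once a prefix has been processed with use_cache=True, only the newly generated token needs to be fed at the next step. A minimal sketch continuing the example above (it assumes model, tokenizer and inputs are still defined; greedy token selection is used purely for illustration):
with torch.no_grad():
    outputs = model(**inputs, use_cache=True)
    # pick the next token greedily from the logits of the last position
    next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
    # feed only the new token together with the cached key/value states
    outputs = model(input_ids=next_token, past_key_values=outputs.past_key_values, use_cache=True)
    # outputs.past_key_values now covers the full prefix and can be reused on the next step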
RobertaForMaskedLM
class transformers.RobertaForMaskedLM(config)
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with a type_vocab_size parameter of value >= 2. All the values in this tensor should always be < type_vocab_size.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
kwargs (Dict[str, any], optional, defaults to {}) —
Used to hide legacy arguments that have been deprecated.
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The RobertaForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RobertaForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
tokenizer.decode(predicted_token_id)
' Paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-<mask> tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(outputs.loss.item(), 2)
0.1
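Beyond the single top prediction, the logits at the masked position can be ranked to inspect several candidate fills. A short sketch continuing the example above (it assumes tokenizer, logits and mask_token_index are still defined):
top_k = torch.topk(logits[0, mask_token_index], k=5, dim=-1)
for token_id in top_k.indices[0]:
    print(tokenizer.decode(token_id))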
RobertaForSequenceClassification
class transformers.RobertaForSequenceClassification(config)
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with a type_vocab_size parameter of value >= 2. All the values in this tensor should always be < type_vocab_size.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Squared loss); if config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The RobertaForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, RobertaForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
model = RobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
'optimism'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = RobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
0.08
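The raw logits can also be turned into class probabilities and matched against the label names stored in the config. A short sketch continuing the single-label example above (it assumes model and logits are still defined):
probabilities = torch.softmax(logits, dim=-1)[0]
for class_id, p in enumerate(probabilities):
    print(model.config.id2label[class_id], round(p.item(), 3))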
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, RobertaForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
model = RobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = RobertaForSequenceClassification.from_pretrained(
    "cardiffnlp/twitter-roberta-base-emotion", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
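The thresholded class ids from the multi-label example can be mapped back to human-readable labels through model.config.id2label. A short sketch continuing the example above (the 0.5 threshold is the same arbitrary cut-off used there):
predicted_labels = [model.config.id2label[class_id.item()] for class_id in predicted_class_ids]
print(predicted_labels)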
RobertaForMultipleChoice
class transformers.RobertaForMultipleChoice(config)
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa Model with a multiple-choice classification head on top (a linear layer on top of the pooled output and a softmax), e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with a type_vocab_size parameter of value >= 2. All the values in this tensor should always be < type_vocab_size.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1], where num_choices is the size of the second dimension of the input tensors (see input_ids above).
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The RobertaForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RobertaForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = RobertaForMultipleChoice.from_pretrained("roberta-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
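The predicted choice can be read directly from the logits, whose second dimension indexes the choices. A short sketch continuing the example above (it assumes logits, choice0 and choice1 are still defined):
predicted_choice = logits.argmax(dim=-1).item()
print([choice0, choice1][predicted_choice])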
RobertaForTokenClassification
class transformers.RobertaForTokenClassification(config)
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output), e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with a type_vocab_size parameter of value >= 2. All the values in this tensor should always be < type_vocab_size.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The RobertaForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RobertaForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
model = RobertaForTokenClassification.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word.
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
predicted_tokens_classes
['O', 'ORG', 'ORG', 'O', 'O', 'O', 'O', 'O', 'LOC', 'O', 'LOC', 'LOC']
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
0.01
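To obtain word-level entities instead of per-token tags, the token-classification pipeline can aggregate sub-tokens that belong to the same entity. A minimal sketch reusing the model and tokenizer objects from the example above (aggregation_strategy="simple" is one of several grouping strategies):
from transformers import pipeline
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
ner("HuggingFace is a company based in Paris and New York")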
RobertaForQuestionAnswering
class transformers.RobertaForQuestionAnswering(config)
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with a type_vocab_size parameter of value >= 2. All the values in this tensor should always be < type_vocab_size.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The RobertaForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RobertaForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
model = RobertaForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
' puppet'
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
round(loss.item(), 2)
0.86
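The same checkpoint can also be driven through the question-answering pipeline, which takes care of tokenization and span decoding. A minimal sketch:
from transformers import pipeline
question_answerer = pipeline("question-answering", model="deepset/roberta-base-squad2")
question_answerer(question="Who was Jim Henson?", context="Jim Henson was a nice puppet")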
TFRobertaModel
class transformers.TFRobertaModel(*args, **kwargs)
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare RoBERTa Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training, True during generation
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RobertaConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you are often better off averaging or pooling the sequence of hidden-states for the whole input sequence.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
The TFRobertaModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRobertaModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = TFRobertaModel.from_pretrained("roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
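As described above, the same inputs can also be supplied as keyword arguments, as a list in the documented order, or as a dictionary keyed by input names. A short sketch continuing the example above (it assumes model and inputs are still defined):
# keyword arguments (like PyTorch models)
outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
# a list with the tensors in the order given in the docstring
outputs = model([inputs["input_ids"], inputs["attention_mask"]])
# a dictionary associating tensors with the input names
outputs = model({"input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"]})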
TFRobertaForCausalLM
class transformers.TFRobertaForCausalLM(*args, **kwargs)
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don't have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RobertaConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The TFRobertaForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRobertaForCausalLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = TFRobertaForCausalLM.from_pretrained("roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
logits = outputs.logits
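To illustrate the past_key_values and use_cache arguments described above, here is a minimal sketch of cached, step-by-step decoding (not part of the original example; the is_decoder=True override is an assumption needed so that the encoder-only roberta-base checkpoint applies causal masking and returns a cache):
import tensorflow as tf
from transformers import AutoTokenizer, TFRobertaForCausalLM
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = TFRobertaForCausalLM.from_pretrained("roberta-base", is_decoder=True)  # assumption: enable decoder mode
inputs = tokenizer("Hello, my dog is", return_tensors="tf")
outputs = model(**inputs, use_cache=True)
past = outputs.past_key_values  # cached key/value states, one tuple per layer
# on the next step, feed only the newly predicted token together with the cache
next_token = tf.math.argmax(outputs.logits[:, -1, :], axis=-1)[:, tf.newaxis]
outputs = model(input_ids=next_token, past_key_values=past, use_cache=True)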
TFRobertaForMaskedLM
class transformers.TFRobertaForMaskedLM( *args, **kwargs )
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don't need to worry about any of this, as you can
just pass inputs like you would to any other Python function!
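As a quick illustration of the three formats, here is a minimal sketch (not part of the original documentation; it assumes the tokenizer output contains input_ids and attention_mask, in the order given in the docstring):
from transformers import AutoTokenizer, TFRobertaForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = TFRobertaForMaskedLM.from_pretrained("roberta-base")
enc = tokenizer("The capital of France is <mask>.", return_tensors="tf")
out1 = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])  # keyword arguments
out2 = model([enc["input_ids"], enc["attention_mask"]])  # list in the first positional argument
out3 = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})  # dict in the first positional argument
All three calls run the same forward pass and return the same type of output object.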
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size]
(see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for
the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RobertaConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRobertaForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRobertaForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = TFRobertaForMaskedLM.from_pretrained("roberta-base")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
tokenizer.decode(predicted_token_id)
' Paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-<mask> tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(float(outputs.loss), 2)
0.1
TFRobertaForSequenceClassification
class transformers.TFRobertaForSequenceClassification( *args, **kwargs )
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don't need to worry about any of this, as you can
just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (mean-square loss); if
config.num_labels > 1, a classification loss is computed (cross-entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RobertaConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRobertaForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRobertaForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
model = TFRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
model.config.id2label[predicted_class_id]
'optimism'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
round(float(loss), 2)
0.08
TFRobertaForMultipleChoice
class transformers.TFRobertaForMultipleChoice( *args, **kwargs )
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Roberta Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don't need to worry about any of this, as you can
just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1],
where num_choices is the size of the second dimension of the input tensors (see input_ids above).
Returns
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RobertaConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRobertaForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRobertaForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = TFRobertaForMultipleChoice.from_pretrained("roberta-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
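Continuing the example above, here is a minimal sketch of passing labels and reading off the prediction (not part of the original example; the label value 0 is an assumption that choice0 is the correct answer):
labels = tf.constant([0])  # assumption: choice0 ("a fork and a knife") is the correct choice
outputs = model(**inputs, labels=labels)
loss, logits = outputs.loss, outputs.logits
predicted_choice = int(tf.math.argmax(logits, axis=-1)[0])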
TFRobertaForTokenClassification
class transformers.TFRobertaForTokenClassification( *args, **kwargs )
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don't need to worry about any of this, as you can
just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RobertaConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRobertaForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRobertaForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("ydshieh/roberta-large-ner-english")
model = TFRobertaForTokenClassification.from_pretrained("ydshieh/roberta-large-ner-english")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word.
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
predicted_tokens_classes
['O', 'ORG', 'ORG', 'O', 'O', 'O', 'O', 'O', 'LOC', 'O', 'LOC', 'LOC']
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
round(float(loss), 2)
0.01
TFRobertaForQuestionAnswering
class transformers.TFRobertaForQuestionAnswering( *args, **kwargs )
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don't need to worry about any of this, as you can
just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
start_positions: np.ndarray | tf.Tensor | None = None
end_positions: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for the position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for the position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RobertaConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRobertaForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRobertaForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("ydshieh/roberta-base-squad2")
model = TFRobertaForQuestionAnswering.from_pretrained("ydshieh/roberta-base-squad2")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
' puppet'
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
round(float(loss), 2)
0.86
FlaxRobertaModel
class transformers.FlaxRobertaModel(
config: RobertaConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare RoBERTa Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
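For instance, weights stored in PyTorch format can be loaded and converted on the fly with the from_pt flag (a minimal sketch, not part of the original documentation):
from transformers import FlaxRobertaModel
# load a PyTorch checkpoint and convert its weights to Flax parameters
model = FlaxRobertaModel.from_pretrained("roberta-base", from_pt=True)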
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
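The JIT support listed above applies directly to the model's __call__. As a minimal sketch (not part of the original documentation), the forward pass can be wrapped in jax.jit:
import jax
from transformers import AutoTokenizer, FlaxRobertaModel
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = FlaxRobertaModel.from_pretrained("roberta-base")
@jax.jit
def forward(input_ids, attention_mask):
    # the compiled function closes over the model and its parameters
    return model(input_ids, attention_mask=attention_mask).last_hidden_state
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
last_hidden_states = forward(inputs["input_ids"], inputs["attention_mask"])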
__call__(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRobertaPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRobertaModel
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = FlaxRobertaModel.from_pretrained("roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
FlaxRobertaForCausalLM
class transformers.FlaxRobertaForCausalLM(
config: RobertaConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa Model with a language modeling head on top (a linear layer on top of the hidden-states output), e.g. for
autoregressive tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
) → transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value
states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting.
Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The FlaxRobertaPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRobertaForCausalLM
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = FlaxRobertaForCausalLM.from_pretrained("roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
# retrieve logits for the next token
next_token_logits = outputs.logits[:, -1]
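As a small, hedged follow-up (not part of the official snippet), the most likely next token can be read off with an argmax over the vocabulary dimension and decoded back to text:

# greedy choice over the vocabulary; next_token_logits comes from the snippet above
next_token_id = int(next_token_logits[0].argmax(-1))
print(tokenizer.decode(next_token_id))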
FlaxRobertaForMaskedLM
class transformers.FlaxRobertaForMaskedLM
(
config: RobertaConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa Model with a language modeling head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRobertaPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRobertaForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = FlaxRobertaForMaskedLM.from_pretrained("roberta-base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
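A hedged extension of the example above: RoBERTa's mask token is tokenizer.mask_token (<mask>, not the literal string [MASK]), so one way to actually decode the prediction for the masked position is:

prompt = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(prompt, return_tensors="jax")
logits = model(**inputs).logits
# position of the mask token in the (single) input sequence
mask_index = int((inputs["input_ids"][0] == tokenizer.mask_token_id).argmax())
predicted_id = int(logits[0, mask_index].argmax(-1))
print(tokenizer.decode(predicted_id))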
FlaxRobertaForSequenceClassification
class transformers.FlaxRobertaForSequenceClassification
(
config: RobertaConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Roberta Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRobertaPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRobertaForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = FlaxRobertaForSequenceClassification.from_pretrained("roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
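A brief, hedged note: the predicted label is the argmax over the label dimension. The classification head of the plain roberta-base checkpoint is randomly initialized, so the prediction is only meaningful after fine-tuning:

predicted_class_id = int(logits[0].argmax(-1))
print(model.config.id2label[predicted_class_id])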
FlaxRobertaForMultipleChoice
class transformers.FlaxRobertaForMultipleChoice
(
config: RobertaConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Roberta Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRobertaPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRobertaForMultipleChoice
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = FlaxRobertaForMultipleChoice.from_pretrained("roberta-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="jax", padding=True)
outputs = model(**{k: v[None, :] for k, v in encoding.items()})
logits = outputs.logits
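As a hedged follow-up, the highest-scoring choice is the argmax over the num_choices dimension (the multiple-choice head of the plain checkpoint is untrained, so treat this purely as an API illustration):

predicted_choice = int(logits[0].argmax(-1))
print([choice0, choice1][predicted_choice])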
FlaxRobertaForTokenClassification
class transformers.FlaxRobertaForTokenClassification
(
config: RobertaConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Roberta Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRobertaPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRobertaForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = FlaxRobertaForTokenClassification.from_pretrained("roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
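A hedged addition showing how the per-token predictions line up with the tokens; model.config.id2label is only a generic LABEL_i mapping for the untrained head of roberta-base:

predicted_ids = logits[0].argmax(-1)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
print([(token, model.config.id2label[int(i)]) for token, i in zip(tokens, predicted_ids)])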
FlaxRobertaForQuestionAnswering
class transformers.FlaxRobertaForQuestionAnswering
(
config: RobertaConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (RobertaConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Roberta Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaConfig) and inputs.
start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRobertaPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRobertaForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = FlaxRobertaForQuestionAnswering.from_pretrained("roberta-base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="jax")
outputs = model(**inputs)
start_scores = outputs.start_logits
end_scores = outputs.end_logits
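As a hedged continuation, the start and end logits can be turned into an answer span by taking the argmax of each and decoding the tokens in between (the question-answering head of roberta-base is randomly initialized, so use a fine-tuned checkpoint for real answers):

start_index = int(start_scores[0].argmax(-1))
end_index = int(end_scores[0].argmax(-1))
answer_ids = inputs["input_ids"][0][start_index : end_index + 1]
print(tokenizer.decode(answer_ids.tolist()))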
LXMERT
Overview
The LXMERT model was proposed in LXMERT: Learning Cross-Modality Encoder Representations from Transformers by Hao Tan & Mohit Bansal. It is a series of bidirectional transformer encoders
(one for the vision modality, one for the language modality, and then one to fuse both modalities) pretrained using a
combination of masked language modeling, visual-language text alignment, ROI-feature regression, masked
visual-attribute modeling, masked visual-object modeling, and visual-question answering objectives. The pretraining
data consists of multiple multi-modal datasets: MSCOCO, Visual-Genome + Visual-Genome Question Answering, VQA 2.0, and GQA.
The abstract from the paper is the following:
Vision-and-language reasoning requires an understanding of visual concepts, language semantics, and, most importantly,
the alignment and relationships between these two modalities. We thus propose the LXMERT (Learning Cross-Modality
Encoder Representations from Transformers) framework to learn these vision-and-language connections. In LXMERT, we
build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language
encoder, and a cross-modality encoder. Next, to endow our model with the capability of connecting vision and language
semantics, we pre-train the model with large amounts of image-and-sentence pairs, via five diverse representative
pretraining tasks: masked language modeling, masked object prediction (feature regression and label classification),
cross-modality matching, and image question answering. These tasks help in learning both intra-modality and
cross-modality relationships. After fine-tuning from our pretrained parameters, our model achieves the state-of-the-art
results on two visual question answering datasets (i.e., VQA and GQA). We also show the generalizability of our
pretrained cross-modality model by adapting it to a challenging visual-reasoning task, NLVR, and improve the previous
best result by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies to prove that both our novel
model components and pretraining strategies significantly contribute to our strong results; and also present several
attention visualizations for the different encoders.
Tips:
Bounding boxes are not required for the visual feature embeddings; any kind of visual-spatial features
will work.
Both the language hidden states and the visual hidden states that LXMERT outputs are passed through the
cross-modality layer, so they contain information from both modalities. To access a modality that only attends to
itself, select the vision/language hidden states from the first input in the tuple.
The bidirectional cross-modality encoder attention only returns attention values when the language modality is used
as the input and the vision modality is used as the context vector. Further, while the cross-modality encoder
contains self-attention for each respective modality and cross-attention, only the cross attention is returned and
both self attention outputs are disregarded.
This model was contributed by eltoto1219. The original code can be found here.
Documentation resources
Question answering task guide
LxmertConfig
class transformers.LxmertConfig
(
vocab_size = 30522
hidden_size = 768
num_attention_heads = 12
num_qa_labels = 9500
num_object_labels = 1600
num_attr_labels = 400
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
l_layers = 9
x_layers = 5
r_layers = 5
visual_feat_dim = 2048
visual_pos_dim = 4
visual_loss_normalizer = 6.67
task_matched = True
task_mask_lm = True
task_obj_predict = True
task_qa = True
visual_obj_loss = True
visual_attr_loss = True
visual_feat_loss = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the LXMERT model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling LxmertModel or TFLxmertModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
r_layers (int, optional, defaults to 5) —
Number of hidden layers in the Transformer visual encoder.
l_layers (int, optional, defaults to 9) —
Number of hidden layers in the Transformer language encoder.
x_layers (int, optional, defaults to 5) —
Number of hidden layers in the Transformer cross modality encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed into BertModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
visual_feat_dim (int, optional, defaults to 2048) —
This represents the last dimension of the pooled-object features used as input for the model, representing
the size of each object feature itself.
visual_pos_dim (int, optional, defaults to 4) —
This represents the number of spatial features that are mixed into the visual features. The default is set
to 4 because most commonly this will represent the location of a bounding box, i.e., (x, y, width, height).
visual_loss_normalizer (float, optional, defaults to 6.67) —
This represents the scaling factor by which each visual loss is multiplied if, during pretraining, one
decides to train with multiple vision-based loss objectives.
num_qa_labels (int, optional, defaults to 9500) —
This represents the total number of different question answering (QA) labels there are. If using more than
one dataset with QA, the user will need to account for the total number of labels that all of the datasets
have in total.
num_object_labels (int, optional, defaults to 1600) —
This represents the total number of semantically unique objects that lxmert will be able to classify a
pooled-object feature as belonging to.
num_attr_labels (int, optional, defaults to 400) —
This represents the total number of semantically unique attributes that lxmert will be able to classify a
pooled-object feature as possessing.
task_matched (bool, optional, defaults to True) —
This task is used for sentence-image matching. If the sentence correctly describes the image the label will
be 1. If the sentence does not correctly describe the image, the label will be 0.
task_mask_lm (bool, optional, defaults to True) —
Whether or not to add masked language modeling (as used in pretraining models such as BERT) to the loss
objective.
task_obj_predict (bool, optional, defaults to True) —
Whether or not to add object prediction, attribute prediction and feature regression to the loss objective.
task_qa (bool, optional, defaults to True) —
Whether or not to add the question-answering loss to the objective
visual_obj_loss (bool, optional, defaults to True) —
Whether or not to calculate the object-prediction loss objective
visual_attr_loss (bool, optional, defaults to True) —
Whether or not to calculate the attribute-prediction loss objective
visual_feat_loss (bool, optional, defaults to True) —
Whether or not to calculate the feature-regression loss objective
output_attentions (bool, optional, defaults to False) —
Whether or not the model should return the attentions from the vision, language, and cross-modality layers.
output_hidden_states (bool, optional, defaults to False) —
Whether or not the model should return the hidden states from the vision, language, and cross-modality
layers.
This is the configuration class to store the configuration of a LxmertModel or a TFLxmertModel. It is used
to instantiate a LXMERT model according to the specified arguments, defining the model architecture. Instantiating
a configuration with the defaults will yield a similar configuration to that of the Lxmert
unc-nlp/lxmert-base-uncased architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
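For illustration, a minimal configuration sketch (the defaults correspond to the unc-nlp/lxmert-base-uncased architecture referenced above):

from transformers import LxmertConfig, LxmertModel

# Initializing a configuration with the default (lxmert-base-uncased style) values
configuration = LxmertConfig()

# Initializing a randomly weighted model from that configuration
model = LxmertModel(configuration)

# Accessing the model configuration
configuration = model.config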
LxmertTokenizer
class transformers.LxmertTokenizer
(
vocab_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (bool, optional, defaults to True) —
Whether or not to do basic tokenization before WordPiece.
never_split (Iterable, optional) —
Collection of tokens which will never be split during tokenization. Only has an effect when
do_basic_tokenize=True
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original Lxmert).
Construct a Lxmert tokenizer. Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
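A minimal usage sketch, assuming the unc-nlp/lxmert-base-uncased vocabulary mentioned above:

from transformers import LxmertTokenizer

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
encoding = tokenizer("How many cats are in the picture?", return_tensors="pt")
print(encoding["input_ids"])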
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A Lxmert sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
convert_tokens_to_string
(
tokens
)
Converts a sequence of tokens (strings) into a single string.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A Lxmert
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
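For example, reusing the tokenizer instantiated above (a hedged sketch), the mask can be reproduced directly:

ids_a = tokenizer.encode("How many cats?", add_special_tokens=False)
ids_b = tokenizer.encode("There are two.", add_special_tokens=False)
print(tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b))
# zeros cover [CLS] + first sequence + [SEP]; ones cover second sequence + [SEP]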
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
LxmertTokenizerFast
class transformers.LxmertTokenizerFast
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
clean_text (bool, optional, defaults to True) —
Whether or not to clean the text before tokenization by removing any control characters and replacing all
whitespaces by the classic one.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original Lxmert).
wordpieces_prefix (str, optional, defaults to "##") —
The prefix for subwords.
Construct a “fast” Lxmert tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0
token_ids_1 = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A Lxmert sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A Lxmert
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
Lxmert specific outputs
class transformers.models.lxmert.modeling_lxmert.LxmertModelOutput
(
language_output: typing.Optional[torch.FloatTensor] = None
vision_output: typing.Optional[torch.FloatTensor] = None
pooled_output: typing.Optional[torch.FloatTensor] = None
language_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
vision_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
language_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
vision_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
language_output (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the language encoder.
vision_output (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the visual encoder.
pooled_output (torch.FloatTensor of shape (batch_size, hidden_size)) —
Last layer hidden-state of the first token of the sequence (classification, CLS, token) further processed
by a Linear layer and a Tanh activation function.
language_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for input features + one for the output of each cross-modality layer) of
shape (batch_size, sequence_length, hidden_size).
vision_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for input features + one for the output of each cross-modality layer) of
shape (batch_size, sequence_length, hidden_size).
language_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
vision_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
Lxmert’s outputs that contain the last hidden states, pooled outputs, and attention probabilities for the language,
visual, and cross-modality encoders. (Note: the visual encoder in Lxmert is referred to as the “relationship”
encoder.)
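To make the shapes above concrete, here is a hedged sketch of a forward pass; the region features and box coordinates are random placeholders standing in for ROI features produced by an external object detector:

import torch
from transformers import LxmertTokenizer, LxmertModel

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

inputs = tokenizer("How many cats are there?", return_tensors="pt")
visual_feats = torch.rand(1, 36, model.config.visual_feat_dim)  # dummy pooled-object features
visual_pos = torch.rand(1, 36, model.config.visual_pos_dim)  # dummy normalized box coordinates

outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
print(outputs.language_output.shape, outputs.vision_output.shape, outputs.pooled_output.shape)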
class transformers.models.lxmert.modeling_lxmert.LxmertForPreTrainingOutput
(
loss: typing.Optional[torch.FloatTensor] = None
prediction_logits: typing.Optional[torch.FloatTensor] = None
cross_relationship_score: typing.Optional[torch.FloatTensor] = None
question_answering_score: typing.Optional[torch.FloatTensor] = None
language_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
vision_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
language_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
vision_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) —
Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
cross_relationship_score (torch.FloatTensor of shape (batch_size, 2)) —
Prediction scores of the textual matching objective (classification) head (scores of True/False
continuation before SoftMax).
question_answering_score (torch.FloatTensor of shape (batch_size, n_qa_answers)) —
Prediction scores of question answering objective (classification).
language_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for input features + one for the output of each cross-modality layer) of
shape (batch_size, sequence_length, hidden_size).
vision_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for input features + one for the output of each cross-modality layer) of
shape (batch_size, sequence_length, hidden_size).
language_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
vision_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
Output type of LxmertForPreTraining.
class transformers.models.lxmert.modeling_lxmert.LxmertForQuestionAnsweringOutput
(
loss: typing.Optional[torch.FloatTensor] = None
question_answering_score: typing.Optional[torch.FloatTensor] = None
language_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
vision_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
language_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
vision_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) —
Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
question_answering_score (torch.FloatTensor of shape (batch_size, n_qa_answers), optional) —
Prediction scores of question answering objective (classification).
language_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for input features + one for the output of each cross-modality layer) of
shape (batch_size, sequence_length, hidden_size).
vision_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for input features + one for the output of each cross-modality layer) of
shape (batch_size, sequence_length, hidden_size).
language_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
vision_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
Output type of LxmertForQuestionAnswering.
class transformers.models.lxmert.modeling_tf_lxmert.TFLxmertModelOutput
(
language_output: tf.Tensor | None = None
vision_output: tf.Tensor | None = None
pooled_output: tf.Tensor | None = None
language_hidden_states: Tuple[tf.Tensor] | None = None
vision_hidden_states: Tuple[tf.Tensor] | None = None
language_attentions: Tuple[tf.Tensor] | None = None
vision_attentions: Tuple[tf.Tensor] | None = None
cross_encoder_attentions: Tuple[tf.Tensor] | None = None
)
Parameters
language_output (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the language encoder.
vision_output (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the visual encoder.
pooled_output (tf.Tensor of shape (batch_size, hidden_size)) —
Last layer hidden-state of the first token of the sequence (classification, CLS, token) further processed
by a Linear layer and a Tanh activation function.
language_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for input features + one for the output of each cross-modality layer) of shape
(batch_size, sequence_length, hidden_size).
vision_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for input features + one for the output of each cross-modality layer) of shape
(batch_size, sequence_length, hidden_size).
language_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
vision_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
Lxmert’s outputs that contain the last hidden states, pooled outputs, and attention probabilities for the language,
visual, and cross-modality encoders. (Note: the visual encoder in Lxmert is referred to as the “relationship”
encoder.)
class transformers.models.lxmert.modeling_tf_lxmert.TFLxmertForPreTrainingOutput
<
source
>
(
loss: tf.Tensor | None = None
prediction_logits: tf.Tensor | None = None
cross_relationship_score: tf.Tensor | None = None
question_answering_score: tf.Tensor | None = None
language_hidden_states: Tuple[tf.Tensor] | None = None
vision_hidden_states: Tuple[tf.Tensor] | None = None
language_attentions: Tuple[tf.Tensor] | None = None
vision_attentions: Tuple[tf.Tensor] | None = None
cross_encoder_attentions: Tuple[tf.Tensor] | None = None
)
Parameters
loss (optional, returned when labels is provided, tf.Tensor of shape (1,)) —
Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
prediction_logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
cross_relationship_score (tf.Tensor of shape (batch_size, 2)) —
Prediction scores of the textual matching objective (classification) head (scores of True/False
continuation before SoftMax).
question_answering_score (tf.Tensor of shape (batch_size, n_qa_answers)) —
Prediction scores of question answering objective (classification).
language_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for input features + one for the output of each cross-modality layer) of shape
(batch_size, sequence_length, hidden_size).
vision_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for input features + one for the output of each cross-modality layer) of shape
(batch_size, sequence_length, hidden_size).
language_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
vision_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
Output type of TFLxmertForPreTraining.
LxmertModel
class transformers.LxmertModel
<
source
>
(
config
)
Parameters
config (LxmertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Lxmert Model transformer outputting raw hidden-states without any specific head on top.
The LXMERT model was proposed in LXMERT: Learning Cross-Modality Encoder Representations from
Transformers by Hao Tan and Mohit Bansal. It’s a vision and language transformer
model, pretrained on a variety of multi-modal datasets comprising GQA, VQAv2.0, MSCOCO captions, and Visual
Genome, using a combination of masked language modeling, region of interest feature regression, cross entropy loss
for question answering attribute prediction, and object tag prediction.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
visual_feats: typing.Optional[torch.FloatTensor] = None
visual_pos: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
visual_attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.lxmert.modeling_lxmert.LxmertModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
visual_feats (torch.FloatTensor of shape (batch_size, num_visual_features, visual_feat_dim)) —
This input represents visual features. These are ROI pooled object features obtained from bounding boxes using a
Faster R-CNN model.
These are currently not provided by the transformers library.
visual_pos (torch.FloatTensor of shape (batch_size, num_visual_features, visual_pos_dim)) —
This input represents spatial features corresponding to their relative (via index) visual features. The
pre-trained LXMERT model expects these spatial features to be normalized bounding boxes on a scale of 0 to 1.
These are currently not provided by the transformers library.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
visual_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.lxmert.modeling_lxmert.LxmertModelOutput or tuple(torch.FloatTensor)
A transformers.models.lxmert.modeling_lxmert.LxmertModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LxmertConfig) and inputs.
language_output (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the language encoder.
vision_output (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the visual encoder.
pooled_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification, CLS, token) further processed
by a Linear layer and a Tanh activation function.
language_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for input features + one for the output of each cross-modality layer) of
shape (batch_size, sequence_length, hidden_size).
vision_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for input features + one for the output of each cross-modality layer) of
shape (batch_size, sequence_length, hidden_size).
language_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
vision_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The LxmertModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, LxmertModel
import torch
tokenizer = AutoTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
# LXMERT also requires visual features and normalized box positions, typically extracted with an
# external Faster R-CNN; random tensors are used here only to illustrate the expected shapes.
visual_feats = torch.rand(1, 36, 2048)  # (batch_size, num_visual_features, visual_feat_dim)
visual_pos = torch.rand(1, 36, 4)  # (batch_size, num_visual_features, visual_pos_dim), boxes in [0, 1]
outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
last_hidden_states = outputs.language_output
LxmertForPreTraining
class transformers.LxmertForPreTraining
<
source
>
(
config
)
Parameters
config (LxmertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Lxmert Model with a specified pretraining head on top.
The LXMERT model was proposed in LXMERT: Learning Cross-Modality Encoder Representations from
Transformers by Hao Tan and Mohit Bansal. It’s a vision and language transformer
model, pretrained on a variety of multi-modal datasets comprising GQA, VQAv2.0, MSCOCO captions, and Visual
Genome, using a combination of masked language modeling, region of interest feature regression, cross entropy loss
for question answering attribute prediction, and object tag prediction.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
visual_feats: typing.Optional[torch.FloatTensor] = None
visual_pos: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
visual_attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
obj_labels: typing.Union[typing.Dict[str, typing.Tuple[torch.FloatTensor, torch.FloatTensor]], NoneType] = None
matched_label: typing.Optional[torch.LongTensor] = None
ans: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.models.lxmert.modeling_lxmert.LxmertForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
visual_feats (torch.FloatTensor of shape (batch_size, num_visual_features, visual_feat_dim)) —
This input represents visual features. These are ROI pooled object features obtained from bounding boxes using a
Faster R-CNN model.
These are currently not provided by the transformers library.
visual_pos (torch.FloatTensor of shape (batch_size, num_visual_features, visual_pos_dim)) —
This input represents spatial features corresponding to their relative (via index) visual features. The
pre-trained LXMERT model expects these spatial features to be normalized bounding boxes on a scale of 0 to 1.
These are currently not provided by the transformers library.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
visual_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
obj_labels (Dict[Str: Tuple[torch.FloatTensor, torch.FloatTensor]], optional) —
Each key is named after one of the visual losses, and each element of the tuple is of the shape
(batch_size, num_features) and (batch_size, num_features, visual_feature_dim) for the label id and
the label score respectively.
matched_label (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing whether or not the text input matches the image (classification) loss. Input
should be a sequence pair (see input_ids docstring). Indices should be in [0, 1]:
0 indicates that the sentence does not match the image,
1 indicates that the sentence does match the image.
ans (torch.Tensor of shape (batch_size,), optional) —
A one-hot representation of the correct answer.
Returns
transformers.models.lxmert.modeling_lxmert.LxmertForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.lxmert.modeling_lxmert.LxmertForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LxmertConfig) and inputs.
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
cross_relationship_score (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the textual matching objective (classification) head (scores of True/False
continuation before SoftMax).
question_answering_score (torch.FloatTensor of shape (batch_size, n_qa_answers)) — Prediction scores of question answering objective (classification).
language_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for input features + one for the output of each cross-modality layer) of
shape (batch_size, sequence_length, hidden_size).
vision_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for input features + one for the output of each cross-modality layer) of
shape (batch_size, sequence_length, hidden_size).
language_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
vision_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The LxmertForPreTraining forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
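A minimal sketch of a pretraining forward pass, assuming random tensors stand in for real Faster R-CNN region features (the shapes follow the parameter descriptions above); no labels are passed, so no loss is computed and only the documented output fields are read:
Copied
from transformers import AutoTokenizer, LxmertForPreTraining
import torch
tokenizer = AutoTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertForPreTraining.from_pretrained("unc-nlp/lxmert-base-uncased")
inputs = tokenizer("A cat sits on the mat.", return_tensors="pt")
# Random stand-ins for Faster R-CNN region features, used only to illustrate the expected shapes.
visual_feats = torch.rand(1, 36, 2048)  # (batch_size, num_visual_features, visual_feat_dim)
visual_pos = torch.rand(1, 36, 4)  # normalized bounding boxes in [0, 1]
with torch.no_grad():
    outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
prediction_logits = outputs.prediction_logits  # (batch_size, sequence_length, config.vocab_size)
cross_relationship_score = outputs.cross_relationship_score  # (batch_size, 2)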
LxmertForQuestionAnswering
class transformers.LxmertForQuestionAnswering
<
source
>
(
config
)
Parameters
config (LxmertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Lxmert Model with a visual-answering head on top for downstream QA tasks.
The LXMERT model was proposed in LXMERT: Learning Cross-Modality Encoder Representations from
Transformers by Hao Tan and Mohit Bansal. It’s a vision and language transformer
model, pretrained on a variety of multi-modal datasets comprising GQA, VQAv2.0, MSCOCO captions, and Visual
Genome, using a combination of masked language modeling, region of interest feature regression, cross entropy loss
for question answering attribute prediction, and object tag prediction.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
visual_feats: typing.Optional[torch.FloatTensor] = None
visual_pos: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
visual_attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.lxmert.modeling_lxmert.LxmertForQuestionAnsweringOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
visual_feats (torch.FloatTensor of shape (batch_size, num_visual_features, visual_feat_dim)) —
This input represents visual features. These are ROI pooled object features obtained from bounding boxes using a
Faster R-CNN model.
These are currently not provided by the transformers library.
visual_pos (torch.FloatTensor of shape (batch_size, num_visual_features, visual_pos_dim)) —
This input represents spatial features corresponding to their relative (via index) visual features. The
pre-trained LXMERT model expects these spatial features to be normalized bounding boxes on a scale of 0 to 1.
These are currently not provided by the transformers library.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
visual_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.Tensor of shape (batch_size,), optional) —
A one-hot representation of the correct answer.
Returns
transformers.models.lxmert.modeling_lxmert.LxmertForQuestionAnsweringOutput or tuple(torch.FloatTensor)
A transformers.models.lxmert.modeling_lxmert.LxmertForQuestionAnsweringOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LxmertConfig) and inputs.
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
question_answering_score (torch.FloatTensor of shape (batch_size, n_qa_answers), optional) — Prediction scores of question answering objective (classification).
language_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for input features + one for the output of each cross-modality layer) of
shape (batch_size, sequence_length, hidden_size).
vision_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for input features + one for the output of each cross-modality layer) of
shape (batch_size, sequence_length, hidden_size).
language_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
vision_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The LxmertForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, LxmertForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertForQuestionAnswering.from_pretrained("unc-nlp/lxmert-base-uncased")
inputs = tokenizer("What color is the cat?", return_tensors="pt")
# Visual features and normalized box positions normally come from an external Faster R-CNN feature
# extractor; random tensors are used here only to illustrate the expected shapes.
visual_feats = torch.rand(1, 36, 2048)  # (batch_size, num_visual_features, visual_feat_dim)
visual_pos = torch.rand(1, 36, 4)  # (batch_size, num_visual_features, visual_pos_dim), boxes in [0, 1]
with torch.no_grad():
    outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
answer_scores = outputs.question_answering_score  # (batch_size, n_qa_answers)
predicted_answer_idx = answer_scores.argmax(-1)
TFLxmertModel
class transformers.TFLxmertModel
<
source
>
(
*args
**kwargs
)
Parameters
config (LxmertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Lxmert Model transformer outputting raw hidden-states without any specific head on top.
The LXMERT model was proposed in LXMERT: Learning Cross-Modality Encoder Representations from
Transformers by Hao Tan and Mohit Bansal. It’s a vision and language transformer
model, pre-trained on a variety of multi-modal datasets comprising GQA, VQAv2.0, MSCOCO captions, and Visual
Genome, using a combination of masked language modeling, region of interest feature regression, cross entropy loss
for question answering attribute prediction, and object tag prediction.
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
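As a hedged illustration of the formats described above (assuming model is a TFLxmertModel instance and that input_ids, visual_feats, visual_pos, and attention_mask tensors already exist; for LXMERT the visual inputs are required in every case):
Copied
# Keyword arguments, PyTorch-style
outputs = model(input_ids=input_ids, visual_feats=visual_feats, visual_pos=visual_pos)
# A list in the order given in the docstring, or a dict keyed by input names, as the first positional argument
outputs = model([input_ids, visual_feats, visual_pos, attention_mask])
outputs = model({"input_ids": input_ids, "visual_feats": visual_feats, "visual_pos": visual_pos})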
call
<
source
>
(
input_ids: TFModelInputType | None = None
visual_feats: tf.Tensor | None = None
visual_pos: tf.Tensor | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
visual_attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.models.lxmert.modeling_tf_lxmert.TFLxmertModelOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
visual_feats (tf.Tensor of shape (batch_size, num_visual_features, visual_feat_dim)) —
This input represents visual features. These are ROI pooled object features obtained from bounding boxes using a
Faster R-CNN model.
These are currently not provided by the transformers library.
visual_pos (tf.Tensor of shape (batch_size, num_visual_features, visual_pos_dim)) —
This input represents spatial features corresponding to their relative (via index) visual features. The
pre-trained LXMERT model expects these spatial features to be normalized bounding boxes on a scale of 0 to 1.
These are currently not provided by the transformers library.
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
visual_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.lxmert.modeling_tf_lxmert.TFLxmertModelOutput or tuple(tf.Tensor)
A transformers.models.lxmert.modeling_tf_lxmert.TFLxmertModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LxmertConfig) and inputs.
language_output (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the language encoder.
vision_output (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the visual encoder.
pooled_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification, CLS, token) further processed
by a Linear layer and a Tanh activation function.
language_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for input features + one for the output of each cross-modality layer) of shape
(batch_size, sequence_length, hidden_size).
vision_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for input features + one for the output of each cross-modality layer) of shape
(batch_size, sequence_length, hidden_size).
language_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
vision_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The TFLxmertModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, TFLxmertModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = TFLxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
# Visual features and normalized box positions normally come from an external Faster R-CNN feature
# extractor; random tensors are used here only to illustrate the expected shapes.
visual_feats = tf.random.uniform((1, 36, 2048))  # (batch_size, num_visual_features, visual_feat_dim)
visual_pos = tf.random.uniform((1, 36, 4))  # (batch_size, num_visual_features, visual_pos_dim), boxes in [0, 1]
outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
last_hidden_states = outputs.language_output
TFLxmertForPreTraining
class transformers.TFLxmertForPreTraining
<
source
>
(
*args
**kwargs
)
Parameters
config (LxmertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Lxmert Model with a language modeling head on top.
The LXMERT model was proposed in LXMERT: Learning Cross-Modality Encoder Representations from
Transformers by Hao Tan and Mohit Bansal. It’s a vision and language transformer
model, pre-trained on a variety of multi-modal datasets comprising GQA, VQAv2.0, MSCOCO captions, and Visual
Genome, using a combination of masked language modeling, region of interest feature regression, cross entropy loss
for question answering attribute prediction, and object tag prediction.
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
<
source
>
(
input_ids = None
visual_feats = None
visual_pos = None
attention_mask = None
visual_attention_mask = None
token_type_ids = None
inputs_embeds = None
masked_lm_labels = None
obj_labels = None
matched_label = None
ans = None
output_attentions = None
output_hidden_states = None
return_dict = None
training = False
)
→
transformers.models.lxmert.modeling_tf_lxmert.TFLxmertForPreTrainingOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
visual_feats (tf.Tensor of shape (batch_size, num_visual_features, visual_feat_dim)) —
This input represents visual features. These are ROI pooled object features obtained from bounding boxes using a
Faster R-CNN model.
These are currently not provided by the transformers library.
visual_pos (tf.Tensor of shape (batch_size, num_visual_features, visual_pos_dim)) —
This input represents spatial features corresponding to their relative (via index) visual features. The
pre-trained LXMERT model expects these spatial features to be normalized bounding boxes on a scale of 0 to 1.
These are currently not provided by the transformers library.
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
visual_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
masked_lm_labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
obj_labels (Dict[Str: Tuple[tf.Tensor, tf.Tensor]], optional, defaults to None) —
Each key is named after one of the visual losses, and each element of the tuple is of the shape
(batch_size, num_features) and (batch_size, num_features, visual_feature_dim) for the label id and
the label score respectively.
matched_label (tf.Tensor of shape (batch_size,), optional) —
Labels for computing whether or not the text input matches the image (classification) loss. Input
should be a sequence pair (see input_ids docstring). Indices should be in [0, 1]:
0 indicates that the sentence does not match the image,
1 indicates that the sentence does match the image.
ans (tf.Tensor of shape (batch_size,), optional, defaults to None) —
A one-hot representation of the correct answer.
Returns
transformers.models.lxmert.modeling_tf_lxmert.TFLxmertForPreTrainingOutput or tuple(tf.Tensor)
A transformers.models.lxmert.modeling_tf_lxmert.TFLxmertForPreTrainingOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LxmertConfig) and inputs.
loss (optional, returned when labels is provided, tf.Tensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
prediction_logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
cross_relationship_score (tf.Tensor of shape (batch_size, 2)) — Prediction scores of the textual matching objective (classification) head (scores of True/False
continuation before SoftMax).
question_answering_score (tf.Tensor of shape (batch_size, n_qa_answers)) — Prediction scores of question answering objective (classification).
language_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for input features + one for the output of each cross-modality layer) of shape
(batch_size, sequence_length, hidden_size).
vision_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for input features + one for the output of each cross-modality layer) of shape
(batch_size, sequence_length, hidden_size).
language_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
vision_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The TFLxmertForPreTraining forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
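A minimal sketch of a TensorFlow pretraining forward pass, assuming random tensors stand in for real Faster R-CNN region features (the shapes follow the parameter descriptions above); no labels are passed, so no loss is computed and only the documented output fields are read:
Copied
from transformers import AutoTokenizer, TFLxmertForPreTraining
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = TFLxmertForPreTraining.from_pretrained("unc-nlp/lxmert-base-uncased")
inputs = tokenizer("A cat sits on the mat.", return_tensors="tf")
# Random stand-ins for Faster R-CNN region features, used only to illustrate the expected shapes.
visual_feats = tf.random.uniform((1, 36, 2048))  # (batch_size, num_visual_features, visual_feat_dim)
visual_pos = tf.random.uniform((1, 36, 4))  # normalized bounding boxes in [0, 1]
outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
prediction_logits = outputs.prediction_logits  # (batch_size, sequence_length, config.vocab_size)
cross_relationship_score = outputs.cross_relationship_score  # (batch_size, 2)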
LayoutLMV2
Overview
The LayoutLMV2 model was proposed in LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu,
Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. LayoutLMV2 improves LayoutLM to obtain
state-of-the-art results across several document image understanding benchmarks:
information extraction from scanned documents: the FUNSD dataset (a
collection of 199 annotated forms comprising more than 30,000 words), the CORD
dataset (a collection of 800 receipts for training, 100 for validation and 100 for testing), the SROIE dataset (a collection of 626 receipts for training and 347 receipts for testing)
and the Kleister-NDA dataset (a collection of non-disclosure
agreements from the EDGAR database, including 254 documents for training, 83 documents for validation, and 203
documents for testing).
document image classification: the RVL-CDIP dataset (a collection of
400,000 images belonging to one of 16 classes).
document visual question answering: the DocVQA dataset (a collection of 50,000
questions defined on 12,000+ document images).
The abstract from the paper is the following:
Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to
its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents. In this
paper, we present LayoutLMv2 by pre-training text, layout and image in a multi-modal framework, where new model
architectures and pre-training tasks are leveraged. Specifically, LayoutLMv2 not only uses the existing masked
visual-language modeling task but also the new text-image alignment and text-image matching tasks in the pre-training
stage, where cross-modality interaction is better learned. Meanwhile, it also integrates a spatial-aware self-attention
mechanism into the Transformer architecture, so that the model can fully understand the relative positional
relationship among different text blocks. Experiment results show that LayoutLMv2 outperforms strong baselines and
achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks,
including FUNSD (0.7895 -> 0.8420), CORD (0.9493 -> 0.9601), SROIE (0.9524 -> 0.9781), Kleister-NDA (0.834 -> 0.852),
RVL-CDIP (0.9443 -> 0.9564), and DocVQA (0.7295 -> 0.8672). The pre-trained LayoutLMv2 model is publicly available at
this https URL.
LayoutLMv2 depends on detectron2, torchvision and tesseract. Run the
following to install them:
Copied
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
python -m pip install torchvision tesseract
(If you are developing for LayoutLMv2, note that passing the doctests also requires the installation of these packages.)
Tips:
The main difference between LayoutLMv1 and LayoutLMv2 is that the latter incorporates visual embeddings during
pre-training (while LayoutLMv1 only adds visual embeddings during fine-tuning).
LayoutLMv2 adds both a relative 1D attention bias as well as a spatial 2D attention bias to the attention scores in
the self-attention layers. Details can be found on page 5 of the paper.
Demo notebooks on how to use the LayoutLMv2 model on RVL-CDIP, FUNSD, DocVQA, CORD can be found here.
LayoutLMv2 uses Facebook AI’s Detectron2 package for its visual
backbone. See this link for installation
instructions.
In addition to input_ids, forward() expects 2 additional inputs, namely
image and bbox. The image input corresponds to the original document image in which the text
tokens occur. The model expects each document image to be of size 224x224. This means that if you have a batch of
document images, image should be a tensor of shape (batch_size, 3, 224, 224). This can be either a
torch.Tensor or a Detectron2.structures.ImageList. You don’t need to normalize the channels, as this is
done by the model. It is important to note that the visual backbone expects BGR channels instead of RGB, as all models
in Detectron2 are pre-trained using the BGR format. The bbox input contains the bounding boxes (i.e. 2D-positions)
of the input text tokens. This is identical to LayoutLMModel. These can be obtained using an
external OCR engine such as Google’s Tesseract (there’s a Python
wrapper available). Each bounding box should be in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1, y1)
represents the position of the lower right corner. Note that one first needs to normalize the bounding boxes to be on
a 0-1000 scale. To normalize, you can use the following function:
Copied
def normalize_bbox(bbox, width, height):
return [
int(1000 * (bbox[0] / width)),
int(1000 * (bbox[1] / height)),
int(1000 * (bbox[2] / width)),
int(1000 * (bbox[3] / height)),
]
Here, width and height correspond to the width and height of the original document in which the token
occurs (before resizing the image). Those can be obtained using the Python Image Library (PIL) library for example, as
follows:
Copied
from PIL import Image
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
)
width, height = image.size
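Putting the two together (assuming a hypothetical word_boxes list of (x0, y0, x1, y1) tuples produced by your own OCR step), each box would then be normalized as:
Copied
# word_boxes is assumed to come from your own OCR engine, one (x0, y0, x1, y1) box per word
normalized_boxes = [normalize_bbox(box, width, height) for box in word_boxes]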
However, this model includes a brand new LayoutLMv2Processor which can be used to directly
prepare data for the model (including applying OCR under the hood). More information can be found in the “Usage”
section below.
Internally, LayoutLMv2Model will send the image input through its visual backbone to
obtain a lower-resolution feature map, whose shape is equal to the image_feature_pool_shape attribute of
LayoutLMv2Config. This feature map is then flattened to obtain a sequence of image tokens. As
the size of the feature map is 7x7 by default, one obtains 49 image tokens. These are then concatenated with the text
tokens, and sent through the Transformer encoder. This means that the last hidden states of the model will have a
length of 512 + 49 = 561, if you pad the text tokens up to the max length. More generally, the last hidden states
will have a length of seq_length + config.image_feature_pool_shape[0] * config.image_feature_pool_shape[1].
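As a small sanity check, this arithmetic can be read directly off the configuration (a sketch assuming the default image_feature_pool_shape of (7, 7, 256) and text padded to 512 tokens):
Copied
from transformers import LayoutLMv2Config
config = LayoutLMv2Config()  # default image_feature_pool_shape is [7, 7, 256]
text_seq_length = 512  # text tokens padded up to the max length
image_seq_length = config.image_feature_pool_shape[0] * config.image_feature_pool_shape[1]  # 7 * 7 = 49
print(text_seq_length + image_seq_length)  # 561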
When calling from_pretrained(), a warning will be printed with a long list of
parameter names that are not initialized. This is not a problem, as these parameters are batch normalization
statistics, which are going to have values when fine-tuning on a custom dataset.
If you want to train the model in a distributed environment, make sure to call synchronize_batch_norm on the
model in order to properly synchronize the batch normalization layers of the visual backbone.
In addition, there’s LayoutXLM, which is a multilingual version of LayoutLMv2. More information can be found on
LayoutXLM’s documentation page.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLMv2. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Text Classification
A notebook on how to finetune LayoutLMv2 for text-classification on RVL-CDIP dataset.
See also: Text classification task guide
Question Answering
A notebook on how to finetune LayoutLMv2 for question-answering on DocVQA dataset.
See also: Question answering task guide
See also: Document question answering task guide
Token Classification
A notebook on how to finetune LayoutLMv2 for token-classification on CORD dataset.
A notebook on how to finetune LayoutLMv2 for token-classification on FUNSD dataset.
See also: Token classification task guide
Usage: LayoutLMv2Processor
The easiest way to prepare data for the model is to use LayoutLMv2Processor, which internally
combines an image processor (LayoutLMv2ImageProcessor) and a tokenizer
(LayoutLMv2Tokenizer or LayoutLMv2TokenizerFast). The image processor
handles the image modality, while the tokenizer handles the text modality. A processor combines both, which is ideal
for a multi-modal model like LayoutLMv2. Note that you can still use both separately, if you only want to handle one
modality.
from transformers import LayoutLMv2ImageProcessor, LayoutLMv2TokenizerFast, LayoutLMv2Processor
image_processor = LayoutLMv2ImageProcessor() # apply_ocr is set to True by default
tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased")
processor = LayoutLMv2Processor(image_processor, tokenizer)
In short, one can provide a document image (and possibly additional data) to LayoutLMv2Processor,
and it will create the inputs expected by the model. Internally, the processor first uses
LayoutLMv2ImageProcessor to apply OCR on the image to get a list of words and normalized
bounding boxes, as well as to resize the image to a given size in order to get the image input. The words and
normalized bounding boxes are then provided to LayoutLMv2Tokenizer or
LayoutLMv2TokenizerFast, which converts them to token-level input_ids,
attention_mask, token_type_ids, bbox. Optionally, one can provide word labels to the processor,
which are turned into token-level labels.
LayoutLMv2Processor uses PyTesseract, a Python
wrapper around Google’s Tesseract OCR engine, under the hood. Note that you can still use your own OCR engine of
choice, and provide the words and normalized boxes yourself. This requires initializing
LayoutLMv2ImageProcessor with apply_ocr set to False.
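For example, here is a minimal sketch of building such a processor by hand, mirroring the instantiation shown earlier but with OCR disabled:
from transformers import LayoutLMv2ImageProcessor, LayoutLMv2TokenizerFast, LayoutLMv2Processor
image_processor = LayoutLMv2ImageProcessor(apply_ocr=False)  # you provide words + normalized boxes yourself
tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased")
processor = LayoutLMv2Processor(image_processor, tokenizer)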
In total, there are 5 use cases that are supported by the processor. Below, we list them all. Note that each of these
use cases works for both batched and non-batched inputs (we illustrate them for non-batched inputs).
Use case 1: document image classification (training, inference) + token classification (inference), apply_ocr=True
This is the simplest case, in which the processor (actually the image processor) will perform OCR on the image to get
the words and normalized bounding boxes.
from transformers import LayoutLMv2Processor
from PIL import Image
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
encoding = processor(
image, return_tensors="pt"
) # you can also add all tokenizer parameters here such as padding, truncation
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image'])
Use case 2: document image classification (training, inference) + token classification (inference), apply_ocr=False
If you want to perform OCR yourself, you can initialize the image processor with apply_ocr set to
False. In that case, you should provide the words and corresponding (normalized) bounding boxes
to the processor yourself.
from transformers import LayoutLMv2Processor
from PIL import Image
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr")
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes
encoding = processor(image, words, boxes=boxes, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image'])
Use case 3: token classification (training), apply_ocr=False
For token classification tasks (such as FUNSD, CORD, SROIE, Kleister-NDA), one can also provide the corresponding word
labels in order to train a model. The processor will then convert these into token-level labels. By default, it
will only label the first wordpiece of a word, and label the remaining wordpieces with -100, which is the
ignore_index of PyTorch’s CrossEntropyLoss. In case you want all wordpieces of a word to be labeled, you can
initialize the tokenizer with only_label_first_subword set to False.
from transformers import LayoutLMv2Processor
from PIL import Image
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr")
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes
word_labels = [1, 2]
encoding = processor(image, words, boxes=boxes, word_labels=word_labels, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'labels', 'image'])
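If you instead want all wordpieces of a word to be labeled, as mentioned above, here is a minimal sketch that builds the processor from its components (rather than from the no_ocr revision) with only_label_first_subword set to False:
from transformers import LayoutLMv2ImageProcessor, LayoutLMv2TokenizerFast, LayoutLMv2Processor
image_processor = LayoutLMv2ImageProcessor(apply_ocr=False)
tokenizer = LayoutLMv2TokenizerFast.from_pretrained(
    "microsoft/layoutlmv2-base-uncased", only_label_first_subword=False
)
processor = LayoutLMv2Processor(image_processor, tokenizer)  # now every wordpiece gets its word-level label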
Use case 4: visual question answering (inference), apply_ocr=True
For visual question answering tasks (such as DocVQA), you can provide a question to the processor. By default, the
processor will apply OCR on the image, and create [CLS] question tokens [SEP] word tokens [SEP].
from transformers import LayoutLMv2Processor
from PIL import Image
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
question = "What's his name?"
encoding = processor(image, question, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image'])
Use case 5: visual question answering (inference), apply_ocr=False
For visual question answering tasks (such as DocVQA), you can provide a question to the processor. If you want to
perform OCR yourself, you can provide your own words and (normalized) bounding boxes to the processor.
from transformers import LayoutLMv2Processor
from PIL import Image
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr")
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
question = "What's his name?"
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes
encoding = processor(image, question, words, boxes=boxes, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image'])
LayoutLMv2Config
class transformers.LayoutLMv2Config
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
max_2d_position_embeddings = 1024
max_rel_pos = 128
rel_pos_bins = 32
fast_qkv = True
max_rel_2d_pos = 256
rel_2d_pos_bins = 64
convert_sync_batchnorm = True
image_feature_pool_shape = [7, 7, 256]
coordinate_size = 128
shape_size = 128
has_relative_attention_bias = True
has_spatial_attention_bias = True
has_visual_segment_embedding = False
detectron2_config_args = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the LayoutLMv2 model. Defines the number of different tokens that can be represented by
the input_ids passed when calling LayoutLMv2Model or TFLayoutLMv2Model.
hidden_size (int, optional, defaults to 768) —
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling LayoutLMv2Model or
TFLayoutLMv2Model.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
max_2d_position_embeddings (int, optional, defaults to 1024) —
The maximum value that the 2D position embedding might ever be used with. Typically set this to something
large just in case (e.g., 1024).
max_rel_pos (int, optional, defaults to 128) —
The maximum number of relative positions to be used in the self-attention mechanism.
rel_pos_bins (int, optional, defaults to 32) —
The number of relative position bins to be used in the self-attention mechanism.
fast_qkv (bool, optional, defaults to True) —
Whether or not to use a single matrix for the queries, keys, values in the self-attention layers.
max_rel_2d_pos (int, optional, defaults to 256) —
The maximum number of relative 2D positions in the self-attention mechanism.
rel_2d_pos_bins (int, optional, defaults to 64) —
The number of 2D relative position bins in the self-attention mechanism.
image_feature_pool_shape (List[int], optional, defaults to [7, 7, 256]) —
The shape of the average-pooled feature map.
coordinate_size (int, optional, defaults to 128) —
Dimension of the coordinate embeddings.
shape_size (int, optional, defaults to 128) —
Dimension of the width and height embeddings.
has_relative_attention_bias (bool, optional, defaults to True) —
Whether or not to use a relative attention bias in the self-attention mechanism.
has_spatial_attention_bias (bool, optional, defaults to True) —
Whether or not to use a spatial attention bias in the self-attention mechanism.
has_visual_segment_embedding (bool, optional, defaults to False) —
Whether or not to add visual segment embeddings.
detectron2_config_args (dict, optional) —
Dictionary containing the configuration arguments of the Detectron2 visual backbone. Refer to this
file
for details regarding default values.
This is the configuration class to store the configuration of a LayoutLMv2Model. It is used to instantiate a
LayoutLMv2 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the LayoutLMv2
microsoft/layoutlmv2-base-uncased architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import LayoutLMv2Config, LayoutLMv2Model
# Initializing a LayoutLMv2 microsoft/layoutlmv2-base-uncased style configuration
configuration = LayoutLMv2Config()
# Initializing a model (with random weights) from the microsoft/layoutlmv2-base-uncased style configuration
model = LayoutLMv2Model(configuration)
# Accessing the model configuration
configuration = model.config
LayoutLMv2FeatureExtractor
class transformers.LayoutLMv2FeatureExtractor
(
*args
**kwargs
)
__call__
(
images
**kwargs
)
Preprocess an image or a batch of images.
LayoutLMv2ImageProcessor
class transformers.LayoutLMv2ImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BILINEAR: 2>
apply_ocr: bool = True
ocr_lang: typing.Optional[str] = None
tesseract_config: typing.Optional[str] = ''
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to (size["height"], size["width"]). Can be
overridden by do_resize in preprocess.
size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) —
Size of the image after resizing. Can be overridden by size in preprocess.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the
preprocess method.
apply_ocr (bool, optional, defaults to True) —
Whether to apply the Tesseract OCR engine to get words + normalized bounding boxes. Can be overridden by
apply_ocr in preprocess.
ocr_lang (str, optional) —
The language, specified by its ISO code, to be used by the Tesseract OCR engine. By default, English is
used. Can be overridden by ocr_lang in preprocess.
tesseract_config (str, optional) —
Any additional custom configuration flags that are forwarded to the config parameter when calling
Tesseract. For example: ‘--psm 6’. Can be overridden by tesseract_config in preprocess.
Constructs a LayoutLMv2 image processor.
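As a small sketch, the OCR-related options described above can be set when constructing the image processor (the language code and Tesseract flag here are only illustrative):
from transformers import LayoutLMv2ImageProcessor
image_processor = LayoutLMv2ImageProcessor(
    apply_ocr=True,
    ocr_lang="deu",              # illustrative: run Tesseract with the German language pack
    tesseract_config="--psm 6",  # illustrative: forwarded to Tesseract's config argument
)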
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: bool = None
size: typing.Dict[str, int] = None
resample: Resampling = None
apply_ocr: bool = None
ocr_lang: typing.Optional[str] = None
tesseract_config: typing.Optional[str] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Desired size of the output image after resizing.
resample (PILImageResampling, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the enum PIL.Image resampling
filter. Only has an effect if do_resize is set to True.
apply_ocr (bool, optional, defaults to self.apply_ocr) —
Whether to apply the Tesseract OCR engine to get words + normalized bounding boxes.
ocr_lang (str, optional, defaults to self.ocr_lang) —
The language, specified by its ISO code, to be used by the Tesseract OCR engine. By default, English is
used.
tesseract_config (str, optional, defaults to self.tesseract_config) —
Any additional custom configuration flags that are forwarded to the config parameter when calling
Tesseract.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess an image or batch of images.
LayoutLMv2Tokenizer
class transformers.LayoutLMv2Tokenizer
(
vocab_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
cls_token_box = [0, 0, 0, 0]
sep_token_box = [1000, 1000, 1000, 1000]
pad_token_box = [0, 0, 0, 0]
pad_token_label = -100
only_label_first_subword = True
tokenize_chinese_chars = True
strip_accents = None
model_max_length: int = 512
additional_special_tokens: typing.Optional[typing.List[str]] = None
**kwargs
)
Construct a LayoutLMv2 tokenizer. Based on WordPiece. LayoutLMv2Tokenizer can be used to turn words, word-level
bounding boxes and optional word labels to token-level input_ids, attention_mask, token_type_ids, bbox, and
optional labels (for token classification).
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
LayoutLMv2Tokenizer runs end-to-end tokenization: punctuation splitting and wordpiece. It also turns the
word-level bounding boxes into token-level bounding boxes.
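A minimal sketch of using the tokenizer on its own (the words and boxes below are dummy values; boxes are assumed to be already normalized to the 0-1000 scale):
from transformers import LayoutLMv2Tokenizer
tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased")
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]]
encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
print(encoding.keys())  # input_ids, token_type_ids, attention_mask and bbox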
__call__
(
text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]]
text_pair: typing.Union[typing.List[str], typing.List[typing.List[str]], NoneType] = None
boxes: typing.Union[typing.List[typing.List[int]], typing.List[typing.List[typing.List[int]]]] = None
word_labels: typing.Union[typing.List[int], typing.List[typing.List[int]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
**kwargs
)
→
BatchEncoding
Parameters
text (str, List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence can be a string, a list of strings
(words of a single example or questions of a batch of examples) or a list of list of strings (batch of
words).
text_pair (List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence should be a list of strings
(pretokenized string).
boxes (List[List[int]], List[List[List[int]]]) —
Word-level bounding boxes. Each bounding box should be normalized to be on a 0-1000 scale.
word_labels (List[int], List[List[int]], optional) —
Word-level integer labels (for token classification tasks such as FUNSD, CORD).
add_special_tokens (bool, optional, defaults to True) —
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) —
If set to a number along with max_length, the overflowing tokens returned when
return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) —
Whether to return token type IDs. If left to the default, will return the token type IDs according to
the specific tokenizer’s default, defined by the return_outputs attribute.
What are token type IDs?
return_attention_mask (bool, optional) —
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_overflowing_tokens (bool, optional, defaults to False) —
Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch
of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead
of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) —
Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) —
Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using
Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) —
Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) —
Whether or not to print more information and warnings.
**kwargs — passed to the self.tokenize() method
Returns
BatchEncoding
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
What are input IDs?
bbox — List of bounding boxes to be fed to a model.
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or
if “token_type_ids” is in self.model_input_names).
What are token type IDs?
attention_mask — List of indices specifying which tokens should be attended to by the model (when
return_attention_mask=True or if “attention_mask” is in self.model_input_names).
What are attention masks?
labels — List of labels to be fed to a model (when word_labels is specified).
overflowing_tokens — List of overflowing tokens sequences (when a max_length is specified and
return_overflowing_tokens=True).
num_truncated_tokens — Number of tokens truncated (when a max_length is specified and
return_overflowing_tokens=True).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying
regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length — The length of the inputs (when return_length=True).
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences with word-level normalized bounding boxes and optional labels.
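For instance, here is a sketch of how padding interacts with the box defaults listed above (cls_token_box, sep_token_box and pad_token_box); the words and boxes are dummy values:
from transformers import LayoutLMv2Tokenizer
tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased")
encoding = tokenizer(
    ["hello", "world"],
    boxes=[[1, 2, 3, 4], [5, 6, 7, 8]],
    padding="max_length",
    max_length=8,
)
print(encoding["bbox"])
# expected (per the defaults): [0, 0, 0, 0] for [CLS], the word boxes, [1000, 1000, 1000, 1000] for [SEP],
# and [0, 0, 0, 0] for each [PAD] token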
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
LayoutLMv2TokenizerFast
class transformers.LayoutLMv2TokenizerFast
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
cls_token_box = [0, 0, 0, 0]
sep_token_box = [1000, 1000, 1000, 1000]
pad_token_box = [0, 0, 0, 0]
pad_token_label = -100
only_label_first_subword = True
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
cls_token_box (List[int], optional, defaults to [0, 0, 0, 0]) —
The bounding box to use for the special [CLS] token.
sep_token_box (List[int], optional, defaults to [1000, 1000, 1000, 1000]) —
The bounding box to use for the special [SEP] token.
pad_token_box (List[int], optional, defaults to [0, 0, 0, 0]) —
The bounding box to use for the special [PAD] token.
pad_token_label (int, optional, defaults to -100) —
The label to use for padding tokens. Defaults to -100, which is the ignore_index of PyTorch’s
CrossEntropyLoss.
only_label_first_subword (bool, optional, defaults to True) —
Whether or not to only label the first subword, in case word labels are provided.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original LayoutLMv2).
Construct a “fast” LayoutLMv2 tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
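As a small sketch of something only the fast tokenizer supports, here is how one could request offset mappings (dummy words and boxes again):
from transformers import LayoutLMv2TokenizerFast
tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased")
encoding = tokenizer(
    ["hello", "world"],
    boxes=[[1, 2, 3, 4], [5, 6, 7, 8]],
    return_offsets_mapping=True,  # only supported by fast tokenizers
)
print(encoding["offset_mapping"])  # (char_start, char_end) for each token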
__call__
(
text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]]
text_pair: typing.Union[typing.List[str], typing.List[typing.List[str]], NoneType] = None
boxes: typing.Union[typing.List[typing.List[int]], typing.List[typing.List[typing.List[int]]]] = None
word_labels: typing.Union[typing.List[int], typing.List[typing.List[int]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
**kwargs
)
→
BatchEncoding
Parameters
text (str, List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence can be a string, a list of strings
(words of a single example or questions of a batch of examples) or a list of list of strings (batch of
words).
text_pair (List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence should be a list of strings
(pretokenized string).
boxes (List[List[int]], List[List[List[int]]]) —
Word-level bounding boxes. Each bounding box should be normalized to be on a 0-1000 scale.
word_labels (List[int], List[List[int]], optional) —
Word-level integer labels (for token classification tasks such as FUNSD, CORD).
add_special_tokens (bool, optional, defaults to True) —
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) —
If set to a number along with max_length, the overflowing tokens returned when
return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) —
Whether to return token type IDs. If left to the default, will return the token type IDs according to
the specific tokenizer’s default, defined by the return_outputs attribute.
What are token type IDs?
return_attention_mask (bool, optional) —
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_overflowing_tokens (bool, optional, defaults to False) —
Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch
of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead
of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) —
Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) —
Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast; if using
Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) —
Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) —
Whether or not to print more information and warnings.
**kwargs — passed to the self.tokenize() method
Returns
BatchEncoding
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
What are input IDs?
bbox — List of bounding boxes to be fed to a model.
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or
if “token_type_ids” is in self.model_input_names).
What are token type IDs?
attention_mask — List of indices specifying which tokens should be attended to by the model (when
return_attention_mask=True or if “attention_mask” is in self.model_input_names).
What are attention masks?
labels — List of labels to be fed to a model (when word_labels is specified).
overflowing_tokens — List of overflowing tokens sequences (when a max_length is specified and
return_overflowing_tokens=True).
num_truncated_tokens — Number of tokens truncated (when a max_length is specified and
return_overflowing_tokens=True).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying
regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length — The length of the inputs (when return_length=True).
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences with word-level normalized bounding boxes and optional labels.
LayoutLMv2Processor
class transformers.LayoutLMv2Processor
(
image_processor = None
tokenizer = None
**kwargs
)
Parameters
image_processor (LayoutLMv2ImageProcessor) —
An instance of LayoutLMv2ImageProcessor. The image processor is a required input.
tokenizer (LayoutLMv2Tokenizer or LayoutLMv2TokenizerFast) —
An instance of LayoutLMv2Tokenizer or LayoutLMv2TokenizerFast. The tokenizer is a required input.
Constructs a LayoutLMv2 processor which combines a LayoutLMv2 image processor and a LayoutLMv2 tokenizer into a
single processor.
LayoutLMv2Processor offers all the functionalities you need to prepare data for the model.
It first uses LayoutLMv2ImageProcessor to resize document images to a fixed size, and optionally applies OCR to
get words and normalized bounding boxes. These are then provided to LayoutLMv2Tokenizer or
LayoutLMv2TokenizerFast, which turns the words and bounding boxes into token-level input_ids,
attention_mask, token_type_ids, bbox. Optionally, one can provide integer word_labels, which are turned
into token-level labels for token classification tasks (such as FUNSD, CORD).
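Since the processor follows the usual from_pretrained/save_pretrained API, a minimal sketch of persisting and reloading it (the local path is just an example):
from transformers import LayoutLMv2Processor
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
processor.save_pretrained("./my_layoutlmv2_processor")  # example path
processor = LayoutLMv2Processor.from_pretrained("./my_layoutlmv2_processor")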
__call__
(
images
text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None
text_pair: typing.Union[typing.List[str], typing.List[typing.List[str]], NoneType] = None
boxes: typing.Union[typing.List[typing.List[int]], typing.List[typing.List[typing.List[int]]]] = None
word_labels: typing.Union[typing.List[int], typing.List[typing.List[int]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = False
max_length: typing.Optional[int] = None
stride: int = 0
pad_to_multiple_of: typing.Optional[int] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
**kwargs
)
This method first forwards the images argument to LayoutLMv2ImageProcessor.__call__(). In case
LayoutLMv2ImageProcessor was initialized with apply_ocr set to True, it passes the obtained words and
bounding boxes along with the additional arguments to LayoutLMv2Tokenizer.__call__() and returns the output,
together with the resized images. In case LayoutLMv2ImageProcessor was initialized with apply_ocr set to
False, it passes the words (text/text_pair) and boxes specified by the user along with the additional
arguments to LayoutLMv2Tokenizer.__call__() and returns the output, together with the resized images.
Please refer to the docstring of the above two methods for more information.
LayoutLMv2Model
class transformers.LayoutLMv2Model
(
config
)
Parameters
config (LayoutLMv2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare LayoutLMv2 Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
bbox: typing.Optional[torch.LongTensor] = None
image: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (torch.LongTensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence tokens. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1,
y1) represents the position of the lower right corner.
image (torch.FloatTensor of shape (batch_size, num_channels, height, width) or detectron2.structures.ImageList whose tensors is of shape (batch_size, num_channels, height, width)) —
Batch of document images.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LayoutLMv2Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LayoutLMv2Model forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoProcessor, LayoutLMv2Model, set_seed
from PIL import Image
import torch
from datasets import load_dataset
set_seed(88)
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2Model.from_pretrained("microsoft/layoutlmv2-base-uncased")
dataset = load_dataset("hf-internal-testing/fixtures_docvqa")
image_path = dataset["test"][0]["file"]
image = Image.open(image_path).convert("RGB")
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)
last_hidden_states = outputs.last_hidden_state
last_hidden_states.shape
torch.Size([1, 342, 768])
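The sequence length of 342 in this example is the number of text tokens the OCR produced for this particular document plus the 49 visual tokens that LayoutLMv2 appends (342 = 293 + 49), as explained earlier on this page.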
LayoutLMv2ForSequenceClassification
class transformers.LayoutLMv2ForSequenceClassification
(
config
)
Parameters
config (LayoutLMv2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLMv2 Model with a sequence classification head on top (a linear layer on top of the concatenation of the
final hidden state of the [CLS] token, average-pooled initial visual embeddings and average-pooled final visual
embeddings), e.g. for document image classification tasks such as the
RVL-CDIP dataset.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
bbox: typing.Optional[torch.LongTensor] = None
image: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (torch.LongTensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence tokens. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1,
y1) represents the position of the lower right corner.
image (torch.FloatTensor of shape (batch_size, num_channels, height, width) or detectron2.structures.ImageList whose tensors is of shape (batch_size, num_channels, height, width)) —
Batch of document images.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LayoutLMv2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LayoutLMv2ForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, LayoutLMv2ForSequenceClassification, set_seed
from PIL import Image
import torch
from datasets import load_dataset
set_seed(88)
dataset = load_dataset("rvl_cdip", split="train", streaming=True)
data = next(iter(dataset))
image = data["image"].convert("RGB")
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForSequenceClassification.from_pretrained(
... "microsoft/layoutlmv2-base-uncased", num_labels=dataset.info.features["label"].num_classes
... )
encoding = processor(image, return_tensors="pt")
sequence_label = torch.tensor([data["label"]])
outputs = model(**encoding, labels=sequence_label)
loss, logits = outputs.loss, outputs.logits
predicted_idx = logits.argmax(dim=-1).item()
predicted_answer = dataset.info.features["label"].names[predicted_idx]
predicted_idx, predicted_answer
(4, 'advertisement')
LayoutLMv2ForTokenClassification
class transformers.LayoutLMv2ForTokenClassification
(
config
)
Parameters
config (LayoutLMv2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLMv2 Model with a token classification head on top (a linear layer on top of the text part of the hidden
states) e.g. for sequence labeling (information extraction) tasks such as
FUNSD, SROIE,
CORD and Kleister-NDA.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
bbox: typing.Optional[torch.LongTensor] = None
image: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
bbox (torch.LongTensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence tokens. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1,
y1) represents the position of the lower right corner.
image (torch.FloatTensor of shape (batch_size, num_channels, height, width) or detectron2.structures.ImageList whose tensors is of shape (batch_size, num_channels, height, width)) —
Batch of document images.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LayoutLMv2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LayoutLMv2ForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, LayoutLMv2ForTokenClassification, set_seed
from PIL import Image
from datasets import load_dataset
set_seed(88)
datasets = load_dataset("nielsr/funsd", split="test")
labels = datasets.features["ner_tags"].feature.names
id2label = {v: k for v, k in enumerate(labels)}
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr")
model = LayoutLMv2ForTokenClassification.from_pretrained(
... "microsoft/layoutlmv2-base-uncased", num_labels=len(labels)
... )
data = datasets[0]
image = Image.open(data["image_path"]).convert("RGB")
words = data["words"]
boxes = data["bboxes"] # make sure to normalize your bounding boxes
word_labels = data["ner_tags"]
encoding = processor(
... image,
... words,
... boxes=boxes,
... word_labels=word_labels,
... padding="max_length",
... truncation=True,
... return_tensors="pt",
... )
outputs = model(**encoding)
logits, loss = outputs.logits, outputs.loss
predicted_token_class_ids = logits.argmax(-1)
predicted_tokens_classes = [id2label[t.item()] for t in predicted_token_class_ids[0]]
predicted_tokens_classes[:5]
['B-ANSWER', 'B-HEADER', 'B-HEADER', 'B-HEADER', 'B-HEADER']
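The bbox argument described above expects coordinates that are already normalized to the model's 2D position range; LayoutLMv2 uses a 0-1000 scale relative to the page width and height. As an illustration only, a hypothetical helper (not part of the library) for this normalization could look like the following:
def normalize_bbox(box, width, height):
...     # box is (x0, y0, x1, y1) in pixels; width/height is the page size in pixels
...     x0, y0, x1, y1 = box
...     return [
...         int(1000 * x0 / width),
...         int(1000 * y0 / height),
...         int(1000 * x1 / width),
...         int(1000 * y1 / height),
...     ]
normalize_bbox((40, 50, 120, 80), width=762, height=1000)
[52, 50, 157, 80]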
LayoutLMv2ForQuestionAnswering
class transformers.LayoutLMv2ForQuestionAnswering
(
config
has_visual_segment_embedding = True
)
Parameters
config (LayoutLMv2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLMv2 Model with a span classification head on top for extractive question-answering tasks such as
DocVQA (a linear layer on top of the text part of the hidden-states output to
compute span start logits and span end logits).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
bbox: typing.Optional[torch.LongTensor] = None
image: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→ transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
bbox (torch.LongTensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence token. Selected in the range [0, config.max_2d_position_embeddings - 1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner of the bounding box, and (x1,
y1) represents the position of the lower right corner.
image (torch.FloatTensor of shape (batch_size, num_channels, height, width) or detectron.structures.ImageList whose tensors are of shape (batch_size, num_channels, height, width)) —
Batch of document images.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LayoutLMv2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LayoutLMv2ForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
In the example below, we give the LayoutLMv2 model an image (of text) and ask it a question. It will give us
a prediction of what it thinks the answer is (the span of the answer within the text parsed from the image).
from transformers import AutoProcessor, LayoutLMv2ForQuestionAnswering, set_seed
import torch
from PIL import Image
from datasets import load_dataset
set_seed(88)
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForQuestionAnswering.from_pretrained("microsoft/layoutlmv2-base-uncased")
dataset = load_dataset("hf-internal-testing/fixtures_docvqa")
image_path = dataset["test"][0]["file"]
image = Image.open(image_path).convert("RGB")
question = "When is coffee break?"
encoding = processor(image, question, return_tensors="pt")
outputs = model(**encoding)
predicted_start_idx = outputs.start_logits.argmax(-1).item()
predicted_end_idx = outputs.end_logits.argmax(-1).item()
predicted_start_idx, predicted_end_idx
(154, 287)
predicted_answer_tokens = encoding.input_ids.squeeze()[predicted_start_idx : predicted_end_idx + 1]
predicted_answer = processor.tokenizer.decode(predicted_answer_tokens)
predicted_answer # results are not very good without further fine-tuning
'council mem - bers conducted by trrf treasurer philip g. kuehn to get answers which the public ...
target_start_index = torch.tensor([7])
target_end_index = torch.tensor([14])
outputs = model(**encoding, start_positions=target_start_index, end_positions=target_end_index)
predicted_answer_span_start = outputs.start_logits.argmax(-1).item()
predicted_answer_span_end = outputs.end_logits.argmax(-1).item()
predicted_answer_span_start, predicted_answer_span_end
(154, 287)
ResNet
Overview
The ResNet model was proposed in Deep Residual Learning for Image Recognition by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun. Our implementation follows the small changes made by Nvidia: we apply stride=2 for downsampling in the bottleneck’s 3x3 conv rather than in the first 1x1. This is generally known as “ResNet v1.5”.
ResNet introduced residual connections, which make it possible to train networks with an unprecedented number of layers (up to 1,000). ResNet won the 2015 ILSVRC & COCO competitions, an important milestone in deep computer vision.
The abstract from the paper is the following:
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.
The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
Tips:
One can use AutoImageProcessor to prepare images for the model.
The figure below illustrates the architecture of ResNet. Taken from the original paper.
This model was contributed by Francesco. The TensorFlow version of this model was added by amyeroberts. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ResNet.
Image Classification
ResNetForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ResNetConfig
class transformers.ResNetConfig
(
num_channels = 3
embedding_size = 64
hidden_sizes = [256, 512, 1024, 2048]
depths = [3, 4, 6, 3]
layer_type = 'bottleneck'
hidden_act = 'relu'
downsample_in_first_stage = False
out_features = None
out_indices = None
**kwargs
)
Parameters
num_channels (int, optional, defaults to 3) —
The number of input channels.
embedding_size (int, optional, defaults to 64) —
Dimensionality (hidden size) for the embedding layer.
hidden_sizes (List[int], optional, defaults to [256, 512, 1024, 2048]) —
Dimensionality (hidden size) at each stage.
depths (List[int], optional, defaults to [3, 4, 6, 3]) —
Depth (number of layers) for each stage.
layer_type (str, optional, defaults to "bottleneck") —
The layer to use, it can be either "basic" (used for smaller models, like resnet-18 or resnet-34) or
"bottleneck" (used for larger models like resnet-50 and above).
hidden_act (str, optional, defaults to "relu") —
The non-linear activation function in each block. If string, "gelu", "relu", "selu" and "gelu_new"
are supported.
downsample_in_first_stage (bool, optional, defaults to False) —
If True, the first stage will downsample the inputs using a stride of 2.
out_features (List[str], optional) —
If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc.
(depending on how many stages the model has). If unset and out_indices is set, will default to the
corresponding stages. If unset and out_indices is unset, will default to the last stage.
out_indices (List[int], optional) —
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
many stages the model has). If unset and out_features is set, will default to the corresponding stages.
If unset and out_features is unset, will default to the last stage.
This is the configuration class to store the configuration of a ResNetModel. It is used to instantiate a
ResNet model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the ResNet
microsoft/resnet-50 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import ResNetConfig, ResNetModel
# Initializing a ResNet resnet-50 style configuration
configuration = ResNetConfig()
# Initializing a model (with random weights) from the resnet-50 style configuration
model = ResNetModel(configuration)
# Accessing the model configuration
configuration = model.config
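When the configuration is used for a backbone, the out_features (or out_indices) argument described above selects which feature maps to return. A minimal sketch, assuming the default four-stage layout:
from transformers import ResNetConfig
# Return the stem output and the feature map of the last stage
backbone_config = ResNetConfig(out_features=["stem", "stage4"])
backbone_config.out_features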
ResNetModel
class transformers.ResNetModel
(
config
)
Parameters
config (ResNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ResNet model outputting raw features without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: Tensor
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→ transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ConvNextImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ResNetConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The ResNetModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, ResNetModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
model = ResNetModel.from_pretrained("microsoft/resnet-50")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 2048, 7, 7]
ResNetForImageClassification
class transformers.ResNetForImageClassification
(
config
)
Parameters
config (ResNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ResNet Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→ transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ConvNextImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ResNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
The ResNetForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, ResNetForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
model = ResNetForImageClassification.from_pretrained("microsoft/resnet-50")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tiger cat
TFResNetModel
class transformers.TFResNetModel
(
*args
**kwargs
)
Parameters
config (ResNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ResNet model outputting raw features without any specific head on top.
This model is a TensorFlow
tf.keras.layers.Layer sub-class. Use it as a
regular TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and
behavior.
call
(
pixel_values: Tensor
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
training: bool = False
)
→ transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndNoAttention or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ConvNextImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndNoAttention or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndNoAttention or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ResNetConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for
the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The TFResNetModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, TFResNetModel
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
model = TFResNetModel.from_pretrained("microsoft/resnet-50")
inputs = image_processor(image, return_tensors="tf")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 2048, 7, 7]
TFResNetForImageClassification
class transformers.TFResNetForImageClassification
(
*args
**kwargs
)
Parameters
config (ResNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ResNet Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a TensorFlow
tf.keras.layers.Layer sub-class. Use it as a
regular TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and
behavior.
call
(
pixel_values: Tensor = None
labels: Tensor = None
output_hidden_states: bool = None
return_dict: bool = None
training: bool = False
)
→ transformers.modeling_tf_outputs.TFImageClassifierOutputWithNoAttention or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ConvNextImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFImageClassifierOutputWithNoAttention or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFImageClassifierOutputWithNoAttention or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ResNetConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for
the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called
feature maps) of the model at the output of each stage.
The TFResNetForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, TFResNetForImageClassification
import tensorflow as tf
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
model = TFResNetForImageClassification.from_pretrained("microsoft/resnet-50")
inputs = image_processor(image, return_tensors="tf")
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = int(tf.math.argmax(logits, axis=-1))
print(model.config.id2label[predicted_label])
tiger cat
FlaxResNetModel
class transformers.FlaxResNetModel
(
config: ResNetConfig
input_shape = (1, 224, 224, 3)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (ResNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The bare ResNet model outputting raw features without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
pixel_values
params: dict = None
train: bool = False
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→ transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPoolingAndNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ResNetConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings, if the model has an embedding layer, + one
for the output of each layer) of shape (batch_size, num_channels, height, width). Hidden-states of the
model at the output of each layer plus the optional initial embedding outputs.
The FlaxResNetPreTrainedModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, FlaxResNetModel
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
model = FlaxResNetModel.from_pretrained("microsoft/resnet-50")
inputs = image_processor(images=image, return_tensors="np")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
FlaxResNetForImageClassification
class transformers.FlaxResNetForImageClassification
(
config: ResNetConfig
input_shape = (1, 224, 224, 3)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (ResNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
ResNet Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
pixel_values
params: dict = None
train: bool = False
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→ transformers.modeling_flax_outputs.FlaxImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Returns
transformers.modeling_flax_outputs.FlaxImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ResNetConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when
config.output_hidden_states=True):
Tuple of jnp.ndarray (one for the output of the embeddings, if the model has an embedding layer, + one
for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
The FlaxResNetPreTrainedModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, FlaxResNetForImageClassification
from PIL import Image
import jax
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
model = FlaxResNetForImageClassification.from_pretrained("microsoft/resnet-50")
inputs = image_processor(images=image, return_tensors="np")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = jax.numpy.argmax(logits, axis=-1)
print("Predicted class:", model.config.id2label[predicted_class_idx.item()])
GPT-NeoX
Overview
We introduce GPT-NeoX-20B, a 20 billion parameter autoregressive language model trained on the Pile, whose weights will
be made freely and openly available to the public through a permissive license. It is, to the best of our knowledge,
the largest dense autoregressive model that has publicly available weights at the time of submission. In this work,
we describe GPT-NeoX-20B’s architecture and training and evaluate its performance on a range of language-understanding,
mathematics, and knowledge-based tasks. We find that GPT-NeoX-20B is a particularly powerful few-shot reasoner and
gains far more in performance when evaluated five-shot than similarly sized GPT-3 and FairSeq models. We open-source
the training and evaluation code, as well as the model weights, at https://github.com/EleutherAI/gpt-neox.
Development of the model was led by Sid Black, Stella Biderman and Eric Hallahan, and the model was trained with
the generous support of CoreWeave.
GPT-NeoX-20B was trained with fp16, thus it is recommended to initialize the model as follows:
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b").half().cuda()
GPT-NeoX-20B also has a different tokenizer from the one used in GPT-J-6B and GPT-Neo. The new tokenizer allocates
additional tokens to whitespace characters, making the model more suitable for certain tasks like code generation.
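As an illustration (not part of the original documentation), you can compare how a whitespace-heavy snippet is split by the GPT-NeoX-20B tokenizer and by the GPT-2 tokenizer; the exact counts depend on the checkpoints, but the GPT-NeoX tokenizer generally needs fewer tokens for indented code thanks to its dedicated whitespace tokens:
from transformers import AutoTokenizer
neox_tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
gpt2_tokenizer = AutoTokenizer.from_pretrained("gpt2")
code = "def f(x):\n        return x + 1"
len(neox_tokenizer(code)["input_ids"]), len(gpt2_tokenizer(code)["input_ids"])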
Generation
The generate() method can be used to generate text using the GPT-NeoX model.
from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b")
prompt = "GPTNeoX20B is a 20B-parameter autoregressive Transformer model developed by EleutherAI."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
... input_ids,
... do_sample=True,
... temperature=0.9,
... max_length=100,
... )
gen_text = tokenizer.batch_decode(gen_tokens)[0]
Documentation resources
Causal language modeling task guide
GPTNeoXConfig
class transformers.GPTNeoXConfig
(
vocab_size = 50432
hidden_size = 6144
num_hidden_layers = 44
num_attention_heads = 64
intermediate_size = 24576
hidden_act = 'gelu'
rotary_pct = 0.25
rotary_emb_base = 10000
attention_dropout = 0.0
hidden_dropout = 0.0
classifier_dropout = 0.1
max_position_embeddings = 2048
initializer_range = 0.02
layer_norm_eps = 1e-05
use_cache = True
bos_token_id = 0
eos_token_id = 2
tie_word_embeddings = False
use_parallel_residual = True
rope_scaling = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50432) —
Vocabulary size of the GPTNeoX model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling GPTNeoXModel.
hidden_size (int, optional, defaults to 6144) —
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 44) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 64) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 24576) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
rotary_pct (float, optional, defaults to 0.25) —
Percentage of hidden dimensions to allocate to rotary embeddings.
rotary_emb_base (int, optional, defaults to 10000) —
Base for computing the rotary embedding frequencies.
attention_dropout (float, optional, defaults to 0.0) —
The dropout probability of the attention scores.
hidden_dropout (float, optional, defaults to 0.0) —
The dropout ratio of (1) the word embeddings, (2) the post-attention hidden states, and (3) the post-mlp
hidden states.
classifier_dropout (float, optional, defaults to 0.1) —
Argument used when doing token classification, used in the model GPTNeoXForTokenClassification.
The dropout ratio for the hidden layer.
max_position_embeddings (int, optional, defaults to 2048) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
use_parallel_residual (bool, optional, defaults to True) —
Whether to use a “parallel” formulation in each Transformer layer, which can provide a slight training
speedup at large scales (e.g. 20B).
rope_scaling (Dict, optional) —
Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format
is {"type": strategy name, "factor": scaling factor}. When using this flag, don’t update
max_position_embeddings to the expected new maximum. See the following thread for more information on how
these scaling strategies behave:
https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an
experimental feature, subject to breaking API changes in future versions.
This is the configuration class to store the configuration of a GPTNeoXModel. It is used to instantiate a
GPTNeoX model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the GPTNeoX
EleutherAI/gpt-neox-20b architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import GPTNeoXConfig, GPTNeoXModel
# Initializing a GPTNeoX gpt-neox-20b style configuration
configuration = GPTNeoXConfig()
# Initializing a model (with random weights) from the gpt-neox-20b style configuration
model = GPTNeoXModel(configuration)
# Accessing the model configuration
configuration = model.config
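As a sketch of the rope_scaling format described above (the values are illustrative only):
from transformers import GPTNeoXConfig
# Dynamic NTK scaling with a factor of 2.0; max_position_embeddings is left unchanged
configuration = GPTNeoXConfig(rope_scaling={"type": "dynamic", "factor": 2.0})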
GPTNeoXTokenizerFast
class transformers.GPTNeoXTokenizerFast
(
vocab_file = None
merges_file = None
tokenizer_file = None
unk_token = '<|endoftext|>'
bos_token = '<|endoftext|>'
eos_token = '<|endoftext|>'
add_prefix_space = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
unk_token (str, optional, defaults to <|endoftext|>) —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (str, optional, defaults to <|endoftext|>) —
The beginning of sequence token.
eos_token (str, optional, defaults to <|endoftext|>) —
The end of sequence token.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows treating the leading word just like any
other word (the GPTNeoX tokenizer detects the beginning of words by the preceding space).
trim_offsets (bool, optional, defaults to True) —
Whether or not the post-processing step should trim offsets to avoid including whitespaces.
Construct a “fast” GPT-NeoX-20B tokenizer (backed by HuggingFace’s tokenizers library). Based on byte-level
Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without space) or not:
from transformers import GPTNeoXTokenizerFast
tokenizer = GPTNeoXTokenizerFast.from_pretrained("gpt2")
tokenizer("Hello world")["input_ids"]
[15496, 995]
tokenizer(" Hello world")["input_ids"]
[18435, 995]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer, but since
the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
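A minimal sketch of the pre-tokenized case mentioned above (is_split_into_words=True together with add_prefix_space=True); the checkpoint choice is illustrative:
from transformers import GPTNeoXTokenizerFast
tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b", add_prefix_space=True)
encoding = tokenizer(["Hello", "world"], is_split_into_words=True)
encoding.word_ids()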
GPTNeoXModel
class transformers.GPTNeoXModel
(
config
)
Parameters
config (~GPTNeoXConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare GPTNeoX Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→ transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTNeoXConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GPTNeoXModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
This example uses a random model as the real ones are all very big. To get proper results, you should use
EleutherAI/gpt-neox-20b instead of trl-internal-testing/tiny-random-GPTNeoXForCausalLM. If you run out of memory when loading that checkpoint, you can try
adding device_map="auto" in the from_pretrained call.
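For example (this requires the Accelerate library and is only a sketch):
from transformers import GPTNeoXModel
model = GPTNeoXModel.from_pretrained("EleutherAI/gpt-neox-20b", device_map="auto")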
Example:
from transformers import AutoTokenizer, GPTNeoXModel
import torch
tokenizer = AutoTokenizer.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM")
model = GPTNeoXModel.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
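Building on the example above, the key/value cache returned when use_cache=True can be passed back via past_key_values so that earlier positions are not recomputed; this is an illustrative sketch, not part of the original documentation:
from transformers import AutoTokenizer, GPTNeoXModel
tokenizer = AutoTokenizer.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM")
model = GPTNeoXModel.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM")
# run the prompt once and keep the cache
prompt = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**prompt, use_cache=True)
# feed only the continuation together with the cache
continuation = tokenizer(" today", return_tensors="pt")
cached_outputs = model(input_ids=continuation.input_ids, past_key_values=outputs.past_key_values)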
GPTNeoXForCausalLM
class transformers.GPTNeoXForCausalLM
(
config
)
Parameters
config (~GPTNeoXConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
GPTNeoX Model with a language modeling head on top for CLM fine-tuning.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→ transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional tensors are
only required when the model is used as a decoder in a Sequence to Sequence model.
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are
ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTNeoXConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GPTNeoXForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, GPTNeoXForCausalLM, GPTNeoXConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
config = GPTNeoXConfig.from_pretrained("EleutherAI/gpt-neox-20b")
config.is_decoder = True
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b", config=config)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
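The past_key_values and use_cache arguments documented above are what make autoregressive decoding cheap: once the cache is filled, each step only has to run the forward pass for the newest token. In practice you rarely manage the cache by hand; generate() (inherited from GenerationMixin) handles it. A minimal sketch, reusing the tiny test checkpoint so it runs anywhere (the generated text is meaningless; swap in EleutherAI/gpt-neox-20b for real output):
from transformers import AutoTokenizer, GPTNeoXForCausalLM
checkpoint = "trl-internal-testing/tiny-random-GPTNeoXForCausalLM"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = GPTNeoXForCausalLM.from_pretrained(checkpoint)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
# use_cache=True is the default, so generate() keeps past_key_values between steps
# and only computes the forward pass for the newly generated token each time
generated = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(generated[0], skip_special_tokens=True))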
GPTNeoXForQuestionAnswering
class transformers.GPTNeoXForQuestionAnswering(config)
Parameters
config (GPTNeoXConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT-NeoX Model transformer with a span classification head on top for extractive question-answering tasks like
SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTNeoXConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GPTNeoXForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
This example uses a random model because the real checkpoints are all very large. To get meaningful results, you should use
EleutherAI/gpt-neox-20b instead of trl-internal-testing/tiny-random-GPTNeoXForCausalLM. If you run out of memory when loading that checkpoint, you can try
adding device_map="auto" to the from_pretrained call.
Example:
from transformers import AutoTokenizer, GPTNeoXForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM")
model = GPTNeoXForQuestionAnswering.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
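Continuing the same task, the predicted indices can be turned into a readable answer by decoding the selected slice of input_ids. A self-contained sketch (with the tiny random checkpoint the answer is meaningless, and a randomly initialized head can even predict an end index before the start, hence the guard):
from transformers import AutoTokenizer, GPTNeoXForQuestionAnswering
import torch
checkpoint = "trl-internal-testing/tiny-random-GPTNeoXForCausalLM"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = GPTNeoXForQuestionAnswering.from_pretrained(checkpoint)
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
# decode only if the predicted span is well-formed
answer = tokenizer.decode(inputs.input_ids[0, start : end + 1]) if end >= start else ""
print(repr(answer))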
GPTNeoXForSequenceClassification
class transformers.GPTNeoXForSequenceClassification(config)
Parameters
config (GPTNeoXConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPTNeoX Model transformer with a sequence classification head on top (linear layer).
GPTNeoXForSequenceClassification uses the last token in order to do the classification, as other causal models
(e.g. GPT-1) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If
no pad_token_id is defined, it simply takes the last value in each row of the batch. The same fallback applies when
inputs_embeds are passed instead of input_ids, since the model cannot guess the padding tokens in that case.
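A minimal sketch of what that means in practice when classifying a padded batch. GPT-NeoX checkpoints generally ship without a padding token, so this sketch reuses the end-of-sequence token as padding (an assumption to revisit for your own checkpoint; the classification head of the tiny test model is random):
from transformers import AutoTokenizer, GPTNeoXForSequenceClassification
import torch
checkpoint = "trl-internal-testing/tiny-random-GPTNeoXForCausalLM"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = GPTNeoXForSequenceClassification.from_pretrained(checkpoint)
# assumption: no pad token is defined, so reuse EOS as padding and tell the model about it
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id
batch = tokenizer(["Hello, my dog is cute", "Hi"], padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits  # one row of scores per sequence: (batch_size, num_labels)
print(logits.shape)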
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss). If
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTNeoXConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GPTNeoXForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, GPTNeoXForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM")
model = GPTNeoXForSequenceClassification.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = GPTNeoXForSequenceClassification.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, GPTNeoXForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM")
model = GPTNeoXForSequenceClassification.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = GPTNeoXForSequenceClassification.from_pretrained(
... "trl-internal-testing/tiny-random-GPTNeoXForCausalLM", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
GPTNeoXForTokenClassification
class transformers.GPTNeoXForTokenClassification(config)
forward(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss). If
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTNeoXConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GPTNeoXForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, GPTNeoXForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("LarsJonasson/pythia-410m-deduped-sft-swedish")
model = GPTNeoXForTokenClassification.from_pretrained("LarsJonasson/pythia-410m-deduped-sft-swedish")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word; see the sketch after this
# example for one way to group them back to words.
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
0.25
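If you need one label per word rather than per token, a common convention is to keep the prediction of the first sub-token of each word. The sketch below relies on the word_ids() mapping exposed by fast tokenizers; the grouping rule is a post-processing choice, not part of the model:
from transformers import AutoTokenizer, GPTNeoXForTokenClassification
import torch
checkpoint = "LarsJonasson/pythia-410m-deduped-sft-swedish"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = GPTNeoXForTokenClassification.from_pretrained(checkpoint)
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
    predictions = model(**inputs).logits.argmax(-1)[0]
word_ids = inputs.word_ids(0)  # maps each token to the word it was split from
word_labels = {}
for token_index, word_index in enumerate(word_ids):
    if word_index is not None and word_index not in word_labels:
        # keep only the first sub-token's label for every word
        word_labels[word_index] = model.config.id2label[predictions[token_index].item()]
print(word_labels)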
RoBERTa-PreLayerNorm
Overview
The RoBERTa-PreLayerNorm model was proposed in fairseq: A Fast, Extensible Toolkit for Sequence Modeling by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli.
It is identical to using the --encoder-normalize-before flag in fairseq.
The abstract from the paper is the following:
fairseq is an open-source sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. The toolkit is based on PyTorch and supports distributed training across multiple GPUs and machines. We also support fast mixed-precision training and inference on modern GPUs.
Tips:
The implementation is the same as RoBERTa except that instead of using Add and Norm it does Norm and Add. Add and Norm refers to the addition and layer normalization described in Attention Is All You Need.
This is identical to using the --encoder-normalize-before flag in fairseq.
This model was contributed by andreasmadsen.
The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
RobertaPreLayerNormConfig
class transformers.RobertaPreLayerNormConfig(
vocab_size = 50265
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
position_embedding_type = 'absolute'
use_cache = True
classifier_dropout = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50265) —
Vocabulary size of the RoBERTa-PreLayerNorm model. Defines the number of different tokens that can be
represented by the input_ids passed when calling RobertaPreLayerNormModel or
TFRobertaPreLayerNormModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling RobertaPreLayerNormModel or
TFRobertaPreLayerNormModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
This is the configuration class to store the configuration of a RobertaPreLayerNormModel or a
TFRobertaPreLayerNormModel. It is used to instantiate a RoBERTa-PreLayerNorm model according to the specified
arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar
configuration to that of the RoBERTa-PreLayerNorm
andreasmadsen/efficient_mlm_m0.40 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import RobertaPreLayerNormConfig, RobertaPreLayerNormModel
# Initializing a RoBERTa-PreLayerNorm configuration
configuration = RobertaPreLayerNormConfig()
# Initializing a model (with random weights) from the configuration
model = RobertaPreLayerNormModel(configuration)
# Accessing the model configuration
configuration = model.config
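Any of the parameters listed above can be overridden to define a different architecture. The sizes below are arbitrary and only illustrate the mechanics of building a smaller, randomly initialized variant:
from transformers import RobertaPreLayerNormConfig, RobertaPreLayerNormModel
# deliberately small, illustrative sizes (hidden_size must stay divisible by num_attention_heads)
small_config = RobertaPreLayerNormConfig(
    hidden_size=256,
    num_hidden_layers=4,
    num_attention_heads=4,
    intermediate_size=512,
)
small_model = RobertaPreLayerNormModel(small_config)
print(sum(p.numel() for p in small_model.parameters()))  # rough parameter count of the variant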
RobertaPreLayerNormModel
class transformers.RobertaPreLayerNormModel(config, add_pooling_layer=True)
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare RoBERTa-PreLayerNorm Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention Is
All You Need (https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and
add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass.
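A minimal sketch of that decoder setup, assuming the andreasmadsen/efficient_mlm_m0.40 checkpoint. The cross-attention layers are newly initialized (and therefore untrained), and a random tensor stands in for real encoder outputs:
from transformers import AutoTokenizer, RobertaPreLayerNormConfig, RobertaPreLayerNormModel
import torch
checkpoint = "andreasmadsen/efficient_mlm_m0.40"
config = RobertaPreLayerNormConfig.from_pretrained(checkpoint)
config.is_decoder = True
config.add_cross_attention = True  # adds randomly initialized cross-attention layers
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
decoder = RobertaPreLayerNormModel.from_pretrained(checkpoint, config=config)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
# stand-in for the last hidden states of a real encoder
encoder_hidden_states = torch.randn(1, 5, config.hidden_size)
outputs = decoder(**inputs, encoder_hidden_states=encoder_hidden_states)
print(outputs.last_hidden_state.shape)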
forward(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with a type_vocab_size parameter with value
>= 2. All the values in this tensor should always be < type_vocab_size.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaPreLayerNormConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The RobertaPreLayerNormModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RobertaPreLayerNormModel
import torch
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = RobertaPreLayerNormModel.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
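Besides last_hidden_state, the output also carries the pooler_output described in the return section above; since this checkpoint was pretrained for masked language modeling, the pooling layer is likely freshly initialized, so treat its values as untrained. A short sketch of the two shapes:
from transformers import AutoTokenizer, RobertaPreLayerNormModel
import torch
checkpoint = "andreasmadsen/efficient_mlm_m0.40"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = RobertaPreLayerNormModel.from_pretrained(checkpoint)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
print(outputs.pooler_output.shape)      # (batch_size, hidden_size): processed first (<s>) token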
RobertaPreLayerNormForCausalLM
class transformers.RobertaPreLayerNormForCausalLM(config)
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa-PreLayerNorm Model with a language modeling head on top for CLM fine-tuning.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Tuple[typing.Tuple[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with a type_vocab_size parameter with value
>= 2. All the values in this tensor should always be < type_vocab_size.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are
ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaPreLayerNormConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The RobertaPreLayerNormForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RobertaPreLayerNormForCausalLM, AutoConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
config = AutoConfig.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
config.is_decoder = True
model = RobertaPreLayerNormForCausalLM.from_pretrained("andreasmadsen/efficient_mlm_m0.40", config=config)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
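Because the class implements the causal LM interface, generate() also works on it. The checkpoint was pretrained with masked language modeling, so until it is fine-tuned for CLM the generations will be poor; this sketch only shows the mechanics:
from transformers import AutoTokenizer, RobertaPreLayerNormForCausalLM, AutoConfig
checkpoint = "andreasmadsen/efficient_mlm_m0.40"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
config = AutoConfig.from_pretrained(checkpoint)
config.is_decoder = True
model = RobertaPreLayerNormForCausalLM.from_pretrained(checkpoint, config=config)
inputs = tokenizer("Hello, my dog is", return_tensors="pt")
# greedy decoding; the MLM-pretrained weights were never trained on this objective
generated = model.generate(**inputs, max_new_tokens=10, do_sample=False)
print(tokenizer.decode(generated[0], skip_special_tokens=True))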
RobertaPreLayerNormForMaskedLM
class transformers.RobertaPreLayerNormForMaskedLM(config)
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa-PreLayerNorm Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with a type_vocab_size parameter with value
>= 2. All the values in this tensor should always be < type_vocab_size.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
kwargs (Dict[str, any], optional, defaults to {}) —
Used to hide legacy arguments that have been deprecated.
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaPreLayerNormConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RobertaPreLayerNormForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RobertaPreLayerNormForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = RobertaPreLayerNormForMaskedLM.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
tokenizer.decode(predicted_token_id)
' Paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-<mask> tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(outputs.loss.item(), 2)
# 0.69
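As a small, hypothetical follow-up to the snippet above (reusing its logits, mask_token_index and tokenizer variables), one could also inspect the top-5 candidate tokens for the masked position:
# Hypothetical follow-up: top-5 candidates for the first masked position.
top5 = torch.topk(logits[0, mask_token_index], k=5, dim=-1).indices
print([tokenizer.decode([int(tok)]) for tok in top5[0]])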
RobertaPreLayerNormForSequenceClassification
class transformers.RobertaPreLayerNormForSequenceClassification( config )
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa-PreLayerNorm Model transformer with a sequence classification/regression head on top (a linear layer on top
of the pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with a type_vocab_size parameter with value
>= 2. All values in this tensor should always be < type_vocab_size.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaPreLayerNormConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RobertaPreLayerNormForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, RobertaPreLayerNormForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = RobertaPreLayerNormForSequenceClassification.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = RobertaPreLayerNormForSequenceClassification.from_pretrained("andreasmadsen/efficient_mlm_m0.40", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, RobertaPreLayerNormForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = RobertaPreLayerNormForSequenceClassification.from_pretrained("andreasmadsen/efficient_mlm_m0.40", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = RobertaPreLayerNormForSequenceClassification.from_pretrained(
    "andreasmadsen/efficient_mlm_m0.40", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
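A hypothetical follow-up, reusing predicted_class_ids from above, to map the predicted class indices back to their label names:
# Hypothetical follow-up: human-readable label names for the predicted classes.
predicted_labels = [model.config.id2label[int(i)] for i in predicted_class_ids]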
RobertaPreLayerNormForMultipleChoice
class transformers.RobertaPreLayerNormForMultipleChoice( config )
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RobertaPreLayerNorm Model with a multiple choice classification head on top (a linear layer on top of the pooled
output and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with a type_vocab_size parameter with value
>= 2. All values in this tensor should always be < type_vocab_size.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaPreLayerNormConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RobertaPreLayerNormForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RobertaPreLayerNormForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = RobertaPreLayerNormForMultipleChoice.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
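A minimal, hypothetical follow-up showing how the predicted choice would be read off the logits (keep in mind the classification head above is still untrained):
# Hypothetical follow-up: index of the highest-scoring choice (0 -> choice0, 1 -> choice1).
predicted_choice = logits.argmax(dim=-1).item()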
RobertaPreLayerNormForTokenClassification
class transformers.RobertaPreLayerNormForTokenClassification( config )
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RobertaPreLayerNorm Model with a token classification head on top (a linear layer on top of the hidden-states
output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with a type_vocab_size parameter with value
>= 2. All values in this tensor should always be < type_vocab_size.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaPreLayerNormConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RobertaPreLayerNormForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RobertaPreLayerNormForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = RobertaPreLayerNormForTokenClassification.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word.
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
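As a hypothetical follow-up (it assumes a fast tokenizer, which exposes word_ids()), the token-level predictions can be aligned back to the words they belong to:
# Hypothetical follow-up: pair each predicted class with the word index of its token.
word_ids = inputs.word_ids(batch_index=0)
word_to_class = list(zip(word_ids, predicted_tokens_classes))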
RobertaPreLayerNormForQuestionAnswering
class transformers.RobertaPreLayerNormForQuestionAnswering( config )
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RobertaPreLayerNorm Model with a span classification head on top for extractive question-answering tasks like SQuAD
(linear layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0,1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
This parameter can only be used when the model is initialized with a type_vocab_size parameter with value
>= 2. All values in this tensor should always be < type_vocab_size.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for the position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for the position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaPreLayerNormConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RobertaPreLayerNormForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RobertaPreLayerNormForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = RobertaPreLayerNormForQuestionAnswering.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
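A small follow-up, not part of the original snippet, to decode the predicted span back into text (assuming the span indices computed above are sensible):
# Hypothetical follow-up: turn the predicted span tokens back into a string.
tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)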
TFRobertaPreLayerNormModel
class transformers.TFRobertaPreLayerNormModel( *args, **kwargs )
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare RoBERTa-PreLayerNorm Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
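For illustration, here is a minimal sketch of the three equivalent call formats described above (the checkpoint and tensor names are taken from the examples in this section and are assumptions, not requirements):
import tensorflow as tf
from transformers import AutoTokenizer, TFRobertaPreLayerNormModel

tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = TFRobertaPreLayerNormModel.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
enc = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1) keyword arguments (like PyTorch models)
out_kw = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])
# 2) a list in the first positional argument, in the order given in the docstring
out_list = model([enc["input_ids"], enc["attention_mask"]])
# 3) a dictionary keyed by the input names
out_dict = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})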
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RobertaPreLayerNormConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you’re often better off
averaging or pooling the sequence of hidden-states for the whole input sequence.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The TFRobertaPreLayerNormModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRobertaPreLayerNormModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = TFRobertaPreLayerNormModel.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
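Since the returned pooler_output is often not a good sentence summary (see the note above), a common alternative is to pool last_hidden_state yourself. A rough sketch, reusing the inputs and last_hidden_states variables from the example above:
# Rough sketch: attention-mask-weighted mean pooling over the token dimension.
mask = tf.cast(inputs["attention_mask"], last_hidden_states.dtype)[:, :, tf.newaxis]
sentence_embedding = tf.reduce_sum(last_hidden_states * mask, axis=1) / tf.reduce_sum(mask, axis=1)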
TFRobertaPreLayerNormForCausalLM
class transformers.TFRobertaPreLayerNormForCausalLM( *args, **kwargs )
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RobertaPreLayerNormConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The TFRobertaPreLayerNormForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRobertaPreLayerNormForCausalLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = TFRobertaPreLayerNormForCausalLM.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
logits = outputs.logits
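A rough, hypothetical sketch of how the past_key_values / use_cache mechanism documented above could be used for step-by-step greedy decoding; in practice model.generate() handles this, including attention-mask bookkeeping, so this is only meant to illustrate the cache:
# First pass over the prompt: ask the model to return its key/value cache.
outputs = model(**inputs, use_cache=True)
next_token = tf.argmax(outputs.logits[:, -1, :], axis=-1)[:, tf.newaxis]

# Next pass: feed only the new token together with the cached states.
outputs = model(input_ids=next_token, past_key_values=outputs.past_key_values, use_cache=True)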
TFRobertaPreLayerNormForMaskedLM
class transformers.TFRobertaPreLayerNormForMaskedLM( *args, **kwargs )
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa-PreLayerNorm Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RobertaPreLayerNormConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRobertaPreLayerNormForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRobertaPreLayerNormForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = TFRobertaPreLayerNormForMaskedLM.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
tokenizer.decode(predicted_token_id)
# ' Paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-<mask> tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(float(outputs.loss), 2)
# 0.69
TFRobertaPreLayerNormForSequenceClassification
class transformers.TFRobertaPreLayerNormForSequenceClassification( *args, **kwargs )
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa-PreLayerNorm Model transformer with a sequence classification/regression head on top (a linear layer on top
of the pooled output) e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function! A short sketch of the
three input formats is shown below.
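For illustration, here is a minimal sketch of the three positional-argument formats, using the same checkpoint as the examples below (any RoBERTa-PreLayerNorm checkpoint would do):
from transformers import AutoTokenizer, TFRobertaPreLayerNormForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = TFRobertaPreLayerNormForSequenceClassification.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")
# 1. a single tensor with input_ids only
outputs = model(encoding["input_ids"])
# 2. a list of tensors, in the order given in the docstring
outputs = model([encoding["input_ids"], encoding["attention_mask"]])
# 3. a dictionary mapping input names to tensors
outputs = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})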
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RobertaPreLayerNormConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRobertaPreLayerNormForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRobertaPreLayerNormForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = TFRobertaPreLayerNormForSequenceClassification.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFRobertaPreLayerNormForSequenceClassification.from_pretrained("andreasmadsen/efficient_mlm_m0.40", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
TFRobertaPreLayerNormForMultipleChoice
class transformers.TFRobertaPreLayerNormForMultipleChoice
( *args, **kwargs )
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RobertaPreLayerNorm Model with a multiple choice classification head on top (a linear layer on top of the pooled
output and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices]
where num_choices is the size of the second dimension of the input tensors. (See input_ids above)
Returns
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RobertaPreLayerNormConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRobertaPreLayerNormForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRobertaPreLayerNormForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = TFRobertaPreLayerNormForMultipleChoice.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
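To turn these logits into a predicted choice, one option is a simple argmax over the choice dimension (a sketch; the classification head above is untrained, so the result is not meaningful):
predicted_choice = int(tf.math.argmax(outputs.logits, axis=-1)[0])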
TFRobertaPreLayerNormForTokenClassification
class transformers.TFRobertaPreLayerNormForTokenClassification
( *args, **kwargs )
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa-PreLayerNorm Model with a token classification head on top (a linear layer on top of the hidden-states
output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RobertaPreLayerNormConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRobertaPreLayerNormForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRobertaPreLayerNormForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = TFRobertaPreLayerNormForTokenClassification.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
TFRobertaPreLayerNormForQuestionAnswering
class transformers.TFRobertaPreLayerNormForQuestionAnswering
( *args, **kwargs )
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa-PreLayerNorm Model with a span classification head on top for extractive question-answering tasks like
SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
start_positions: np.ndarray | tf.Tensor | None = None
end_positions: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RobertaPreLayerNormConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRobertaPreLayerNormForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRobertaPreLayerNormForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = TFRobertaPreLayerNormForQuestionAnswering.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
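To recover the answer as a string, the selected span of token ids can be decoded with the tokenizer (a short sketch):
tokenizer.decode(predict_answer_tokens)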
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
FlaxRobertaPreLayerNormModel
class transformers.FlaxRobertaPreLayerNormModel
(
config: RobertaPreLayerNormConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare RoBERTa-PreLayerNorm Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as the following (a short jax.jit sketch is shown after the list):
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
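A minimal sketch of the JIT support, assuming the same checkpoint as the example below; the forward pass is wrapped in jax.jit, so the first call compiles it and later calls reuse the compiled program:
import jax
from transformers import AutoTokenizer, FlaxRobertaPreLayerNormModel
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = FlaxRobertaPreLayerNormModel.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
@jax.jit
def encode(input_ids, attention_mask):
    # returns the last hidden states as a jnp.ndarray
    return model(input_ids, attention_mask=attention_mask).last_hidden_state
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
last_hidden_states = encode(inputs["input_ids"], inputs["attention_mask"])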
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaPreLayerNormConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRobertaPreLayerNormPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRobertaPreLayerNormModel
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = FlaxRobertaPreLayerNormModel.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
FlaxRobertaPreLayerNormForCausalLM
class transformers.FlaxRobertaPreLayerNormForCausalLM
(
config: RobertaPreLayerNormConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RobertaPreLayerNorm Model with a language modeling head on top (a linear layer on top of the hidden-states output),
e.g. for autoregressive tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaPreLayerNormConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attention weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value
states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting.
Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The FlaxRobertaPreLayerNormPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRobertaPreLayerNormForCausalLM
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = FlaxRobertaPreLayerNormForCausalLM.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
# retrieve logits for the next token
next_token_logits = outputs.logits[:, -1]
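As a small illustrative follow-up, the next token can be picked greedily from these logits and decoded (plain argmax decoding; sampling strategies are out of scope here):
import jax.numpy as jnp
next_token_id = int(jnp.argmax(next_token_logits, axis=-1)[0])
tokenizer.decode([next_token_id])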
FlaxRobertaPreLayerNormForMaskedLM
class transformers.FlaxRobertaPreLayerNormForMaskedLM
(
config: RobertaPreLayerNormConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoBERTa-PreLayerNorm Model with a language modeling head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaPreLayerNormConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRobertaPreLayerNormPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRobertaPreLayerNormForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = FlaxRobertaPreLayerNormForMaskedLM.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
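A short sketch of how the masked token can be read off from these logits (illustrative only; it mirrors the TF example above):
import jax.numpy as jnp
# locate the <mask> position and take the highest-scoring vocabulary token
mask_index = int(jnp.argmax(inputs["input_ids"][0] == tokenizer.mask_token_id))
predicted_token_id = int(jnp.argmax(logits[0, mask_index]))
tokenizer.decode([predicted_token_id])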
FlaxRobertaPreLayerNormForSequenceClassification
class transformers.FlaxRobertaPreLayerNormForSequenceClassification
(
config: RobertaPreLayerNormConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RobertaPreLayerNorm Model transformer with a sequence classification/regression head on top (a linear layer on top
of the pooled output) e.g. for GLUE tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaPreLayerNormConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRobertaPreLayerNormPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRobertaPreLayerNormForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = FlaxRobertaPreLayerNormForSequenceClassification.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
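As with the TF example above, the predicted class can be obtained with an argmax (a sketch; this checkpoint was not fine-tuned for classification, so the label is not meaningful):
import jax.numpy as jnp
predicted_class_id = int(jnp.argmax(logits, axis=-1)[0])
model.config.id2label[predicted_class_id]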
FlaxRobertaPreLayerNormForMultipleChoice
class transformers.FlaxRobertaPreLayerNormForMultipleChoice
(
config: RobertaPreLayerNormConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RobertaPreLayerNorm Model with a multiple choice classification head on top (a linear layer on top of the pooled
output and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaPreLayerNormConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRobertaPreLayerNormPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRobertaPreLayerNormForMultipleChoice
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = FlaxRobertaPreLayerNormForMultipleChoice.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="jax", padding=True)
outputs = model(**{k: v[None, :] for k, v in encoding.items()})
logits = outputs.logits
FlaxRobertaPreLayerNormForTokenClassification
class transformers.FlaxRobertaPreLayerNormForTokenClassification
<
source
>
(
config: RobertaPreLayerNormConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RobertaPreLayerNorm Model with a token classification head on top (a linear layer on top of the hidden-states
output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaPreLayerNormConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRobertaPreLayerNormPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRobertaPreLayerNormForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = FlaxRobertaPreLayerNormForTokenClassification.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
FlaxRobertaPreLayerNormForQuestionAnswering
class transformers.FlaxRobertaPreLayerNormForQuestionAnswering
<
source
>
(
config: RobertaPreLayerNormConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (RobertaPreLayerNormConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RobertaPreLayerNorm Model with a span classification head on top for extractive question-answering tasks like SQuAD
(linear layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RobertaPreLayerNormConfig) and inputs.
start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRobertaPreLayerNormPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRobertaPreLayerNormForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
model = FlaxRobertaPreLayerNormForQuestionAnswering.from_pretrained("andreasmadsen/efficient_mlm_m0.40")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="jax")
outputs = model(**inputs)
start_scores = outputs.start_logits
end_scores = outputs.end_logits
Donut
The easiest way to get started with Donut is by checking the tutorial
notebooks, which show how to use the model
at inference time as well as fine-tuning on custom data.
Donut is always used within the VisionEncoderDecoder framework.
Inference
Donut’s VisionEncoderDecoder model accepts images as input and makes use of
generate() to autoregressively generate text given the input image.
The DonutImageProcessor class is responsible for preprocessing the input image and
[XLMRobertaTokenizer/XLMRobertaTokenizerFast] decodes the generated target tokens to the target string. The
DonutProcessor wraps DonutImageProcessor and [XLMRobertaTokenizer/XLMRobertaTokenizerFast]
into a single instance to both extract the input features and decode the predicted token ids.
Step-by-step Document Image Classification
import re
from transformers import DonutProcessor, VisionEncoderDecoderModel
from datasets import load_dataset
import torch
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
# load document image
dataset = load_dataset("hf-internal-testing/example-documents", split="test")
image = dataset[1]["image"]
# prepare decoder inputs
task_prompt = "<s_rvlcdip>"
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
pixel_values = processor(image, return_tensors="pt").pixel_values
outputs = model.generate(
... pixel_values.to(device),
... decoder_input_ids=decoder_input_ids.to(device),
... max_length=model.decoder.config.max_position_embeddings,
... early_stopping=True,
... pad_token_id=processor.tokenizer.pad_token_id,
... eos_token_id=processor.tokenizer.eos_token_id,
... use_cache=True,
... num_beams=1,
... bad_words_ids=[[processor.tokenizer.unk_token_id]],
... return_dict_in_generate=True,
... )
sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token
print(processor.token2json(sequence))
{'class': 'advertisement'}
Step-by-step Document Parsing
import re
from transformers import DonutProcessor, VisionEncoderDecoderModel
from datasets import load_dataset
import torch
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
# load document image
dataset = load_dataset("hf-internal-testing/example-documents", split="test")
image = dataset[2]["image"]
# prepare decoder inputs
task_prompt = "<s_cord-v2>"
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
pixel_values = processor(image, return_tensors="pt").pixel_values
outputs = model.generate(
... pixel_values.to(device),
... decoder_input_ids=decoder_input_ids.to(device),
... max_length=model.decoder.config.max_position_embeddings,
... early_stopping=True,
... pad_token_id=processor.tokenizer.pad_token_id,
... eos_token_id=processor.tokenizer.eos_token_id,
... use_cache=True,
... num_beams=1,
... bad_words_ids=[[processor.tokenizer.unk_token_id]],
... return_dict_in_generate=True,
... )
sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token
print(processor.token2json(sequence))
{'menu': {'nm': 'CINNAMON SUGAR', 'unitprice': '17,000', 'cnt': '1 x', 'price': '17,000'}, 'sub_total': {'subtotal_price': '17,000'}, 'total': {'total_price': '17,000', 'cashprice': '20,000', 'changeprice': '3,000'}}
Step-by-step Document Visual Question Answering (DocVQA)
import re
from transformers import DonutProcessor, VisionEncoderDecoderModel
from datasets import load_dataset
import torch
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
# load document image from the DocVQA dataset
dataset = load_dataset("hf-internal-testing/example-documents", split="test")
image = dataset[0]["image"]
# prepare decoder inputs
task_prompt = "<s_docvqa><s_question>{user_input}</s_question><s_answer>"
question = "When is the coffee break?"
prompt = task_prompt.replace("{user_input}", question)
decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids
pixel_values = processor(image, return_tensors="pt").pixel_values
outputs = model.generate(
... pixel_values.to(device),
... decoder_input_ids=decoder_input_ids.to(device),
... max_length=model.decoder.config.max_position_embeddings,
... early_stopping=True,
... pad_token_id=processor.tokenizer.pad_token_id,
... eos_token_id=processor.tokenizer.eos_token_id,
... use_cache=True,
... num_beams=1,
... bad_words_ids=[[processor.tokenizer.unk_token_id]],
... return_dict_in_generate=True,
... )
sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token
print(processor.token2json(sequence))
{'question': 'When is the coffee break?', 'answer': '11-14 to 11:39 a.m.'}
See the model hub to look for Donut checkpoints.
Training
We refer to the tutorial notebooks.
DonutSwinConfig
class transformers.DonutSwinConfig
<
source
>
(
image_size = 224
patch_size = 4
num_channels = 3
embed_dim = 96
depths = [2, 2, 6, 2]
num_heads = [3, 6, 12, 24]
window_size = 7
mlp_ratio = 4.0
qkv_bias = True
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
drop_path_rate = 0.1
hidden_act = 'gelu'
use_absolute_embeddings = False
initializer_range = 0.02
layer_norm_eps = 1e-05
**kwargs
)
Parameters
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 4) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
embed_dim (int, optional, defaults to 96) —
Dimensionality of patch embedding.
depths (list(int), optional, defaults to [2, 2, 6, 2]) —
Depth of each layer in the Transformer encoder.
num_heads (list(int), optional, defaults to [3, 6, 12, 24]) —
Number of attention heads in each layer of the Transformer encoder.
window_size (int, optional, defaults to 7) —
Size of windows.
mlp_ratio (float, optional, defaults to 4.0) —
Ratio of MLP hidden dimensionality to embedding dimensionality.
qkv_bias (bool, optional, defaults to True) —
Whether or not a learnable bias should be added to the queries, keys and values.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings and encoder.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
drop_path_rate (float, optional, defaults to 0.1) —
Stochastic depth rate.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder. If string, "gelu", "relu",
"selu" and "gelu_new" are supported.
use_absolute_embeddings (bool, optional, defaults to False) —
Whether or not to add absolute position embeddings to the patch embeddings.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-05) —
The epsilon used by the layer normalization layers.
This is the configuration class to store the configuration of a DonutSwinModel. It is used to instantiate a
Donut model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Donut
naver-clova-ix/donut-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import DonutSwinConfig, DonutSwinModel
# Initializing a Donut naver-clova-ix/donut-base style configuration
configuration = DonutSwinConfig()
# Randomly initializing a model from the naver-clova-ix/donut-base style configuration
model = DonutSwinModel(configuration)
# Accessing the model configuration
configuration = model.config
DonutImageProcessor
class transformers.DonutImageProcessor
<
source
>
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_thumbnail: bool = True
do_align_long_axis: bool = False
do_pad: bool = True
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by
do_resize in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 224}) —
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
the longest edge resized to keep the input aspect ratio. Can be overridden by size in the preprocess
method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image. Can be overridden by resample in the preprocess method.
do_thumbnail (bool, optional, defaults to True) —
Whether to resize the image using thumbnail method.
do_align_long_axis (bool, optional, defaults to False) —
Whether to align the long axis of the image with the long axis of size by rotating by 90 degrees.
do_pad (bool, optional, defaults to True) —
Whether to pad the image. If random_padding is set to True in preprocess, each image is padded with a
random amount of padding on each side, up to the largest image size in the batch. Otherwise, all images are
padded to the largest image size in the batch.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by do_rescale in
the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by rescale_factor in the preprocess
method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by do_normalize in the preprocess method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Image standard deviation.
Constructs a Donut image processor.
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: bool = None
size: typing.Dict[str, int] = None
resample: Resampling = None
do_thumbnail: bool = None
do_align_long_axis: bool = None
do_pad: bool = None
random_padding: bool = False
do_rescale: bool = None
rescale_factor: float = None
do_normalize: bool = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Optional[transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing. Shortest edge of the image is resized to min(size[“height”],
size[“width”]) with the longest edge resized to keep the input aspect ratio.
resample (int, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only
has an effect if do_resize is set to True.
do_thumbnail (bool, optional, defaults to self.do_thumbnail) —
Whether to resize the image using thumbnail method.
do_align_long_axis (bool, optional, defaults to self.do_align_long_axis) —
Whether to align the long axis of the image with the long axis of size by rotating by 90 degrees.
do_pad (bool, optional, defaults to self.do_pad) —
Whether to pad the image. If random_padding is set to True, each image is padded with a random
amount of padding on each side, up to the largest image size in the batch. Otherwise, all images are
padded to the largest image size in the batch.
random_padding (bool, optional, defaults to self.random_padding) —
Whether to use random padding when padding the image. If True, each image in the batch will be padded
with a random amount of padding on each side, up to the size of the largest image in the batch.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image pixel values.
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean to use for normalization.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation to use for normalization.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: defaults to the channel dimension format of the input image.
Preprocess an image or batch of images.
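A minimal usage sketch, not part of the original docs, that runs the image processor on one of the document images from the inference examples above and inspects the resulting pixel_values:
from datasets import load_dataset
from transformers import DonutImageProcessor

image_processor = DonutImageProcessor.from_pretrained("naver-clova-ix/donut-base")

# Reuse a document image from the dataset used in the inference examples above
image = load_dataset("hf-internal-testing/example-documents", split="test")[1]["image"]

# Returns a BatchFeature whose `pixel_values` tensor has shape (batch_size, num_channels, height, width)
inputs = image_processor(image, return_tensors="pt")
print(inputs["pixel_values"].shape)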
DonutFeatureExtractor
class transformers.DonutFeatureExtractor
<
source
>
(
*args
**kwargs
)
__call__
<
source
>
(
images
**kwargs
)
Preprocess an image or a batch of images.
DonutProcessor
class transformers.DonutProcessor
<
source
>
(
image_processor = None
tokenizer = None
**kwargs
)
Parameters
image_processor (DonutImageProcessor) —
An instance of DonutImageProcessor. The image processor is a required input.
tokenizer ([XLMRobertaTokenizer/XLMRobertaTokenizerFast]) —
An instance of [XLMRobertaTokenizer/XLMRobertaTokenizerFast]. The tokenizer is a required input.
Constructs a Donut processor which wraps a Donut image processor and an XLMRoBERTa tokenizer into a single
processor.
DonutProcessor offers all the functionalities of DonutImageProcessor and
[XLMRobertaTokenizer/XLMRobertaTokenizerFast]. See the call() and
decode() for more information.
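A minimal sketch, not part of the original docs, of the two roles the processor plays (image preprocessing and token encoding/decoding), reusing the rvlcdip checkpoint and dataset from the classification example above:
from datasets import load_dataset
from transformers import DonutProcessor

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
image = load_dataset("hf-internal-testing/example-documents", split="test")[1]["image"]

# Image side: forwarded to DonutImageProcessor
pixel_values = processor(image, return_tensors="pt").pixel_values

# Text side: the wrapped XLM-RoBERTa tokenizer encodes prompts and decodes generated ids
decoder_input_ids = processor.tokenizer("<s_rvlcdip>", add_special_tokens=False, return_tensors="pt").input_ids
print(pixel_values.shape, decoder_input_ids.shape)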
__call__
<
source
>
(
*args
**kwargs
)
When used in normal mode, this method forwards all its arguments to AutoImageProcessor’s
__call__() and returns its output. If used in the context
as_target_processor(), this method forwards all its arguments to DonutTokenizer’s
__call__(). Please refer to the docstring of the above two methods for more information.
from_pretrained
<
source
>
(
pretrained_model_name_or_path: typing.Union[str, os.PathLike]
cache_dir: typing.Union[str, os.PathLike, NoneType] = None
force_download: bool = False
local_files_only: bool = False
token: typing.Union[bool, str, NoneType] = None
revision: str = 'main'
**kwargs
)
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
This can be either:
a string, the model id of a pretrained feature_extractor hosted inside a model repo on
huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or
namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
a path to a directory containing a feature extractor file saved using the
save_pretrained() method, e.g., ./my_model_directory/.
a path or url to a saved feature extractor JSON file, e.g.,
./my_model_directory/preprocessor_config.json.
**kwargs —
Additional keyword arguments passed along to both
from_pretrained() and
PreTrainedTokenizer.from_pretrained().
Instantiate a processor associated with a pretrained model.
This class method simply calls the feature extractor’s
from_pretrained(), the image processor’s
ImageProcessingMixin.from_pretrained() and the tokenizer’s
PreTrainedTokenizer.from_pretrained() methods. Please refer to the docstrings of the
methods above for more information.
save_pretrained
<
source
>
(
save_directory
push_to_hub: bool = False
**kwargs
)
Parameters
save_directory (str or os.PathLike) —
Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will
be created if it does not exist).
push_to_hub (bool, optional, defaults to False) —
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with repo_id (will default to the name of save_directory in your
namespace).
kwargs (Dict[str, Any], optional) —
Additional keyword arguments passed along to the push_to_hub() method.
Saves the attributes of this processor (feature extractor, tokenizer…) in the specified directory so that it
can be reloaded using the from_pretrained() method.
This class method simply calls the feature extractor’s save_pretrained() and the tokenizer’s
save_pretrained() methods. Please refer to the docstrings of the
methods above for more information.
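A minimal save/reload round-trip sketch (the local directory name is an arbitrary assumption, not from the original docs):
from transformers import DonutProcessor

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
processor.save_pretrained("./donut-processor")            # writes the image processor config and tokenizer files
reloaded = DonutProcessor.from_pretrained("./donut-processor")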
batch_decode
<
source
>
(
*args
**kwargs
)
This method forwards all its arguments to DonutTokenizer’s batch_decode(). Please refer
to the docstring of this method for more information.
decode
<
source
>
(
*args
**kwargs
)
This method forwards all its arguments to DonutTokenizer’s decode(). Please refer to the
docstring of this method for more information.
DonutSwinModel
class transformers.DonutSwinModel
<
source
>
(
config
add_pooling_layer = True
use_mask_token = False
)
Parameters
config (DonutSwinConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Donut Swin Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.FloatTensor] = None
bool_masked_pos: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.donut.modeling_donut_swin.DonutSwinModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
DonutImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches)) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.models.donut.modeling_donut_swin.DonutSwinModelOutput or tuple(torch.FloatTensor)
A transformers.models.donut.modeling_donut_swin.DonutSwinModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DonutSwinConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size), optional, returned when add_pooling_layer=True is passed) — Average pooling of the last layer hidden-state.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The DonutSwinModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, DonutSwinModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("naver-clova-ix/donut-base")
model = DonutSwinModel.from_pretrained("naver-clova-ix/donut-base")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 49, 768]
LLaMA
Overview
The LLaMA model was proposed in LLaMA: Open and Efficient Foundation Language Models by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. It is a collection of foundation language models ranging from 7B to 65B parameters.
The abstract from the paper is the following:
We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.
Tips:
Weights for the LLaMA models can be obtained by filling out this form
After downloading the weights, they will need to be converted to the Hugging Face Transformers format using the conversion script. The script can be called with the following (example) command:
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
After conversion, the model and tokenizer can be loaded via:
from transformers import LlamaForCausalLM, LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = LlamaForCausalLM.from_pretrained("/output/path")
Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even though the biggest versions
come in several checkpoints, each of them contains a part of each weight of the model, so we need to load them all in RAM). For the 65B model, this means 130GB of RAM is needed.
The LLaMA tokenizer is a BPE model based on sentencepiece. One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of a word (e.g. “Banana”), the tokenizer does not prepend the prefix space to the string.
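An illustrative sketch of the decoding behaviour described above (the small test tokenizer is the same one used in the LlamaTokenizerFast example further below):
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")
ids = tokenizer.encode("Banana", add_special_tokens=False)
# The decoded string starts directly with "Banana"; no prefix space is prepended.
print(tokenizer.decode(ids))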
This model was contributed by zphang with contributions from BlackSamorez. The code of the implementation in Hugging Face is based on GPT-NeoX here. The original code of the authors can be found here.
Based on the original LLaMA model, Meta AI has released some follow-up works:
Llama2: Llama2 is an improved version of Llama with some architectural tweaks (Grouped Query Attention), and is pre-trained on 2 trillion tokens. Refer to the documentation of Llama2 which can be found here.
LlamaConfig
class transformers.LlamaConfig
<
source
>
(
vocab_size = 32000
hidden_size = 4096
intermediate_size = 11008
num_hidden_layers = 32
num_attention_heads = 32
num_key_value_heads = None
hidden_act = 'silu'
max_position_embeddings = 2048
initializer_range = 0.02
rms_norm_eps = 1e-06
use_cache = True
pad_token_id = 0
bos_token_id = 1
eos_token_id = 2
pretraining_tp = 1
tie_word_embeddings = False
rope_scaling = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 32000) —
Vocabulary size of the LLaMA model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling LlamaModel
hidden_size (int, optional, defaults to 4096) —
Dimension of the hidden representations.
intermediate_size (int, optional, defaults to 11008) —
Dimension of the MLP representations.
num_hidden_layers (int, optional, defaults to 32) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 32) —
Number of attention heads for each attention layer in the Transformer encoder.
num_key_value_heads (int, optional) —
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
num_key_value_heads=num_attention_heads, the model will use Multi Head Attention (MHA); if
num_key_value_heads=1, the model will use Multi Query Attention (MQA); otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by mean-pooling all the original heads within that group. For more details, check out this paper (https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to num_attention_heads.
pretraining_tp (int, optional, defaults to 1) —
Experimental feature. Tensor parallelism rank used during pretraining. Please refer to this
document to understand more about it. This value is
necessary to ensure exact reproducibility of the pretraining results. Please refer to this
issue.
hidden_act (str or function, optional, defaults to "silu") —
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (int, optional, defaults to 2048) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (float, optional, defaults to 1e-06) —
The epsilon used by the rms normalization layers.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
tie_word_embeddings (bool, optional, defaults to False) —
Whether to tie the input and output word embeddings.
rope_scaling (Dict, optional) —
Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format
is {"type": strategy name, "factor": scaling factor}. When using this flag, don’t update
max_position_embeddings to the expected new maximum. See the following thread for more information on how
these scaling strategies behave:
https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an
experimental feature, subject to breaking API changes in future versions.
This is the configuration class to store the configuration of a LlamaModel. It is used to instantiate an LLaMA
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the LLaMA-7B.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import LlamaModel, LlamaConfig
# Initializing a LLaMA llama-7b style configuration
configuration = LlamaConfig()
# Initializing a model from the llama-7b style configuration
model = LlamaModel(configuration)
# Accessing the model configuration
configuration = model.config
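A further sketch (the factor value is an arbitrary assumption) showing the rope_scaling format described in the parameters above:
from transformers import LlamaConfig

# Dynamic RoPE scaling with the expected {"type": ..., "factor": ...} format
config = LlamaConfig(rope_scaling={"type": "dynamic", "factor": 2.0})
print(config.rope_scaling)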
LlamaTokenizer
class transformers.LlamaTokenizer
<
source
>
(
vocab_file
unk_token = '<unk>'
bos_token = '<s>'
eos_token = '</s>'
pad_token = None
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
add_bos_token = True
add_eos_token = False
clean_up_tokenization_spaces = False
legacy = True
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
legacy (bool, optional, defaults to True) —
Whether or not the legacy behaviour of the tokenizer should be used. Legacy refers to the behaviour before the merge of #24622,
which includes fixes to properly handle tokens that appear after special tokens; see that pull request for a simple
example of the difference between legacy=True and legacy=False.
Construct a Llama tokenizer. Based on byte-level Byte-Pair-Encoding. The default padding token is unset as there is
no padding token in the original model.
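Since no padding token is defined by default, a common workaround (an assumption here, not prescribed by the original docs) is to reuse the eos token before batched padding; the checkpoint below is the small test tokenizer also used for LlamaTokenizerFast:
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")
tokenizer.pad_token = tokenizer.eos_token  # no pad token ships with the original model

batch = tokenizer(["Hello this is a test", "Hi"], padding=True, return_tensors="pt")
print(batch["input_ids"].shape)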
build_inputs_with_special_tokens
<
source
>
(
token_ids_0
token_ids_1 = None
)
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of ids.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An ALBERT
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
if token_ids_1 is None, only returns the first portion of the mask (0s).
save_vocabulary
<
source
>
(
save_directory
filename_prefix: typing.Optional[str] = None
)
→
Tuple(str)
Parameters
save_directory (str) —
The directory in which to save the vocabulary.
Returns
Tuple(str)
Paths to the files saved.
Save the vocabulary and special tokens file to a directory.
LlamaTokenizerFast
class transformers.LlamaTokenizerFast
<
source
>
(
vocab_file = None
tokenizer_file = None
clean_up_tokenization_spaces = False
unk_token = '<unk>'
bos_token = '<s>'
eos_token = '</s>'
add_bos_token = True
add_eos_token = False
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .model extension) that
contains the vocabulary necessary to instantiate a tokenizer.
tokenizer_file (str) —
tokenizers file (generally has a .json extension) that
contains everything needed to load the tokenizer.
clean_up_tokenization_spaces (bool, optional, defaults to False) —
Whether to clean up spaces after decoding; cleanup consists of removing potential artifacts like extra
spaces.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
Construct a Llama tokenizer. Based on byte-level Byte-Pair-Encoding.
This uses notably ByteFallback and no normalization.
from transformers import LlamaTokenizerFast
tokenizer = LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")
tokenizer.encode("Hello this is a test")
[1, 15043, 445, 338, 263, 1243]
If you want to change the bos_token or the eos_token, make sure to specify them when initializing the tokenizer, or
call tokenizer.update_post_processor() to make sure that the post-processing is correctly done (otherwise the
values of the first token and final token of an encoded sequence will not be correct). For more details, check out
the post-processors documentation (https://huggingface.co/docs/tokenizers/api/post-processors).
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
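A minimal sketch, not part of the original docs, of the two ways described above to change the special tokens:
from transformers import LlamaTokenizerFast

# Option 1: pass the tokens when loading the tokenizer
tokenizer = LlamaTokenizerFast.from_pretrained(
    "hf-internal-testing/llama-tokenizer", bos_token="<s>", eos_token="</s>", add_eos_token=True
)

# Option 2: change them afterwards, then refresh the post-processor so encoding uses the new tokens
tokenizer.eos_token = "</s>"
tokenizer.update_post_processor()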
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) — The first tokenized sequence.
token_ids_1 (List[int], optional) — The second tokenized sequence.
Returns
List[int]
The model input with special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens.
This implementation does not add special tokens and this method should be overridden in a subclass.
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
A list of integers in the range [0, 1]
Parameters
token_ids_0 (List[int]) —
List of ids of the first sequence.
token_ids_1 (List[int], optional) —
List of ids of the second sequence.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
A list of integers in the range [0, 1]
1 for a special token, 0 for a sequence token.
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model or encode_plus methods.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) — The first tokenized sequence.
token_ids_1 (List[int], optional) — The second tokenized sequence.
Returns
List[int]
The token type ids.
Create the token type IDs corresponding to the sequences passed. What are token type
IDs?
Should be overridden in a subclass if the model has a special way of building those.
update_post_processor
<
source
>
(
)
Updates the underlying post processor with the current bos_token and eos_token.
save_vocabulary
<
source
>
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
LlamaModel
class transformers.LlamaModel
<
source
>
(
config: LlamaConfig
)
Parameters
config (LlamaConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare LLaMA Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Transformer decoder consisting of config.num_hidden_layers layers. Each layer is a LlamaDecoderLayer
forward
<
source
>
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
If past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask
and modify it to your needs. See diagram 1 in the paper for more
information on the default strategy.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
The LlamaModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
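For reference, a minimal sketch of running the bare LlamaModel to obtain hidden states; the checkpoint paths are placeholders for your own converted weights and tokenizer:
from transformers import AutoTokenizer, LlamaModel
model = LlamaModel.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
inputs = tokenizer("Hey, how are you doing today?", return_tensors="pt")
outputs = model(**inputs)
# last_hidden_state has shape (batch_size, sequence_length, hidden_size)
last_hidden_state = outputs.last_hidden_state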
LlamaForCausalLM
class transformers.LlamaForCausalLM
(
config
)
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
If past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask
and modify it to your needs. See diagram 1 in the paper for more
information on the default strategy.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LlamaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LlamaForCausalLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, LlamaForCausalLM
model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt")
# Generate
generate_ids = model.generate(inputs.input_ids, max_length=30)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
LlamaForSequenceClassification
class transformers.LlamaForSequenceClassification
(
config
)
Parameters
config (LlamaConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The LLaMA Model transformer with a sequence classification head on top (linear layer).
LlamaForSequenceClassification uses the last token to do the classification, as other causal models
(e.g. GPT-2) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If
no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when inputs_embeds are passed instead of input_ids, it does the same (takes the last value in
each row of the batch).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
If past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask
and modify it to your needs. See diagram 1 in the paper for more
information on the default strategy.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
The LlamaForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
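A minimal usage sketch, assuming a converted LLaMA checkpoint; the paths are placeholders and num_labels=2 is chosen only for illustration:
from transformers import AutoTokenizer, LlamaForSequenceClassification
# the classification head is randomly initialized here and would normally be fine-tuned
model = LlamaForSequenceClassification.from_pretrained(PATH_TO_CONVERTED_WEIGHTS, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
inputs = tokenizer("This movie was great!", return_tensors="pt")
logits = model(**inputs).logits
predicted_class_id = logits.argmax(dim=-1).item()
# when batching padded sequences, set model.config.pad_token_id so the model can find the last non-padding token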
Table Transformer
Overview
The Table Transformer model was proposed in PubTables-1M: Towards comprehensive table extraction from unstructured documents by
Brandon Smock, Rohith Pesala, Robin Abraham. The authors introduce a new dataset, PubTables-1M, to benchmark progress in table extraction from unstructured documents,
as well as table structure recognition and functional analysis. The authors train 2 DETR models, one for table detection and one for table structure recognition, dubbed Table Transformers.
The abstract from the paper is the following:
Recently, significant progress has been made applying machine learning to the problem of table structure inference and extraction from unstructured documents.
However, one of the greatest challenges remains the creation of datasets with complete, unambiguous ground truth at scale. To address this, we develop a new, more
comprehensive dataset for table extraction, called PubTables-1M. PubTables-1M contains nearly one million tables from scientific articles, supports multiple input
modalities, and contains detailed header and location information for table structures, making it useful for a wide variety of modeling approaches. It also addresses a significant
source of ground truth inconsistency observed in prior datasets called oversegmentation, using a novel canonicalization procedure. We demonstrate that these improvements lead to a
significant increase in training performance and a more reliable estimate of model performance at evaluation for table structure recognition. Further, we show that transformer-based
object detection models trained on PubTables-1M produce excellent results for all three tasks of detection, structure recognition, and functional analysis without the need for any
special customization for these tasks.
Tips:
The authors released 2 models, one for table detection in documents and one for table structure recognition (the task of recognizing the individual rows, columns, etc. in a table).
One can use the AutoImageProcessor API to prepare images and optional targets for the model. This will load a DetrImageProcessor behind the scenes.
Figure (from the original paper): table detection and table structure recognition.
This model was contributed by nielsr. The original code can be
found here.
Resources
Object Detection
A demo notebook for the Table Transformer can be found here.
It turns out padding of images is quite important for detection. An interesting Github thread with replies from the authors can be found here.
TableTransformerConfig
class transformers.TableTransformerConfig
(
use_timm_backbone = True
backbone_config = None
num_channels = 3
num_queries = 100
encoder_layers = 6
encoder_ffn_dim = 2048
encoder_attention_heads = 8
decoder_layers = 6
decoder_ffn_dim = 2048
decoder_attention_heads = 8
encoder_layerdrop = 0.0
decoder_layerdrop = 0.0
is_encoder_decoder = True
activation_function = 'relu'
d_model = 256
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
init_xavier_std = 1.0
auxiliary_loss = False
position_embedding_type = 'sine'
backbone = 'resnet50'
use_pretrained_backbone = True
dilation = False
class_cost = 1
bbox_cost = 5
giou_cost = 2
mask_loss_coefficient = 1
dice_loss_coefficient = 1
bbox_loss_coefficient = 5
giou_loss_coefficient = 2
eos_coefficient = 0.1
**kwargs
)
Parameters
use_timm_backbone (bool, optional, defaults to True) —
Whether or not to use the timm library for the backbone. If set to False, will use the AutoBackbone
API.
backbone_config (PretrainedConfig or dict, optional) —
The configuration of the backbone model. Only used in case use_timm_backbone is set to False in which
case it will default to ResNetConfig().
num_channels (int, optional, defaults to 3) —
The number of input channels.
num_queries (int, optional, defaults to 100) —
Number of object queries, i.e. detection slots. This is the maximal number of objects
TableTransformerModel can detect in a single image. For COCO, we recommend 100 queries.
d_model (int, optional, defaults to 256) —
Dimension of the layers.
encoder_layers (int, optional, defaults to 6) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 6) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 2048) —
Dimension of the “intermediate” (often named feed-forward) layer in decoder.
encoder_ffn_dim (int, optional, defaults to 2048) —
Dimension of the “intermediate” (often named feed-forward) layer in encoder.
activation_function (str or function, optional, defaults to "relu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
init_xavier_std (float, optional, defaults to 1) —
The scaling factor used for the Xavier initialization gain in the HM Attention map module.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
auxiliary_loss (bool, optional, defaults to False) —
Whether auxiliary decoding losses (loss at each decoder layer) are to be used.
position_embedding_type (str, optional, defaults to "sine") —
Type of position embeddings to be used on top of the image features. One of "sine" or "learned".
backbone (str, optional, defaults to "resnet50") —
Name of convolutional backbone to use in case use_timm_backbone = True. Supports any convolutional
backbone from the timm package. For a list of all available models, see this
page.
use_pretrained_backbone (bool, optional, defaults to True) —
Whether to use pretrained weights for the backbone. Only supported when use_timm_backbone = True.
dilation (bool, optional, defaults to False) —
Whether to replace stride with dilation in the last convolutional block (DC5). Only supported when
use_timm_backbone = True.
class_cost (float, optional, defaults to 1) —
Relative weight of the classification error in the Hungarian matching cost.
bbox_cost (float, optional, defaults to 5) —
Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost.
giou_cost (float, optional, defaults to 2) —
Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost.
mask_loss_coefficient (float, optional, defaults to 1) —
Relative weight of the Focal loss in the panoptic segmentation loss.
dice_loss_coefficient (float, optional, defaults to 1) —
Relative weight of the DICE/F-1 loss in the panoptic segmentation loss.
bbox_loss_coefficient (float, optional, defaults to 5) —
Relative weight of the L1 bounding box loss in the object detection loss.
giou_loss_coefficient (float, optional, defaults to 2) —
Relative weight of the generalized IoU loss in the object detection loss.
eos_coefficient (float, optional, defaults to 0.1) —
Relative classification weight of the ‘no-object’ class in the object detection loss.
This is the configuration class to store the configuration of a TableTransformerModel. It is used to
instantiate a Table Transformer model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Table Transformer
microsoft/table-transformer-detection architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import TableTransformerModel, TableTransformerConfig
# Initializing a Table Transformer microsoft/table-transformer-detection style configuration
configuration = TableTransformerConfig()
# Initializing a model from the microsoft/table-transformer-detection style configuration
model = TableTransformerModel(configuration)
# Accessing the model configuration
configuration = model.config
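If you would rather not depend on timm, the use_timm_backbone and backbone_config parameters described above can be combined; a minimal sketch, assuming the default ResNet backbone configuration is acceptable for your use case:
from transformers import ResNetConfig, TableTransformerConfig, TableTransformerModel
# use the AutoBackbone API with a ResNet configuration instead of a timm backbone
config = TableTransformerConfig(use_timm_backbone=False, backbone_config=ResNetConfig())
model = TableTransformerModel(config)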
TableTransformerModel
class transformers.TableTransformerModel
(
config: TableTransformerConfig
)
Parameters
config (TableTransformerConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare Table Transformer Model (consisting of a backbone and encoder-decoder Transformer) outputting raw
hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
pixel_values
pixel_mask = None
decoder_attention_mask = None
encoder_outputs = None
inputs_embeds = None
decoder_inputs_embeds = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.table_transformer.modeling_table_transformer.TableTransformerModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it.
Pixel values can be obtained using DetrImageProcessor. See DetrImageProcessor.__call__() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, num_queries), optional) —
Not used by default. Can be used to mask object queries.
encoder_outputs (tuple(tuple(torch.FloatTensor), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you
can choose to directly pass a flattened representation of an image.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) —
Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an
embedded representation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.table_transformer.modeling_table_transformer.TableTransformerModelOutput or tuple(torch.FloatTensor)
A transformers.models.table_transformer.modeling_table_transformer.TableTransformerModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TableTransformerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
intermediate_hidden_states (torch.FloatTensor of shape (config.decoder_layers, batch_size, sequence_length, hidden_size), optional, returned when config.auxiliary_loss=True) — Intermediate decoder activations, i.e. the output of each decoder layer, each of them gone through a
layernorm.
The TableTransformerModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, TableTransformerModel
from huggingface_hub import hf_hub_download
from PIL import Image
file_path = hf_hub_download(repo_id="nielsr/example-pdf", repo_type="dataset", filename="example_pdf.png")
image = Image.open(file_path).convert("RGB")
image_processor = AutoImageProcessor.from_pretrained("microsoft/table-transformer-detection")
model = TableTransformerModel.from_pretrained("microsoft/table-transformer-detection")
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
# forward pass
outputs = model(**inputs)
# the last hidden states are the final query embeddings of the Transformer decoder
# these are of shape (batch_size, num_queries, hidden_size)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 15, 256]
TableTransformerForObjectDetection
class transformers.TableTransformerForObjectDetection
(
config: TableTransformerConfig
)
Parameters
config (TableTransformerConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
Table Transformer Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on
top, for tasks such as COCO detection.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
pixel_values
pixel_mask = None
decoder_attention_mask = None
encoder_outputs = None
inputs_embeds = None
decoder_inputs_embeds = None
labels = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.table_transformer.modeling_table_transformer.TableTransformerObjectDetectionOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it.
Pixel values can be obtained using DetrImageProcessor. See DetrImageProcessor.__call__() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, num_queries), optional) —
Not used by default. Can be used to mask object queries.
encoder_outputs (tuple(tuple(torch.FloatTensor), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you
can choose to directly pass a flattened representation of an image.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) —
Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an
embedded representation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (List[Dict] of len (batch_size,), optional) —
Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the
following 2 keys: ‘class_labels’ and ‘boxes’ (the class labels and bounding boxes of an image in the batch
respectively). The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,) and the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4).
Returns
transformers.models.table_transformer.modeling_table_transformer.TableTransformerObjectDetectionOutput or tuple(torch.FloatTensor)
A transformers.models.table_transformer.modeling_table_transformer.TableTransformerObjectDetectionOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TableTransformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided) — Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a
bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized
scale-invariant IoU loss.
loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging.
logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries.
pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding
possible padding). You can use ~TableTransformerImageProcessor.post_process_object_detection to
retrieve the unnormalized bounding boxes.
auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True)
and labels are provided. It is a list of dictionaries containing the two above keys (logits and
pred_boxes) for each decoder layer.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
The TableTransformerForObjectDetection forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from huggingface_hub import hf_hub_download
from transformers import AutoImageProcessor, TableTransformerForObjectDetection
import torch
from PIL import Image
file_path = hf_hub_download(repo_id="nielsr/example-pdf", repo_type="dataset", filename="example_pdf.png")
image = Image.open(file_path).convert("RGB")
image_processor = AutoImageProcessor.from_pretrained("microsoft/table-transformer-detection")
model = TableTransformerForObjectDetection.from_pretrained("microsoft/table-transformer-detection")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# convert outputs (bounding boxes and class logits) to COCO API
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[
... 0
... ]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
... box = [round(i, 2) for i in box.tolist()]
... print(
... f"Detected {model.config.id2label[label.item()]} with confidence "
... f"{round(score.item(), 3)} at location {box}"
... )
Detected table with confidence 1.0 at location [202.1, 210.59, 1119.22, 385.09]
CPM
Overview
The CPM model was proposed in CPM: A Large-scale Generative Chinese Pre-trained Language Model by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin,
Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen,
Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
The abstract from the paper is the following:
Pre-trained Language Models (PLMs) have proven to be beneficial for various downstream NLP tasks. Recently, GPT-3,
with 175 billion parameters and 570GB training data, drew a lot of attention due to the capacity of few-shot (even
zero-shot) learning. However, applying GPT-3 to address Chinese NLP tasks is still challenging, as the training corpus
of GPT-3 is primarily English, and the parameters are not publicly available. In this technical report, we release the
Chinese Pre-trained Language Model (CPM) with generative pre-training on large-scale Chinese training data. To the best
of our knowledge, CPM, with 2.6 billion parameters and 100GB Chinese training data, is the largest Chinese pre-trained
language model, which could facilitate several downstream Chinese NLP tasks, such as conversation, essay generation,
cloze test, and language understanding. Extensive experiments demonstrate that CPM achieves strong performance on many
NLP tasks in the settings of few-shot (even zero-shot) learning.
This model was contributed by canwenxu. The original implementation can be found
here: https://github.com/TsinghuaAI/CPM-Generate
Note: We only have a tokenizer here, since the model architecture is the same as GPT-2.
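Because the architecture is GPT-2, the tokenizer is typically paired with a GPT-2-style model. The sketch below assumes the TsinghuaAI/CPM-Generate checkpoint referenced above hosts compatible weights, and that jieba and sentencepiece are installed:
from transformers import CpmTokenizer, GPT2LMHeadModel
tokenizer = CpmTokenizer.from_pretrained("TsinghuaAI/CPM-Generate")
model = GPT2LMHeadModel.from_pretrained("TsinghuaAI/CPM-Generate")
# Jieba-based pre-tokenization happens inside the tokenizer
inputs = tokenizer("清华大学", return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_length=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))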
CpmTokenizer
class transformers.CpmTokenizer
(
vocab_file
do_lower_case = False
remove_space = True
keep_accents = False
bos_token = '<s>'
eos_token = '</s>'
unk_token = '<unk>'
sep_token = '<sep>'
pad_token = '<pad>'
cls_token = '<cls>'
mask_token = '<mask>'
additional_special_tokens = ['<eop>', '<eod>']
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Runs pre-tokenization with the Jieba segmentation tool. It is used in CPM models.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. An XLNet sequence has the following format:
single sequence: X <sep> <cls>
pair of sequences: A <sep> B <sep> <cls>
convert_tokens_to_string
(
tokens
)
Converts a sequence of tokens (strings for sub-words) into a single string.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLNet
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
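A small sketch of calling this method, assuming the TsinghuaAI/CPM-Generate tokenizer files mentioned in the overview (the exact ids depend on the vocabulary):
from transformers import CpmTokenizer
tokenizer = CpmTokenizer.from_pretrained("TsinghuaAI/CPM-Generate")
ids_a = tokenizer.encode("你好", add_special_tokens=False)
ids_b = tokenizer.encode("世界", add_special_tokens=False)
# returns one segment id per token of the combined sequence, special tokens included
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)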
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
CpmTokenizerFast
class transformers.CpmTokenizerFast
(
vocab_file = None
tokenizer_file = None
do_lower_case = False
remove_space = True
keep_accents = False
bos_token = '<s>'
eos_token = '</s>'
unk_token = '<unk>'
sep_token = '<sep>'
pad_token = '<pad>'
cls_token = '<cls>'
mask_token = '<mask>'
additional_special_tokens = ['<eop>', '<eod>']
**kwargs
)
Runs pre-tokenization with the Jieba segmentation tool. It is used in CPM models.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. An XLNet sequence has the following format:
single sequence: X <sep> <cls>
pair of sequences: A <sep> B <sep> <cls>
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLNet
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
M2M100
Overview
The M2M100 model was proposed in Beyond English-Centric Multilingual Machine Translation by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky,
Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy
Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
The abstract from the paper is the following:
Existing work in translation demonstrated the potential of massively multilingual machine translation by training a
single model able to translate between any pair of languages. However, much of this work is English-Centric by training
only on data which was translated from or to English. While this is supported by large sources of training data, it
does not reflect translation needs worldwide. In this work, we create a true Many-to-Many multilingual translation
model that can translate directly between any pair of 100 languages. We build and open source a training dataset that
covers thousands of language directions with supervised data, created through large-scale mining. Then, we explore how
to effectively increase model capacity through a combination of dense scaling and language-specific sparse parameters
to create high quality models. Our focus on non-English-Centric models brings gains of more than 10 BLEU when directly
translating between non-English directions while performing competitively to the best single systems of WMT. We
open-source our scripts so that others may reproduce the data, evaluation, and final M2M-100 model.
This model was contributed by valhalla.
Training and Generation
M2M100 is a multilingual encoder-decoder (seq-to-seq) model primarily intended for translation tasks. As the model is
multilingual, it expects the sequences in a certain format: a special language id token is used as a prefix in both the
source and target text. The text format is [lang_code] X [eos], where lang_code is the source language
id for the source text and the target language id for the target text, and X is the source or target text.
The M2M100Tokenizer depends on sentencepiece so be sure to install it before running the
examples. To install sentencepiece run pip install sentencepiece.
Supervised Training
from transformers import M2M100Config, M2M100ForConditionalGeneration, M2M100Tokenizer
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="fr")
src_text = "Life is like a box of chocolates."
tgt_text = "La vie est comme une boîte de chocolat."
model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
loss = model(**model_inputs).loss # forward pass
Generation
M2M100 uses the eos_token_id as the decoder_start_token_id for generation with the target language id
being forced as the first generated token. To force the target language id as the first generated token, pass the
forced_bos_token_id parameter to the generate method. The following example shows how to translate from
Hindi to French and from Chinese to English using the facebook/m2m100_418M checkpoint.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
# translate Hindi to French
tokenizer.src_lang = "hi"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"La vie est comme une boîte de chocolat."
# translate Chinese to English
tokenizer.src_lang = "zh"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"Life is like a box of chocolate."
Documentation resources
Translation task guide
Summarization task guide
M2M100Config
class transformers.M2M100Config
(
vocab_size = 128112
max_position_embeddings = 1024
encoder_layers = 12
encoder_ffn_dim = 4096
encoder_attention_heads = 16
decoder_layers = 12
decoder_ffn_dim = 4096
decoder_attention_heads = 16
encoder_layerdrop = 0.05
decoder_layerdrop = 0.05
use_cache = True
is_encoder_decoder = True
activation_function = 'relu'
d_model = 1024
dropout = 0.1
attention_dropout = 0.1
activation_dropout = 0.0
init_std = 0.02
decoder_start_token_id = 2
scale_embedding = True
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 128112) —
Vocabulary size of the M2M100 model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling M2M100Model.
d_model (int, optional, defaults to 1024) —
Dimensionality of the layers and the pooler layer.
encoder_layers (int, optional, defaults to 12) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 12) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in decoder.
encoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in encoder.
activation_function (str or function, optional, defaults to "relu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
classifier_dropout (float, optional, defaults to 0.0) —
The dropout ratio for classifier.
max_position_embeddings (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (float, optional, defaults to 0.05) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.05) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a M2M100Model. It is used to instantiate an
M2M100 model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the M2M100
facebook/m2m100_418M architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import M2M100Config, M2M100Model
# Initializing a M2M100 facebook/m2m100_418M style configuration
configuration = M2M100Config()
# Initializing a model (with random weights) from the facebook/m2m100_418M style configuration
model = M2M100Model(configuration)
# Accessing the model configuration
configuration = model.config
M2M100Tokenizer
class transformers.M2M100Tokenizer
(
vocab_file
spm_file
src_lang = None
tgt_lang = None
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
pad_token = '<pad>'
unk_token = '<unk>'
language_codes = 'm2m100'
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
num_madeup_words = 8
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
spm_file (str) —
Path to SentencePiece file (generally has a .spm extension) that
contains the vocabulary.
src_lang (str, optional) —
A string representing the source language.
tgt_lang (str, optional) —
A string representing the target language.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
language_codes (str, optional, defaults to "m2m100") —
What language codes to use. Should be one of "m2m100" or "wmt21".
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Construct an M2M100 tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Examples:
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="ro")
src_text = " UN Chief Says There Is No Military Solution in Syria"
tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"
model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
outputs = model(**model_inputs) # should work
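The language codes configured on the tokenizer are also used at generation time. A minimal translation sketch, assuming the facebook/m2m100_418M checkpoint:
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="ro")
encoded = tokenizer("UN Chief Says There Is No Military Solution in Syria", return_tensors="pt")
# force the first generated token to be the Romanian language code
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("ro"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))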
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. An M2M100 sequence has the following format, where X represents the sequence:
input_ids (for encoder): [src_lang_code] X [eos]
decoder_input_ids (for decoder): [tgt_lang_code] X [eos]
BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a
separator.
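For illustration, a minimal sketch that inspects the special tokens added around a tokenized sequence (the exact tokens depend on the configured src_lang):
from transformers import M2M100Tokenizer
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en")
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
with_special = tokenizer.build_inputs_with_special_tokens(ids)
# the language code and </s> should appear among the returned tokens
print(tokenizer.convert_ids_to_tokens(with_special))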
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
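A minimal sketch showing how the returned mask distinguishes special tokens from regular tokens, assuming an already encoded sequence:
from transformers import M2M100Tokenizer
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en")
input_ids = tokenizer("Hello world")["input_ids"]
mask = tokenizer.get_special_tokens_mask(input_ids, already_has_special_tokens=True)
print(mask)  # 1 marks special tokens (language code, </s>), 0 marks regular tokens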
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) — The first tokenized sequence.
token_ids_1 (List[int], optional) — The second tokenized sequence.
Returns
List[int]
The token type ids.
Create the token type IDs corresponding to the sequences passed. What are token type
IDs?
Should be overridden in a subclass if the model has a special way of building those.
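For illustration, a hedged sketch; since M2M100 does not rely on token type ids, the result is expected to be a list of zeros:
from transformers import M2M100Tokenizer
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en")
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids)
print(token_type_ids)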
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
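The method writes the vocabulary and SentencePiece files to an existing directory and returns their paths. A minimal sketch (the directory name here is arbitrary):
import os
from transformers import M2M100Tokenizer
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
os.makedirs("m2m100_vocab", exist_ok=True)  # the target directory must already exist
saved_files = tokenizer.save_vocabulary("m2m100_vocab")
print(saved_files)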
M2M100Model
class transformers.M2M100Model
(
config: M2M100Config
)
Parameters
config (M2M100Config) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare M2M100 Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
M2M100 uses the eos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (M2M100Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The M2M100Model forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, M2M100Model
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100Model.from_pretrained("facebook/m2m100_418M")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
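The forward pass can also be driven more explicitly, for example by running the encoder once and feeding its output together with hand-built decoder_input_ids. A hedged sketch, assuming the decoder_start_token_id defined in the configuration:
from transformers import AutoTokenizer, M2M100Model
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100Model.from_pretrained("facebook/m2m100_418M")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
# run the encoder once and reuse its output for the decoder pass
encoder_outputs = model.get_encoder()(**inputs)
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
outputs = model(
    encoder_outputs=encoder_outputs,
    attention_mask=inputs["attention_mask"],
    decoder_input_ids=decoder_input_ids,
)
print(outputs.last_hidden_state.shape)  # (1, 1, hidden_size)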
M2M100ForConditionalGeneration
class transformers.M2M100ForConditionalGeneration
(
config: M2M100Config
)
Parameters
config (M2M100Config) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The M2M100 Model with a language modeling head, typically used for multilingual machine translation.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
M2M100 uses the eos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (M2M100Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The M2M100ForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Translation example:
from transformers import AutoTokenizer, M2M100ForConditionalGeneration
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = AutoTokenizer.from_pretrained("facebook/m2m100_418M")
text_to_translate = "Life is like a box of chocolates"
model_inputs = tokenizer(text_to_translate, return_tensors="pt")
# translate to French
gen_tokens = model.generate(**model_inputs, forced_bos_token_id=tokenizer.get_lang_id("fr"))
print(tokenizer.batch_decode(gen_tokens, skip_special_tokens=True))
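When labels are supplied (for example via text_target in the tokenizer), the forward pass also returns the language modeling loss described above. A minimal training-style sketch; the French target string is only illustrative:
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="fr")
inputs = tokenizer(
    "Life is like a box of chocolates",
    text_target="La vie est comme une boîte de chocolats",
    return_tensors="pt",
)
outputs = model(**inputs)  # the tokenized target is passed as labels
print(outputs.loss)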
CLAP
Overview
The CLAP model was proposed in Large Scale Contrastive Language-Audio pretraining with
feature fusion and keyword-to-caption augmentation by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.
CLAP (Contrastive Language-Audio Pretraining) is a neural network trained on a variety of (audio, text) pairs. It can be instructed to predict the most relevant text snippet, given an audio, without directly optimizing for the task. The CLAP model uses a Swin Transformer to get audio features from a log-Mel spectrogram input, and a RoBERTa model to get text features. Both the text and audio features are then projected to a latent space with identical dimension. The dot product between the projected audio and text features is then used as a similarity score.
The abstract from the paper is the following:
Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zeroshot setting and is able to obtain performance comparable to models’ results in the non-zero-shot setting. LAION-Audio-6
This model was contributed by Younes Belkada and Arthur Zucker.
The original code can be found here.
ClapConfig
class transformers.ClapConfig
(
text_config = None
audio_config = None
logit_scale_init_value = 14.285714285714285
projection_dim = 512
projection_hidden_act = 'relu'
initializer_factor = 1.0
**kwargs
)
Parameters
text_config (dict, optional) —
Dictionary of configuration options used to initialize ClapTextConfig.
audio_config (dict, optional) —
Dictionary of configuration options used to initialize ClapAudioConfig.
projection_dim (int, optional, defaults to 512) —
Dimensionality of the text and audio projection layers.
logit_scale_init_value (float, optional, defaults to 14.285714285714285) —
The initial value of the logit_scale parameter. The default is used as per the original CLAP implementation.
projection_hidden_act (str, optional, defaults to "relu") —
Activation function for the projection layers.
initializer_factor (float, optional, defaults to 1.0) —
Factor to scale the initialization of the model weights.
kwargs (optional) —
Dictionary of keyword arguments.
ClapConfig is the configuration class to store the configuration of a ClapModel. It is used to instantiate
a CLAP model according to the specified arguments, defining the text model and audio model configs. Instantiating a
configuration with the defaults will yield a similar configuration to that of the CLAP
laion/clap-htsat-fused architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import ClapConfig, ClapModel
# Initializing a ClapConfig with laion-ai/base style configuration
configuration = ClapConfig()
# Initializing a ClapModel (with random weights) from the laion-ai/base style configuration
model = ClapModel(configuration)
# Accessing the model configuration
configuration = model.config
# We can also initialize a ClapConfig from a ClapTextConfig and a ClapAudioConfig
from transformers import ClapTextConfig, ClapAudioConfig
# Initializing a ClapText and ClapAudioConfig configuration
config_text = ClapTextConfig()
config_audio = ClapAudioConfig()
config = ClapConfig.from_text_audio_configs(config_text, config_audio)
from_text_audio_configs
(
text_config: ClapTextConfig
audio_config: ClapAudioConfig
**kwargs
)
→
ClapConfig
Returns
ClapConfig
An instance of a configuration object
Instantiate a ClapConfig (or a derived class) from clap text model configuration and clap audio model
configuration.
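As suggested by the **kwargs in the signature, extra keyword arguments appear to be forwarded to the resulting ClapConfig, which makes it possible to tweak top-level fields such as projection_dim. A hedged sketch:
from transformers import ClapAudioConfig, ClapConfig, ClapTextConfig
text_config = ClapTextConfig()
audio_config = ClapAudioConfig()
# extra kwargs end up on the combined config
config = ClapConfig.from_text_audio_configs(text_config, audio_config, projection_dim=256)
print(config.projection_dim)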
ClapTextConfig
class transformers.ClapTextConfig
(
vocab_size = 50265
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 514
type_vocab_size = 1
initializer_factor = 1.0
layer_norm_eps = 1e-12
projection_dim = 512
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
position_embedding_type = 'absolute'
use_cache = True
projection_hidden_act = 'relu'
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50265) —
Vocabulary size of the CLAP model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling ClapTextModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 514) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 1) —
The vocabulary size of the token_type_ids passed when calling ClapTextModel.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
projection_hidden_act (str, optional, defaults to "relu") —
The non-linear activation function (function or string) in the projection layer. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
projection_dim (int, optional, defaults to 512) —
Dimension of the projection head of the ClapTextModelWithProjection.
This is the configuration class to store the configuration of a ClapTextModel. It is used to instantiate a CLAP
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the CLAP
laion/clap-htsat-fused architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import ClapTextConfig, ClapTextModel
# Initializing a CLAP text configuration
configuration = ClapTextConfig()
# Initializing a model (with random weights) from the configuration
model = ClapTextModel(configuration)
# Accessing the model configuration
configuration = model.config
ClapAudioConfig
class transformers.ClapAudioConfig
(
window_size = 8
num_mel_bins = 64
spec_size = 256
hidden_act = 'gelu'
patch_size = 4
patch_stride = [4, 4]
num_classes = 527
hidden_size = 768
projection_dim = 512
depths = [2, 2, 6, 2]
num_attention_heads = [4, 8, 16, 32]
enable_fusion = False
hidden_dropout_prob = 0.1
fusion_type = None
patch_embed_input_channels = 1
flatten_patch_embeds = True
patch_embeds_hidden_size = 96
enable_patch_layer_norm = True
drop_path_rate = 0.0
attention_probs_dropout_prob = 0.0
qkv_bias = True
mlp_ratio = 4.0
aff_block_r = 4
num_hidden_layers = 4
projection_hidden_act = 'relu'
layer_norm_eps = 1e-05
initializer_factor = 1.0
**kwargs
)
Parameters
window_size (int, optional, defaults to 8) —
Image size of the spectrogram
num_mel_bins (int, optional, defaults to 64) —
Number of mel features used per frame. Should correspond to the value used in the ClapProcessor class.
spec_size (int, optional, defaults to 256) —
Desired input size of the spectrogram that the model supports. It can be different from the output of the
ClapFeatureExtractor, in which case the input features will be resized. Corresponds to the image_size
of the audio models.
hidden_act (str, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
patch_size (int, optional, defaults to 4) —
Patch size for the audio spectrogram
patch_stride (list, optional, defaults to [4, 4]) —
Patch stride for the audio spectrogram
num_classes (int, optional, defaults to 527) —
Number of classes used for the head training
hidden_size (int, optional, defaults to 768) —
Hidden size of the output of the audio encoder. Corresponds to the dimension of the penultimate layer’s
output, which is sent to the projection MLP layer.
projection_dim (int, optional, defaults to 512) —
Hidden size of the projection layer.
depths (list, optional, defaults to [2, 2, 6, 2]) —
Depths used for the Swin Layers of the audio model
num_attention_heads (list, optional, defaults to [4, 8, 16, 32]) —
Number of attention heads used for the Swin Layers of the audio model
enable_fusion (bool, optional, defaults to False) —
Whether or not to enable patch fusion. This is the main contribution of the authors, and should give the
best results.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the encoder.
fusion_type ([type], optional) —
Fusion type used for the patch fusion.
patch_embed_input_channels (int, optional, defaults to 1) —
Number of channels used for the input spectrogram
flatten_patch_embeds (bool, optional, defaults to True) —
Whether or not to flatten the patch embeddings
patch_embeds_hidden_size (int, optional, defaults to 96) —
Hidden size of the patch embeddings. It is used as the number of output channels.
enable_patch_layer_norm (bool, optional, defaults to True) —
Whether or not to enable layer normalization for the patch embeddings
drop_path_rate (float, optional, defaults to 0.0) —
Drop path rate for the patch fusion
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
qkv_bias (bool, optional, defaults to True) —
Whether or not to add a bias to the query, key, value projections.
mlp_ratio (float, optional, defaults to 4.0) —
Ratio of the mlp hidden dim to embedding dim.
aff_block_r (int, optional, defaults to 4) —
downsize_ratio used in the AudioFF block
num_hidden_layers (int, optional, defaults to 4) —
Number of hidden layers in the Transformer encoder.
projection_hidden_act (str, optional, defaults to "relu") —
The non-linear activation function (function or string) in the projection layer. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
layer_norm_eps (float, optional, defaults to 1e-05) —
The epsilon used by the layer normalization layers.
initializer_factor (float, optional, defaults to 1.0) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
This is the configuration class to store the configuration of a ClapAudioModel. It is used to instantiate a
CLAP audio encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the audio encoder of the CLAP
laion/clap-htsat-fused architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import ClapAudioConfig, ClapAudioModel
# Initializing a ClapAudioConfig with laion/clap-htsat-fused style configuration
configuration = ClapAudioConfig()
# Initializing a ClapAudioModel (with random weights) from the laion/clap-htsat-fused style configuration
model = ClapAudioModel(configuration)
# Accessing the model configuration
configuration = model.config
ClapFeatureExtractor
class transformers.ClapFeatureExtractor
(
feature_size = 64
sampling_rate = 48000
hop_length = 480
max_length_s = 10
fft_window_size = 1024
padding_value = 0.0
return_attention_mask = False
frequency_min: float = 0
frequency_max: float = 14000
top_db: int = None
truncation: str = 'fusion'
padding: str = 'repeatpad'
**kwargs
)
Parameters
feature_size (int, defaults to 64) —
The feature dimension of the extracted Mel spectrograms. This corresponds to the number of mel filters
(n_mels).
sampling_rate (int, defaults to 48_000) —
The sampling rate at which the audio files should be digitized, expressed in hertz (Hz). This only serves
to warn users if the audio fed to the feature extractor does not have the same sampling rate.
hop_length (int, defaults to 480) —
Length of the overlapping windows for the STFT used to obtain the Mel Spectrogram. The audio will be split
into smaller frames with a step of hop_length between each frame.
max_length_s (int, defaults to 10) —
The maximum input length of the model in seconds. This is used to pad the audio.
fft_window_size (int, defaults to 1024) —
Size of the window (in samples) on which the Fourier transform is applied. This controls the frequency
resolution of the spectrogram. For example, 1024 means that the Fourier transform is computed on windows of 1024 samples.
padding_value (float, optional, defaults to 0.0) —
Padding value used to pad the audio. Should correspond to silences.
return_attention_mask (bool, optional, defaults to False) —
Whether or not the model should return the attention masks corresponding to the input.
frequency_min (float, optional, defaults to 0) —
The lowest frequency of interest. The STFT will not be computed for values below this.
frequency_max (float, optional, defaults to 14_000) —
The highest frequency of interest. The STFT will not be computed for values above this.
top_db (float, optional) —
The highest decibel value used to convert the mel spectrogram to the log scale. For more details see the
audio_utils.power_to_db function
truncation (str, optional, defaults to "fusion") —
Truncation pattern for long audio inputs. Two patterns are available:
fusion will use _random_mel_fusion, which stacks 3 random crops from the mel spectrogram and a
downsampled version of the entire mel spectrogram.
If config.fusion is set to True, shorter audios also need to return 4 mels, which will just be a copy
of the original mel obtained from the padded audio.
rand_trunc will select a random crop of the mel spectrogram.
padding (str, optional, defaults to "repeatpad") —
Padding pattern for shorter audio inputs. Three patterns were originally implemented:
repeatpad: the audio is repeated, and then padded to fit the max_length.
repeat: the audio is repeated and then cut to fit the max_length
pad: the audio is padded.
Constructs a CLAP feature extractor.
This feature extractor inherits from SequenceFeatureExtractor which contains
most of the main methods. Users should refer to this superclass for more information regarding those methods.
This class extracts mel-filter bank features from raw speech using a custom numpy implementation of the Short Time
Fourier Transform (STFT), which should match PyTorch’s torch.stft.
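A minimal sketch of feature extraction on synthetic audio, assuming the laion/clap-htsat-unfused checkpoint and its 48 kHz sampling rate:
import numpy as np
from transformers import ClapFeatureExtractor
feature_extractor = ClapFeatureExtractor.from_pretrained("laion/clap-htsat-unfused")
waveform = np.random.rand(48_000).astype(np.float32)  # one second of synthetic mono audio
inputs = feature_extractor(waveform, sampling_rate=48_000, return_tensors="pt")
print(inputs["input_features"].shape)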
to_dict
(
)
→
Dict[str, Any]
Returns
Dict[str, Any]
Dictionary of all the attributes that make up this configuration instance, except for the
mel filter banks, which do not need to be saved or printed as they are too long.
Serializes this instance to a Python dictionary.
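For illustration, a short sketch that serializes the feature extractor and inspects the resulting keys (the mel filter banks are left out, as noted above):
from transformers import ClapFeatureExtractor
feature_extractor = ClapFeatureExtractor.from_pretrained("laion/clap-htsat-unfused")
config_dict = feature_extractor.to_dict()
print(sorted(config_dict.keys()))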
ClapProcessor
class transformers.ClapProcessor
(
feature_extractor
tokenizer
)
Parameters
feature_extractor (ClapFeatureExtractor) —
The audio processor is a required input.
tokenizer (RobertaTokenizerFast) —
The tokenizer is a required input.
Constructs a CLAP processor which wraps a CLAP feature extractor and a RoBERTa tokenizer into a single processor.
ClapProcessor offers all the functionalities of ClapFeatureExtractor and RobertaTokenizerFast. See the
__call__() and decode() for more information.
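A minimal sketch combining text and synthetic audio through the processor, assuming the laion/clap-htsat-unfused checkpoint:
import numpy as np
from transformers import ClapProcessor
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")
audio = np.random.rand(48_000).astype(np.float32)
texts = ["the sound of a cat", "the sound of a dog"]
inputs = processor(text=texts, audios=audio, sampling_rate=48_000, return_tensors="pt", padding=True)
print(inputs.keys())  # tokenized text plus the extracted audio features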
batch_decode
(
*args
**kwargs
)
This method forwards all its arguments to RobertaTokenizerFast’s batch_decode(). Please
refer to the docstring of this method for more information.
decode
(
*args
**kwargs
)
This method forwards all its arguments to RobertaTokenizerFast’s decode(). Please refer
to the docstring of this method for more information.
ClapModel
class transformers.ClapModel
(
config: ClapConfig
)
Parameters
config (ClapConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
input_features: typing.Optional[torch.FloatTensor] = None
is_longer: typing.Optional[torch.BoolTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
return_loss: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.clap.modeling_clap.ClapOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
input_features (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Input audio features. This should be returned by the ClapFeatureExtractor class that you can also
retrieve from AutoFeatureExtractor. See ClapFeatureExtractor.__call__() for details.
return_loss (bool, optional) —
Whether or not to return the contrastive loss.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.clap.modeling_clap.ClapOutput or tuple(torch.FloatTensor)
A transformers.models.clap.modeling_clap.ClapOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.clap.configuration_clap.ClapConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for audio-text similarity.
logits_per_audio (torch.FloatTensor of shape (audio_batch_size, text_batch_size)) — The scaled dot product scores between audio_embeds and text_embeds. This represents the audio-text
similarity scores.
logits_per_text (torch.FloatTensor of shape (text_batch_size, audio_batch_size)) — The scaled dot product scores between text_embeds and audio_embeds. This represents the text-audio
similarity scores.
text_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of ClapTextModel.
audio_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The audio embeddings obtained by applying the projection layer to the pooled output of ClapAudioModel.
text_model_output (BaseModelOutputWithPooling) — The output of the ClapTextModel.
audio_model_output (BaseModelOutputWithPooling) — The output of the ClapAudioModel.
The ClapModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from datasets import load_dataset
from transformers import AutoProcessor, ClapModel
dataset = load_dataset("ashraq/esc50")
audio_sample = dataset["train"]["audio"][0]["array"]
model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = AutoProcessor.from_pretrained("laion/clap-htsat-unfused")
input_text = ["Sound of a dog", "Sound of vaccum cleaner"]
inputs = processor(text=input_text, audios=audio_sample, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_audio = outputs.logits_per_audio # this is the audio-text similarity score
probs = logits_per_audio.softmax(dim=-1) # we can take the softmax to get the label probabilities
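Continuing the example above, the most likely caption can then be read off the probabilities (an illustrative follow-up, not part of the model API):
predicted_label = input_text[probs.argmax(dim=-1).item()]
print(predicted_label)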
get_text_features
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
text_features (torch.FloatTensor of shape (batch_size, output_dim)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
text_features (torch.FloatTensor of shape (batch_size, output_dim)
The text embeddings obtained by
applying the projection layer to the pooled output of ClapTextModel.
The ClapModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, ClapModel
model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
tokenizer = AutoTokenizer.from_pretrained("laion/clap-htsat-unfused")
inputs = tokenizer(["the sound of a cat", "the sound of a dog"], padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
get_audio_features
(
input_features: typing.Optional[torch.Tensor] = None
is_longer: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
audio_features (torch.FloatTensor of shape (batch_size, output_dim)
Parameters
input_features (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Input audio features. This should be returned by the ClapFeatureExtractor class that you can also
retrieve from AutoFeatureExtractor. See ClapFeatureExtractor.__call__() for details.
is_longer (torch.FloatTensor, of shape (batch_size, 1), optional) —
Whether the audio clip is longer than max_length. If True, a feature fusion will be enabled to enhance
the features.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
audio_features (torch.FloatTensor of shape (batch_size, output_dim)
The audio embeddings obtained by
applying the projection layer to the pooled output of ClapAudioModel.
The ClapModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoFeatureExtractor, ClapModel
import torch
model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
feature_extractor = AutoFeatureExtractor.from_pretrained("laion/clap-htsat-unfused")
random_audio = torch.rand((16_000))
inputs = feature_extractor(random_audio, return_tensors="pt")
audio_features = model.get_audio_features(**inputs)
ClapTextModel
class transformers.ClapTextModel
(
config
add_pooling_layer = True
)
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in
[Attention is all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar,
Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both is_decoder and
add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional):
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional):
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)):
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional):
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
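For illustration, a hedged sketch that instantiates a randomly initialized text encoder from a ClapTextConfig and runs a forward pass:
import torch
from transformers import ClapTextConfig, ClapTextModel
config = ClapTextConfig()
model = ClapTextModel(config)
input_ids = torch.tensor([[0, 31414, 232, 2]])  # illustrative token ids, not produced by a real tokenizer
outputs = model(input_ids=input_ids)
print(outputs.last_hidden_state.shape)  # (1, 4, hidden_size)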
ClapTextModelWithProjection
class transformers.ClapTextModelWithProjection
(
config: ClapTextConfig
)
Parameters
config (ClapConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CLAP Text Model with a projection layer on top (a linear layer on top of the pooled output).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.clap.modeling_clap.ClapTextModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.clap.modeling_clap.ClapTextModelOutput or tuple(torch.FloatTensor)
A transformers.models.clap.modeling_clap.ClapTextModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.clap.configuration_clap.ClapTextConfig'>) and inputs.
text_embeds (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when model is initialized with with_projection=True) — The text embeddings obtained by applying the projection layer to the pooler_output.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ClapTextModelWithProjection forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, ClapTextModelWithProjection
model = ClapTextModelWithProjection.from_pretrained("laion/clap-htsat-unfused")
tokenizer = AutoTokenizer.from_pretrained("laion/clap-htsat-unfused")
inputs = tokenizer(["a sound of a cat", "a sound of a dog"], padding=True, return_tensors="pt")
outputs = model(**inputs)
text_embeds = outputs.text_embeds
ClapAudioModel
class transformers.ClapAudioModel
<
source
>
(
config: ClapAudioConfig
)
forward
<
source
>
(
input_features: typing.Optional[torch.FloatTensor] = None
is_longer: typing.Optional[torch.BoolTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_features (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Input audio features. This should be returned by the ClapFeatureExtractor class that you can also
retrieve from AutoFeatureExtractor. See ClapFeatureExtractor.__call__() for details.
is_longer (torch.BoolTensor of shape (batch_size, 1), optional) —
Whether the audio clip is longer than max_length. If True, a feature fusion will be enabled to enhance
the features.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.clap.configuration_clap.ClapAudioConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ClapAudioModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from datasets import load_dataset
from transformers import AutoProcessor, ClapAudioModel
dataset = load_dataset("ashraq/esc50")
audio_sample = dataset["train"]["audio"][0]["array"]
model = ClapAudioModel.from_pretrained("laion/clap-htsat-fused")
processor = AutoProcessor.from_pretrained("laion/clap-htsat-fused")
inputs = processor(audios=audio_sample, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
ClapAudioModelWithProjection
class transformers.ClapAudioModelWithProjection
<
source
>
(
config: ClapAudioConfig
)
Parameters
config (ClapAudioConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CLAP Audio Model with a projection layer on top (a linear layer on top of the pooled output).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_features: typing.Optional[torch.FloatTensor] = None
is_longer: typing.Optional[torch.BoolTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.clap.modeling_clap.ClapAudioModelOutput or tuple(torch.FloatTensor)
Parameters
input_features (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Input audio features. This should be returned by the ClapFeatureExtractor class that you can also
retrieve from AutoFeatureExtractor. See ClapFeatureExtractor.__call__() for details.
is_longer (torch.BoolTensor of shape (batch_size, 1), optional) —
Whether the audio clip is longer than max_length. If True, a feature fusion will be enabled to enhance
the features.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.clap.modeling_clap.ClapAudioModelOutput or tuple(torch.FloatTensor)
A transformers.models.clap.modeling_clap.ClapAudioModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.clap.configuration_clap.ClapAudioConfig'>) and inputs.
audio_embeds (torch.FloatTensor of shape (batch_size, hidden_size)) — The audio embeddings obtained by applying the projection layer to the pooler_output.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The ClapAudioModelWithProjection forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from datasets import load_dataset
from transformers import ClapAudioModelWithProjection, ClapProcessor
model = ClapAudioModelWithProjection.from_pretrained("laion/clap-htsat-fused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")
dataset = load_dataset("ashraq/esc50")
audio_sample = dataset["train"]["audio"][0]["array"]
inputs = processor(audios=audio_sample, return_tensors="pt")
outputs = model(**inputs)
audio_embeds = outputs.audio_embeds
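Putting the text and audio towers together, the full ClapModel can score an audio clip against candidate captions. The following is a hedged sketch under the same checkpoint and dataset used above (the candidate captions are illustrative):
import torch
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor

model = ClapModel.from_pretrained("laion/clap-htsat-fused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")
dataset = load_dataset("ashraq/esc50")
audio_sample = dataset["train"]["audio"][0]["array"]
candidate_texts = ["a sound of a dog", "a sound of rain"]  # illustrative labels
inputs = processor(text=candidate_texts, audios=audio_sample, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_audio.softmax(dim=-1)  # similarity of the audio to each caption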
MobileViTV2
Overview
The MobileViTV2 model was proposed in Separable Self-attention for Mobile Vision Transformers by Sachin Mehta and Mohammad Rastegari.
MobileViTV2 is the second version of MobileViT, constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention.
The abstract from the paper is the following:
Mobile vision transformers (MobileViT) can achieve state-of-the-art performance across several mobile vision tasks, including classification and detection. Though these models have fewer parameters, they have high latency as compared to convolutional neural network-based models. The main efficiency bottleneck in MobileViT is the multi-headed self-attention (MHA) in transformers, which requires O(k²) time complexity with respect to the number of tokens (or patches) k. Moreover, MHA requires costly operations (e.g., batch-wise matrix multiplication) for computing self-attention, impacting latency on resource-constrained devices. This paper introduces a separable self-attention method with linear complexity, i.e. O(k). A simple yet effective characteristic of the proposed method is that it uses element-wise operations for computing self-attention, making it a good choice for resource-constrained devices. The improved model, MobileViTV2, is state-of-the-art on several mobile vision tasks, including ImageNet object classification and MS-COCO object detection. With about three million parameters, MobileViTV2 achieves a top-1 accuracy of 75.6% on the ImageNet dataset, outperforming MobileViT by about 1% while running 3.2× faster on a mobile device.
Tips:
MobileViTV2 is more like a CNN than a Transformer model. It does not work on sequence data but on batches of images. Unlike ViT, there are no embeddings. The backbone model outputs a feature map.
One can use MobileViTImageProcessor to prepare images for the model (see the preprocessing sketch below). Note that if you do your own preprocessing, the pretrained checkpoints expect images to be in BGR pixel order (not RGB).
The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes).
The segmentation model uses a DeepLabV3 head. The available semantic segmentation checkpoints are pre-trained on PASCAL VOC.
This model was contributed by shehan97.
The original code can be found here.
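A minimal sketch of the preprocessing tip above, assuming the apple/mobilevitv2-1.0-imagenet1k-256 checkpoint used in the examples below (MobileViTImageProcessor takes care of resizing and the expected channel order):
from PIL import Image
import requests
from transformers import MobileViTImageProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = MobileViTImageProcessor.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256")
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values  # ready for MobileViTV2Model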
MobileViTV2Config
class transformers.MobileViTV2Config
<
source
>
(
num_channels = 3
image_size = 256
patch_size = 2
expand_ratio = 2.0
hidden_act = 'swish'
conv_kernel_size = 3
output_stride = 32
classifier_dropout_prob = 0.1
initializer_range = 0.02
layer_norm_eps = 1e-05
aspp_out_channels = 512
atrous_rates = [6, 12, 18]
aspp_dropout_prob = 0.1
semantic_loss_ignore_index = 255
n_attn_blocks = [2, 4, 3]
base_attn_unit_dims = [128, 192, 256]
width_multiplier = 1.0
ffn_multiplier = 2
attn_dropout = 0.0
ffn_dropout = 0.0
**kwargs
)
Parameters
num_channels (int, optional, defaults to 3) —
The number of input channels.
image_size (int, optional, defaults to 256) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 2) —
The size (resolution) of each patch.
expand_ratio (float, optional, defaults to 2.0) —
Expansion factor for the MobileNetv2 layers.
hidden_act (str or function, optional, defaults to "swish") —
The non-linear activation function (function or string) in the Transformer encoder and convolution layers.
conv_kernel_size (int, optional, defaults to 3) —
The size of the convolutional kernel in the MobileViTV2 layer.
output_stride (int, optional, defaults to 32) —
The ratio of the spatial resolution of the output to the resolution of the input image.
classifier_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for attached classifiers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
aspp_out_channels (int, optional, defaults to 512) —
Number of output channels used in the ASPP layer for semantic segmentation.
atrous_rates (List[int], optional, defaults to [6, 12, 18]) —
Dilation (atrous) factors used in the ASPP layer for semantic segmentation.
aspp_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the ASPP layer for semantic segmentation.
semantic_loss_ignore_index (int, optional, defaults to 255) —
The index that is ignored by the loss function of the semantic segmentation model.
n_attn_blocks (List[int], optional, defaults to [2, 4, 3]) —
The number of attention blocks in each MobileViTV2Layer.
base_attn_unit_dims (List[int], optional, defaults to [128, 192, 256]) —
The base multiplier for dimensions of attention blocks in each MobileViTV2Layer.
width_multiplier (float, optional, defaults to 1.0) —
The width multiplier for MobileViTV2.
ffn_multiplier (int, optional, defaults to 2) —
The FFN multiplier for MobileViTV2.
attn_dropout (float, optional, defaults to 0.0) —
The dropout in the attention layer.
ffn_dropout (float, optional, defaults to 0.0) —
The dropout between FFN layers.
This is the configuration class to store the configuration of a MobileViTV2Model. It is used to instantiate a
MobileViTV2 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the MobileViTV2
apple/mobilevitv2-1.0 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import MobileViTV2Config, MobileViTV2Model
# Initializing a mobilevitv2-small style configuration
configuration = MobileViTV2Config()
# Initializing a model from the mobilevitv2-small style configuration
model = MobileViTV2Model(configuration)
# Accessing the model configuration
configuration = model.config
MobileViTV2Model
class transformers.MobileViTV2Model
<
source
>
(
config: MobileViTV2Config
expand_output: bool = True
)
Parameters
config (MobileViTV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MobileViTV2 model outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
MobileViTImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileViTV2Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The MobileViTV2Model forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, MobileViTV2Model
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256")
model = MobileViTV2Model.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 512, 8, 8]
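As a hedged extension of the example above, the intermediate feature maps of each stage can be requested with output_hidden_states=True (the number and shapes of the maps depend on the configuration and the input resolution):
with torch.no_grad():
...     outputs = model(**inputs, output_hidden_states=True)
len(outputs.hidden_states)  # one feature map per stage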
MobileViTV2ForImageClassification
class transformers.MobileViTV2ForImageClassification
<
source
>
(
config: MobileViTV2Config
)
Parameters
config (MobileViTV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileViTV2 model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
MobileViTImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileViTV2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
The MobileViTV2ForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, MobileViTV2ForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256")
model = MobileViTV2ForImageClassification.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
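For fine-tuning, the labels argument described above can be passed together with the pixel values to obtain the classification loss directly. A sketch that reuses the objects from the example (the label is arbitrary and only for illustration):
labels = torch.tensor([predicted_label])  # stand-in label, illustration only
outputs = model(**inputs, labels=labels)
loss = outputs.loss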
MobileViTV2ForSemanticSegmentation
class transformers.MobileViTV2ForSemanticSegmentation
<
source
>
(
config: MobileViTV2Config
)
Parameters
config (MobileViTV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileViTV2 model with a semantic segmentation head on top, e.g. for Pascal VOC.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
MobileViTImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, height, width), optional) —
Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SemanticSegmenterOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileViTV2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel.
The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is
to avoid doing two interpolations and losing quality when a user needs to resize the logits to the
original image size as post-processing. You should always check your logits shape and resize as needed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, patch_size, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MobileViTV2ForSemanticSegmentation forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileViTV2ForSemanticSegmentation
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256")
model = MobileViTV2ForSemanticSegmentation.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
# logits are of shape (batch_size, num_labels, height, width)
logits = outputs.logits
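Because the logits are smaller than the input image (see the note on logits above), a hedged post-processing sketch upsamples them to the original resolution with torch.nn.functional.interpolate:
import torch.nn.functional as F
# image.size is (width, height) for a PIL image; interpolate expects (height, width)
upsampled_logits = F.interpolate(logits, size=image.size[::-1], mode="bilinear", align_corners=False)
segmentation_map = upsampled_logits.argmax(dim=1)  # (batch_size, height, width) class indices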
OpenAI GPT2
Overview
The OpenAI GPT-2 model was proposed in Language Models are Unsupervised Multitask Learners by Alec
Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever from OpenAI. It’s a causal (unidirectional)
transformer pretrained using language modeling on a very large corpus of ~40 GB of text data.
The abstract from the paper is the following:
GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset[1] of 8 million
web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some
text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks
across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than
10X the amount of data.
Tips:
GPT-2 is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than
the left.
GPT-2 was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next
token in a sequence. Leveraging this feature allows GPT-2 to generate syntactically coherent text as it can be
observed in the run_generation.py example script.
The model can take past_key_values (for PyTorch) or past (for TF) as input, which are the previously computed
key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing
pre-computed values in the context of text generation (see the sketch after these tips). For PyTorch, see the past_key_values argument of the
GPT2Model.forward() method, or for TF the past argument of the
TFGPT2Model.call() method for more information on its usage.
Enabling the scale_attn_by_inverse_layer_idx and reorder_and_upcast_attn flags will apply the training stability
improvements from Mistral (for PyTorch only).
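A hedged sketch of the past_key_values tip above, using a plain greedy loop (model.generate() handles caching automatically; this is only an illustration of reusing the cache):
import torch
from transformers import AutoTokenizer, GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
generated = tokenizer("Hello, my dog is", return_tensors="pt").input_ids
input_ids = generated
past_key_values = None
for _ in range(5):
    with torch.no_grad():
        out = model(input_ids=input_ids, past_key_values=past_key_values, use_cache=True)
    past_key_values = out.past_key_values  # cached key/value pairs, reused in the next step
    next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
    generated = torch.cat([generated, next_token], dim=-1)
    input_ids = next_token  # only the new token needs to be fed once the cache is used
print(tokenizer.decode(generated[0]))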
Write With Transformer is a webapp created and hosted by
Hugging Face showcasing the generative capabilities of several models. GPT-2 is one of them and is available in five
different sizes: small, medium, large, xl and a distilled version of the small checkpoint: distilgpt-2.
This model was contributed by thomwolf. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT2. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Text Generation
A blog on how to Finetune a non-English GPT-2 Model with Hugging Face.
A blog on How to generate text: using different decoding methods for language generation with Transformers with GPT-2.
A blog on Training CodeParrot 🦜 from Scratch, a large GPT-2 model.
A blog on Faster Text Generation with TensorFlow and XLA with GPT-2.
A blog on How to train a Language Model with Megatron-LM with a GPT-2 model.
A notebook on how to finetune GPT2 to generate lyrics in the style of your favorite artist. 🌎
A notebook on how to finetune GPT2 to generate tweets in the style of your favorite Twitter user. 🌎
Causal language modeling chapter of the 🤗 Hugging Face Course.
GPT2LMHeadModel is supported by this causal language modeling example script, text generation example script, and notebook.
TFGPT2LMHeadModel is supported by this causal language modeling example script and notebook.
FlaxGPT2LMHeadModel is supported by this causal language modeling example script and notebook.
Text classification task guide
Token classification task guide
Causal language modeling task guide
GPT2Config
class transformers.GPT2Config
<
source
>
(
vocab_size = 50257
n_positions = 1024
n_embd = 768
n_layer = 12
n_head = 12
n_inner = None
activation_function = 'gelu_new'
resid_pdrop = 0.1
embd_pdrop = 0.1
attn_pdrop = 0.1
layer_norm_epsilon = 1e-05
initializer_range = 0.02
summary_type = 'cls_index'
summary_use_proj = True
summary_activation = None
summary_proj_to_labels = True
summary_first_dropout = 0.1
scale_attn_weights = True
use_cache = True
bos_token_id = 50256
eos_token_id = 50256
scale_attn_by_inverse_layer_idx = False
reorder_and_upcast_attn = False
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50257) —
Vocabulary size of the GPT-2 model. Defines the number of different tokens that can be represented by the
input_ids passed when calling GPT2Model or TFGPT2Model.
n_positions (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
n_embd (int, optional, defaults to 768) —
Dimensionality of the embeddings and hidden states.
n_layer (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
n_head (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
n_inner (int, optional, defaults to None) —
Dimensionality of the inner feed-forward layers. None will set it to 4 times n_embd.
activation_function (str, optional, defaults to "gelu_new") —
Activation function, to be selected in the list ["relu", "silu", "gelu", "tanh", "gelu_new"].
resid_pdrop (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
embd_pdrop (float, optional, defaults to 0.1) —
The dropout ratio for the embeddings.
attn_pdrop (float, optional, defaults to 0.1) —
The dropout ratio for the attention.
layer_norm_epsilon (float, optional, defaults to 1e-5) —
The epsilon to use in the layer normalization layers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
summary_type (str, optional, defaults to "cls_index") —
Argument used when doing sequence summary, used in the models GPT2DoubleHeadsModel and
TFGPT2DoubleHeadsModel.
Has to be one of the following options:
"last": Take the last token hidden state (like XLNet).
"first": Take the first token hidden state (like BERT).
"mean": Take the mean of all tokens hidden states.
"cls_index": Supply a Tensor of classification token position (like GPT/GPT-2).
"attn": Not implemented now, use multi-head attention.
summary_use_proj (bool, optional, defaults to True) —
Argument used when doing sequence summary, used in the models GPT2DoubleHeadsModel and
TFGPT2DoubleHeadsModel.
Whether or not to add a projection after the vector extraction.
summary_activation (str, optional) —
Argument used when doing sequence summary. Used for the multiple choice head in
GPT2DoubleHeadsModel.
Pass "tanh" for a tanh activation to the output, any other value will result in no activation.
summary_proj_to_labels (bool, optional, defaults to True) —
Argument used when doing sequence summary, used in the models GPT2DoubleHeadsModel and
TFGPT2DoubleHeadsModel.
Whether the projection outputs should have config.num_labels or config.hidden_size classes.
summary_first_dropout (float, optional, defaults to 0.1) —
Argument used when doing sequence summary, used in the models GPT2DoubleHeadsModel and
TFGPT2DoubleHeadsModel.
The dropout ratio to be used after the projection and activation.
scale_attn_weights (bool, optional, defaults to True) —
Scale attention weights by dividing by sqrt(hidden_size).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
scale_attn_by_inverse_layer_idx (bool, optional, defaults to False) —
Whether to additionally scale attention weights by 1 / (layer_idx + 1).
reorder_and_upcast_attn (bool, optional, defaults to False) —
Whether to scale keys (K) prior to computing attention (dot-product) and upcast the attention
dot-product/softmax to float32 when training with mixed precision.
This is the configuration class to store the configuration of a GPT2Model or a TFGPT2Model. It is used to
instantiate a GPT-2 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the GPT-2
gpt2 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import GPT2Config, GPT2Model
# Initializing a GPT2 configuration
configuration = GPT2Config()
# Initializing a model (with random weights) from the configuration
model = GPT2Model(configuration)
# Accessing the model configuration
configuration = model.config
GPT2Tokenizer
class transformers.GPT2Tokenizer
<
source
>
(
vocab_file
merges_file
errors = 'replace'
unk_token = '<|endoftext|>'
bos_token = '<|endoftext|>'
eos_token = '<|endoftext|>'
pad_token = None
add_prefix_space = False
add_bos_token = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
unk_token (str, optional, defaults to <|endoftext|>) —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (str, optional, defaults to <|endoftext|>) —
The beginning of sequence token.
eos_token (str, optional, defaults to <|endoftext|>) —
The end of sequence token.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word (the GPT2 tokenizer detects the beginning of words by the preceding space).
Construct a GPT-2 tokenizer. Based on byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without space) or not:
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer("Hello world")["input_ids"]
[15496, 995]
tokenizer(" Hello world")["input_ids"]
[18435, 995]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
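To illustrate the is_split_into_words behavior mentioned above, a short sketch reusing the tokenizer from the example (each word is encoded as if it were preceded by a space):
tokenizer(["Hello", "world"], is_split_into_words=True)["input_ids"]
[18435, 995]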
save_vocabulary
<
source
>
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
GPT2TokenizerFast
class transformers.GPT2TokenizerFast
<
source
>
(
vocab_file = None
merges_file = None
tokenizer_file = None
unk_token = '<|endoftext|>'
bos_token = '<|endoftext|>'
eos_token = '<|endoftext|>'
add_prefix_space = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
unk_token (str, optional, defaults to <|endoftext|>) —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (str, optional, defaults to <|endoftext|>) —
The beginning of sequence token.
eos_token (str, optional, defaults to <|endoftext|>) —
The end of sequence token.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word (the GPT2 tokenizer detects the beginning of words by the preceding space).
trim_offsets (bool, optional, defaults to True) —
Whether or not the post-processing step should trim offsets to avoid including whitespaces.
Construct a “fast” GPT-2 tokenizer (backed by HuggingFace’s tokenizers library). Based on byte-level
Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without space) or not:
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer("Hello world")["input_ids"]
[15496, 995]
tokenizer(" Hello world")["input_ids"]
[18435, 995]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer, but since
the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
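A hedged sketch of the requirement above: pre-tokenized input works with the fast tokenizer once add_prefix_space=True is set at instantiation:
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2", add_prefix_space=True)
tokenizer(["Hello", "world"], is_split_into_words=True)["input_ids"]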
GPT2 specific outputs
class transformers.models.gpt2.modeling_gpt2.GPT2DoubleHeadsModelOutput
<
source
>
(
loss: typing.Optional[torch.FloatTensor] = None
mc_loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
mc_logits: FloatTensor = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Language modeling loss.
mc_loss (torch.FloatTensor of shape (1,), optional, returned when mc_labels is provided) —
Multiple choice classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
mc_logits (torch.FloatTensor of shape (batch_size, num_choices)) —
Prediction scores of the multiple choice classification head (scores for each choice before SoftMax).
past_key_values (Tuple[Tuple[torch.Tensor]], optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of length config.n_layers, containing tuples of tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads.
Base class for the outputs of GPT2DoubleHeadsModel (language modeling and multiple choice classification heads).
class transformers.models.gpt2.modeling_tf_gpt2.TFGPT2DoubleHeadsModelOutput
<
source
>
(
logits: tf.Tensor = None
mc_logits: tf.Tensor = None
past_key_values: List[tf.Tensor] | None = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
)
Parameters
logits (tf.Tensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
mc_logits (tf.Tensor of shape (batch_size, num_choices)) —
Prediction scores of the multiple choice classification head (scores for each choice before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) —
List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Base class for the outputs of TFGPT2DoubleHeadsModel (language modeling and multiple choice classification heads).
GPT2Model
class transformers.GPT2Model
<
source
>
(
config
)
Parameters
config (GPT2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare GPT2 Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
If past_key_values is used, attention_mask needs to contain the masking strategy that was used for
past_key_values. In other words, the attention_mask always has to have the length:
len(past_key_values) + len(input_ids)
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPT2Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The GPT2Model forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, GPT2Model
import torch
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
GPT2LMHeadModel
class transformers.GPT2LMHeadModel
<
source
>
(
config
)
Parameters
config (GPT2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT2 Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
If past_key_values is used, attention_mask needs to contain the masking strategy that was used for
past_key_values. In other words, the attention_mask always has to have the length:
len(past_key_values) + len(input_ids)
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPT2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if the model is used in an encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The GPT2LMHeadModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, GPT2LMHeadModel
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits
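Because GPT2LMHeadModel also inherits the library's generation utilities, the same checkpoint can be used for greedy decoding or sampling; use_cache=True (the default) reuses past_key_values between decoding steps. A minimal sketch with illustrative generation arguments:
from transformers import AutoTokenizer, GPT2LMHeadModel
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
inputs = tokenizer("Hello, my dog is", return_tensors="pt")
# pad_token_id is set explicitly only to silence the warning for a checkpoint without a padding token
generated = model.generate(**inputs, max_new_tokens=20, do_sample=False, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(generated[0], skip_special_tokens=True))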
GPT2DoubleHeadsModel
class transformers.GPT2DoubleHeadsModel
( config )
Parameters
config (GPT2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT2 Model transformer with a language modeling and a multiple-choice classification head on top, e.g. for
RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the
input embeddings; the classification head takes as input the hidden state at a specified classification token index in the
input sequence.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
mc_token_ids: typing.Optional[torch.LongTensor] = None
labels: typing.Optional[torch.LongTensor] = None
mc_labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.models.gpt2.modeling_gpt2.GPT2DoubleHeadsModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
If past_key_values is used, attention_mask needs to contain the masking strategy that was used for
past_key_values. In other words, the attention_mask always has to have the length:
len(past_key_values) + len(input_ids)
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
mc_token_ids (torch.LongTensor of shape (batch_size, num_choices), optional, defaults to the index of the last token of the input) —
Index of the classification token in each input sequence. Selected in the range [0, input_ids.size(-1) - 1].
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size - 1]. All labels set to
-100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1]
mc_labels (torch.LongTensor of shape (batch_size), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1]
where num_choices is the size of the second dimension of the input tensors. (see input_ids above)
Returns
transformers.models.gpt2.modeling_gpt2.GPT2DoubleHeadsModelOutput or tuple(torch.FloatTensor)
A transformers.models.gpt2.modeling_gpt2.GPT2DoubleHeadsModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPT2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
mc_loss (torch.FloatTensor of shape (1,), optional, returned when mc_labels is provided) — Multiple choice classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
mc_logits (torch.FloatTensor of shape (batch_size, num_choices)) — Prediction scores of the multiple choice classification head (scores for each choice before SoftMax).
past_key_values (Tuple[Tuple[torch.Tensor]], optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of length config.n_layers, containing tuples of tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads.
The GPT2DoubleHeadsModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, GPT2DoubleHeadsModel
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPT2DoubleHeadsModel.from_pretrained("gpt2")
# Add a [CLS] to the vocabulary (we should train it also!)
num_added_tokens = tokenizer.add_special_tokens({"cls_token": "[CLS]"})
# Update the model embeddings with the new vocabulary size
embedding_layer = model.resize_token_embeddings(len(tokenizer))
choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
encoded_choices = [tokenizer.encode(s) for s in choices]
cls_token_location = [tokens.index(tokenizer.cls_token_id) for tokens in encoded_choices]
input_ids = torch.tensor(encoded_choices).unsqueeze(0) # Batch size: 1, number of choices: 2
mc_token_ids = torch.tensor([cls_token_location]) # Batch size: 1
outputs = model(input_ids, mc_token_ids=mc_token_ids)
lm_logits = outputs.logits
mc_logits = outputs.mc_logits
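Continuing the example above, both heads can be trained jointly by passing labels for the language modeling head and mc_labels for the multiple-choice head. This is a sketch with illustrative label values (the language modeling labels simply reuse the choice token ids, and choice 0 is assumed to be the correct one):
# Language modeling labels: reuse the input tokens (they are shifted inside the model)
lm_labels = input_ids.clone()
# Multiple-choice label: index of the correct choice, shape (batch_size,)
mc_labels = torch.tensor([0])
outputs = model(input_ids, mc_token_ids=mc_token_ids, labels=lm_labels, mc_labels=mc_labels)
total_loss = outputs.loss + outputs.mc_loss  # language modeling loss + multiple-choice loss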
GPT2ForQuestionAnswering
class transformers.GPT2ForQuestionAnswering
( config )
Parameters
config (GPT2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT-2 Model transformer with a span classification head on top for extractive question-answering tasks like
SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
If past_key_values is used, attention_mask needs to contain the masking strategy that was used for
past_key_values. In other words, the attention_mask always has to have the length:
len(past_key_values) + len(input_ids)
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPT2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GPT2ForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
The example below uses the base gpt2 checkpoint, whose question-answering head is randomly initialized, so the
predicted spans are not meaningful until the head has been fine-tuned. If you get out-of-memory when loading a
checkpoint, you can try adding device_map="auto" in the from_pretrained call.
Example:
from transformers import AutoTokenizer, GPT2ForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPT2ForQuestionAnswering.from_pretrained("gpt2")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
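Continuing the example, the predicted span can be decoded back into text; with the base gpt2 checkpoint the span is not meaningful because the question-answering head is untrained:
# Decode the tokens between the predicted start and end indices
predicted_answer = tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)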
GPT2ForSequenceClassification
class transformers.GPT2ForSequenceClassification
( config )
Parameters
config (GPT2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT2 Model transformer with a sequence classification head on top (linear layer).
GPT2ForSequenceClassification uses the last token in order to do the classification, as other causal models
(e.g. GPT-1) do.
Since it does classification on the last token, it requires to know the position of the last token. If a
pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If
no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in
each row of the batch).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
If past_key_values is used, attention_mask needs to contain the masking strategy that was used for
past_key_values. In other words, the attention_mask always has to have the length:
len(past_key_values) + len(input_ids)
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPT2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GPT2ForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, GPT2ForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-updown")
model = GPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = GPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, GPT2ForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-updown")
model = GPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = GPT2ForSequenceClassification.from_pretrained(
... "microsoft/DialogRPT-updown", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
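As noted in the class description, the classification head reads the hidden state of the last non-padding token, so a pad_token_id must be set before classifying a padded batch. GPT-2 checkpoints ship without a padding token; a common workaround (an assumption here, not something required by this particular checkpoint) is to reuse the end-of-sequence token. A minimal sketch:
import torch
from transformers import AutoTokenizer, GPT2ForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-updown")
model = GPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")
# Reuse the eos token as padding so that sequences of unequal length can be batched
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id
batch = tokenizer(["Hello, my dog is cute", "Hi"], padding=True, return_tensors="pt")
with torch.no_grad():
... logits = model(**batch).logits  # shape (batch_size, num_labels)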
GPT2ForTokenClassification
class transformers.GPT2ForTokenClassification
( config )
Parameters
config (GPT2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
GPT2 Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
If past_key_values is used, attention_mask needs to contain the masking strategy that was used for
past_key_values. In other words, the attention_mask always has to have the length:
len(past_key_values) + len(input_ids)
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPT2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GPT2ForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, GPT2ForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("brad1141/gpt2-finetuned-comp2")
model = GPT2ForTokenClassification.from_pretrained("brad1141/gpt2-finetuned-comp2")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
predicted_tokens_classes
['Lead', 'Lead', 'Lead', 'Position', 'Lead', 'Lead', 'Lead', 'Lead', 'Lead', 'Lead', 'Lead', 'Lead']
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
0.25
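Since the predictions above are per token rather than per word, a fast (Rust-backed) tokenizer's word_ids() can be used to group them back to words; this sketch assumes the tokenizer loaded above is a fast tokenizer and keeps the label of the first sub-token of each word:
word_ids = inputs.word_ids(batch_index=0)
word_labels = {}
for word_id, label in zip(word_ids, predicted_tokens_classes):
... if word_id is not None and word_id not in word_labels:
...     word_labels[word_id] = label
word_labels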
TFGPT2Model
class transformers.TFGPT2Model
( *args, **kwargs )
Parameters
config (GPT2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare GPT2 Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
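The three calling conventions above look roughly as follows with TFGPT2Model (a sketch; the tensors are illustrative and any subset of the documented inputs can be passed):
from transformers import AutoTokenizer, TFGPT2Model
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFGPT2Model.from_pretrained("gpt2")
enc = tokenizer("Hello, my dog is cute", return_tensors="tf")
# 1. keyword arguments, as with PyTorch models
outputs = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])
# 2. a list in the first positional argument, in the order given in the docstring
outputs = model([enc["input_ids"], enc["attention_mask"]])
# 3. a dictionary keyed by the input names
outputs = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})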
call
(
input_ids: TFModelInputType | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else past_key_values[0].shape[-2]
(sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary.
If past_key_values is used, only input IDs that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
past_key_values (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The token ids which have
their past given to this model should not be passed as input ids as they have already been computed.
attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
If past_key_values is used, attention_mask needs to contain the masking strategy that was used for
past_key_values. In other words, the attention_mask always has to have the length:
len(past_key_values) + len(input_ids)
What are attention masks?
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past are used, the user can optionally input only the last decoder_input_ids (those that don’t have
their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past). Set to False during training, True during generation
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (GPT2Config) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The TFGPT2Model forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFGPT2Model
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFGPT2Model.from_pretrained("gpt2")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFGPT2LMHeadModel
class transformers.TFGPT2LMHeadModel
( *args, **kwargs )
Parameters
config (GPT2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT2 Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else past_key_values[0].shape[-2]
(sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary.
If past_key_values is used, only input IDs that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
past_key_values (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The token ids which have
their past given to this model should not be passed as input ids as they have already been computed.
attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
If past_key_values is used, attention_mask needs to contain the masking strategy that was used for
past_key_values. In other words, the attention_mask always has to have the length:
len(past_key_values) + len(input_ids)
What are attention masks?
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). Set to False during training and to True during generation.
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (GPT2Config) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The TFGPT2LMHeadModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFGPT2LMHeadModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
logits = outputs.logits
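Because TFGPT2LMHeadModel inherits generate() from TFGenerationMixin, the same checkpoint can also produce full continuations rather than a single forward pass of logits. A minimal greedy-decoding sketch (the prompt and the max_new_tokens value are arbitrary choices):
from transformers import AutoTokenizer, TFGPT2LMHeadModel
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2")
inputs = tokenizer("Hello, my dog is", return_tensors="tf")
output_ids = model.generate(**inputs, max_new_tokens=20)  # greedy decoding by default
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))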
TFGPT2DoubleHeadsModel
class transformers.TFGPT2DoubleHeadsModel(*args, **kwargs)
Parameters
config (GPT2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT2 Model transformer with a language modeling and a multiple-choice classification head on top, e.g. for RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the input embeddings; the classification head takes as input the hidden state at a specified classification token index in the input sequence.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
    input_ids: TFModelInputType | None = None,
    past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    position_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    mc_token_ids: np.ndarray | tf.Tensor | None = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    training: Optional[bool] = False,
) → transformers.models.gpt2.modeling_tf_gpt2.TFGPT2DoubleHeadsModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else past_key_values[0].shape[-2]
(sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary.
If past_key_values is used, only input IDs that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
past_key_values (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The token ids which have
their past given to this model should not be passed as input ids as they have already been computed.
attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
If past_key_values is used, attention_mask needs to contain the masking strategy that was used for
past_key_values. In other words, the attention_mask always has to have the length:
len(past_key_values) + len(input_ids)
What are attention masks?
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
mc_token_ids (tf.Tensor or Numpy array of shape (batch_size, num_choices), optional, defaults to the index of the last token of the input) —
Index of the classification token in each input sequence. Selected in the range [0, input_ids.size(-1) - 1].
Returns
transformers.models.gpt2.modeling_tf_gpt2.TFGPT2DoubleHeadsModelOutput or tuple(tf.Tensor)
A transformers.models.gpt2.modeling_tf_gpt2.TFGPT2DoubleHeadsModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (GPT2Config) and inputs.
logits (tf.Tensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
mc_logits (tf.Tensor of shape (batch_size, num_choices)) — Prediction scores of the multiple choice classification head (scores for each choice before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFGPT2DoubleHeadsModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
import tensorflow as tf
from transformers import AutoTokenizer, TFGPT2DoubleHeadsModel
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFGPT2DoubleHeadsModel.from_pretrained("gpt2")
# Add a [CLS] to the vocabulary (we should train it also!)
num_added_tokens = tokenizer.add_special_tokens({"cls_token": "[CLS]"})
embedding_layer = model.resize_token_embeddings(
... len(tokenizer)
... ) # Update the model embeddings with the new vocabulary size
choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
encoded_choices = [tokenizer.encode(s) for s in choices]
cls_token_location = [tokens.index(tokenizer.cls_token_id) for tokens in encoded_choices]
input_ids = tf.constant(encoded_choices)[None, :] # Batch size: 1, number of choices: 2
mc_token_ids = tf.constant([cls_token_location]) # Batch size: 1
outputs = model(input_ids, mc_token_ids=mc_token_ids)
lm_prediction_scores, mc_prediction_scores = outputs[:2]
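To turn the multiple-choice head into an actual prediction, one can take the argmax of mc_prediction_scores over the num_choices dimension (a small, illustrative continuation of the example above):
predicted_choice = int(tf.math.argmax(mc_prediction_scores, axis=-1)[0])  # 0 or 1 for the two choices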
TFGPT2ForSequenceClassification
class transformers.TFGPT2ForSequenceClassification(*args, **kwargs)
Parameters
config (GPT2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT2 Model transformer with a sequence classification head on top (linear layer).
TFGPT2ForSequenceClassification uses the last token in order to do the classification, as other causal models
(e.g. GPT-1) do.
Since it does classification on the last token, it needs to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (takes the last value in each row of the batch).
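In practice this means that, for batched inputs, a padding token usually has to be defined first, since the GPT-2 tokenizer ships without one. A minimal sketch (reusing the EOS token as padding is a common convention, not something this particular checkpoint mandates):
from transformers import AutoTokenizer, TFGPT2ForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-updown")
model = TFGPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # reuse EOS as the padding token
    model.config.pad_token_id = tokenizer.eos_token_id  # lets the model locate the last non-padding token
inputs = tokenizer(["Hello, my dog is cute", "Hi"], padding=True, return_tensors="tf")
logits = model(**inputs).logits  # one row of class logits per sequence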
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
    input_ids: TFModelInputType | None = None,
    past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    position_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False,
) → transformers.modeling_tf_outputs.TFSequenceClassifierOutputWithPast or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else past_key_values[0].shape[-2]
(sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary.
If past_key_values is used, only input IDs that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
past_key_values (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The token ids which have
their past given to this model should not be passed as input ids as they have already been computed.
attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
If past_key_values is used, attention_mask needs to contain the masking strategy that was used for
past_key_values. In other words, the attention_mask always has to have the length:
len(past_key_values) + len(input_ids)
What are attention masks?
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutputWithPast or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutputWithPast or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (GPT2Config) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFGPT2ForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFGPT2ForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-updown")
model = TFGPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFGPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
TFSequenceClassifierOutputWithPast
class transformers.modeling_tf_outputs.TFSequenceClassifierOutputWithPast(
    loss: tf.Tensor | None = None,
    logits: tf.Tensor = None,
    past_key_values: List[tf.Tensor] | None = None,
    hidden_states: Tuple[tf.Tensor] | None = None,
    attentions: Tuple[tf.Tensor] | None = None,
)
Parameters
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) —
Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) —
Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) —
List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Base class for outputs of sentence classification models.
TFGPT2Tokenizer
class transformers.TFGPT2Tokenizer(*args, **kwargs)
Parameters
vocab (Dict[str, int]) — Vocabulary dict for Byte Pair Tokenizer
merges (List[str]) — Merges list for Byte Pair Tokenizer
This is an in-graph tokenizer for GPT2. It should be initialized similarly to other tokenizers, using the
from_pretrained() method. It can also be initialized with the from_tokenizer() method, which imports settings
from an existing standard tokenizer object.
In-graph tokenizers, unlike other Hugging Face tokenizers, are actually Keras layers and are designed to be run
when the model is called, rather than during preprocessing. As a result, they have somewhat more limited options
than standard tokenizer classes. They are most useful when you want to create an end-to-end model that goes
straight from tf.string inputs to outputs.
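Assuming the optional keras-nlp dependency that backs this class is installed, the layer can be called directly on a batch of tf.string inputs; a minimal sketch (the returned dictionary keys below follow the standard input_ids/attention_mask naming):
import tensorflow as tf
from transformers import TFGPT2Tokenizer
tf_tokenizer = TFGPT2Tokenizer.from_pretrained("gpt2")
batch = tf.constant(["Hello, my dog is cute"])  # raw string tensors, no Python-side preprocessing
tokenized = tf_tokenizer(batch)
input_ids, attention_mask = tokenized["input_ids"], tokenized["attention_mask"]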
from_config(config)
Parameters
config (Dict) — Dictionary with the keys listed by get_config.
Creates a TFGPT2Tokenizer from a configuration dictionary.
from_pretrained(pretrained_model_name_or_path: typing.Union[str, os.PathLike], *init_inputs, **kwargs)
Parameters
pretrained_model_name_or_path (Union[str, os.PathLike]) — Path to pretrained model
Creates a TFGPT2Tokenizer from a pretrained GPT2Tokenizer.
Examples:
from transformers import TFGPT2Tokenizer
tf_tokenizer = TFGPT2Tokenizer.from_pretrained("gpt2")
from_tokenizer(tokenizer: GPT2Tokenizer, *args, **kwargs)
Parameters
tokenizer (GPT2Tokenizer) — The existing GPT2Tokenizer to import settings from.
Creates a TFGPT2Tokenizer from an existing GPT2Tokenizer.
Examples:
from transformers import AutoTokenizer, TFGPT2Tokenizer
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tf_tokenizer = TFGPT2Tokenizer.from_tokenizer(tokenizer)
FlaxGPT2Model
class transformers.FlaxGPT2Model(
    config: GPT2Config,
    input_shape: typing.Tuple = (1, 1),
    seed: int = 0,
    dtype: dtype = <class 'jax.numpy.float32'>,
    _do_init: bool = True,
    **kwargs,
)
Parameters
config (GPT2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The bare GPT2 Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__(
    input_ids,
    attention_mask = None,
    position_ids = None,
    encoder_hidden_states: typing.Optional[jax.Array] = None,
    encoder_attention_mask: typing.Optional[jax.Array] = None,
    params: dict = None,
    past_key_values: dict = None,
    dropout_rng: PRNGKey = None,
    train: bool = False,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length. Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPT2Config) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
The FlaxGPT2PreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxGPT2Model
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = FlaxGPT2Model.from_pretrained("gpt2")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
FlaxGPT2LMHeadModel
class transformers.FlaxGPT2LMHeadModel(
    config: GPT2Config,
    input_shape: typing.Tuple = (1, 1),
    seed: int = 0,
    dtype: dtype = <class 'jax.numpy.float32'>,
    _do_init: bool = True,
    **kwargs,
)
Parameters
config (GPT2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The GPT2 Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__(
    input_ids,
    attention_mask = None,
    position_ids = None,
    encoder_hidden_states: typing.Optional[jax.Array] = None,
    encoder_attention_mask: typing.Optional[jax.Array] = None,
    params: dict = None,
    past_key_values: dict = None,
    dropout_rng: PRNGKey = None,
    train: bool = False,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length. Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPT2Config) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attention weights after the attention softmax, used to compute the weighted average in the cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key and value states of the self-attention and cross-attention layers if the model is used in an encoder-decoder setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The FlaxGPT2PreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxGPT2LMHeadModel
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = FlaxGPT2LMHeadModel.from_pretrained("gpt2")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
# retrieve the logits for the next token
next_token_logits = outputs.logits[:, -1]
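FlaxGPT2LMHeadModel also inherits generate() from FlaxGenerationMixin, so complete continuations can be produced as well. A minimal greedy-decoding sketch (max_length is an arbitrary choice; the generated ids are assumed to live in the sequences field of the returned output):
from transformers import AutoTokenizer, FlaxGPT2LMHeadModel
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = FlaxGPT2LMHeadModel.from_pretrained("gpt2")
inputs = tokenizer("Hello, my dog is", return_tensors="np")
outputs = model.generate(inputs["input_ids"], max_length=20)
print(tokenizer.decode(outputs.sequences[0], skip_special_tokens=True))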
MobileNet V1
Overview
The MobileNet model was proposed in MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam.
The abstract from the paper is the following:
We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.
Tips:
The checkpoints are named mobilenet_v1_depth_size, for example mobilenet_v1_1.0_224, where 1.0 is the depth multiplier (sometimes also referred to as “alpha” or the width multiplier) and 224 is the resolution of the input images the model was trained on.
Even though the checkpoint is trained on images of a specific size, the model will work on images of any size. The smallest supported image size is 32x32.
One can use MobileNetV1ImageProcessor to prepare images for the model, as shown in the example after these tips.
The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). However, the model predicts 1001 classes: the 1000 classes from ImageNet plus an extra “background” class (index 0).
The original TensorFlow checkpoints use different padding rules than PyTorch, requiring the model to determine the padding amount at inference time, since this depends on the input image size. To use native PyTorch padding behavior, create a MobileNetV1Config with tf_padding = False.
Unsupported features:
The MobileNetV1Model outputs a globally pooled version of the last hidden state. In the original model it is possible to use a 7x7 average pooling layer with stride 2 instead of global pooling. For larger inputs, this gives a pooled output that is larger than 1x1 pixel. The HuggingFace implementation does not support this.
It is currently not possible to specify an output_stride. For smaller output strides, the original model invokes dilated convolution to prevent the spatial resolution from being reduced further. The output stride of the HuggingFace model is always 32.
The original TensorFlow checkpoints include quantized models. We do not support these models as they include additional “FakeQuantization” operations to unquantize the weights.
It’s common to extract the output from the pointwise layers at indices 5, 11, 12, 13 for downstream purposes. Using output_hidden_states=True returns the output from all intermediate layers. There is currently no way to limit this to specific layers.
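The sketch below ties several of these tips together: it prepares one image with the image processor and classifies it with MobileNetV1ForImageClassification using the google/mobilenet_v1_1.0_224 checkpoint (the COCO image URL is only an example input; remember that index 0 of the 1001 classes is the extra “background” class):
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, MobileNetV1ForImageClassification
url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("google/mobilenet_v1_1.0_224")
model = MobileNetV1ForImageClassification.from_pretrained("google/mobilenet_v1_1.0_224")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])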
This model was contributed by matthijs. The original code and weights can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileNetV1.
Image Classification
MobileNetV1ForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
MobileNetV1Config
class transformers.MobileNetV1Config(
    num_channels = 3,
    image_size = 224,
    depth_multiplier = 1.0,
    min_depth = 8,
    hidden_act = 'relu6',
    tf_padding = True,
    classifier_dropout_prob = 0.999,
    initializer_range = 0.02,
    layer_norm_eps = 0.001,
    **kwargs,
)
Parameters
num_channels (int, optional, defaults to 3) —
The number of input channels.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
depth_multiplier (float, optional, defaults to 1.0) —
Shrinks or expands the number of channels in each layer. Default is 1.0, which starts the network with 32
channels. This is sometimes also called “alpha” or “width multiplier”.
min_depth (int, optional, defaults to 8) —
All layers will have at least this many channels.
hidden_act (str or function, optional, defaults to "relu6") —
The non-linear activation function (function or string) in the Transformer encoder and convolution layers.
tf_padding (bool, optional, defaults to True) —
Whether to use TensorFlow padding rules on the convolution layers.
classifier_dropout_prob (float, optional, defaults to 0.999) —
The dropout ratio for attached classifiers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 0.001) —
The epsilon used by the layer normalization layers.
This is the configuration class to store the configuration of a MobileNetV1Model. It is used to instantiate a
MobileNetV1 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the MobileNetV1
google/mobilenet_v1_1.0_224 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import MobileNetV1Config, MobileNetV1Model
# Initializing a "mobilenet_v1_1.0_224" style configuration
configuration = MobileNetV1Config()
# Initializing a model from the "mobilenet_v1_1.0_224" style configuration
model = MobileNetV1Model(configuration)
# Accessing the model configuration
configuration = model.config
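Because depth_multiplier and image_size control the network width and the input resolution, smaller variants can be configured in the same way; the 0.25/192 values below are purely illustrative and do not refer to a released checkpoint:
from transformers import MobileNetV1Config, MobileNetV1Model
# a narrower, lower-resolution variant (illustrative hyperparameters)
small_configuration = MobileNetV1Config(depth_multiplier=0.25, image_size=192)
small_model = MobileNetV1Model(small_configuration)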
MobileNetV1FeatureExtractor
class transformers.MobileNetV1FeatureExtractor(*args, **kwargs)
preprocess(
    images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]],
    do_resize: typing.Optional[bool] = None,
    size: typing.Dict[str, int] = None,
    resample: Resampling = None,
    do_center_crop: bool = None,
    crop_size: typing.Dict[str, int] = None,
    do_rescale: typing.Optional[bool] = None,
    rescale_factor: typing.Optional[float] = None,
    do_normalize: typing.Optional[bool] = None,
    image_mean: typing.Union[float, typing.List[float], NoneType] = None,
    image_std: typing.Union[float, typing.List[float], NoneType] = None,
    return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None,
    data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>,
    **kwargs,
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with the longest edge resized to keep the input aspect ratio.
resample (PILImageResampling filter, optional, defaults to self.resample) —
PILImageResampling filter to use if resizing the image e.g. PILImageResampling.BILINEAR. Only has
an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the center crop. Only has an effect if do_center_crop is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values between [0 - 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean to use if do_normalize is set to True.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation to use if do_normalize is set to True.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: Use the channel dimension format of the input image.
Preprocess an image or batch of images.
MobileNetV1ImageProcessor
class transformers.MobileNetV1ImageProcessor(
    do_resize: bool = True,
    size: typing.Union[typing.Dict[str, int], NoneType] = None,
    resample: Resampling = <Resampling.BILINEAR: 2>,
    do_center_crop: bool = True,
    crop_size: typing.Dict[str, int] = None,
    do_rescale: bool = True,
    rescale_factor: typing.Union[int, float] = 0.00392156862745098,
    do_normalize: bool = True,
    image_mean: typing.Union[float, typing.List[float], NoneType] = None,
    image_std: typing.Union[float, typing.List[float], NoneType] = None,
    **kwargs,
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by
do_resize in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 256}) —
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with the longest edge resized to keep the input aspect ratio. Can be overridden by size in the preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the
preprocess method.
do_center_crop (bool, optional, defaults to True) —
Whether to center crop the image. If the input size is smaller than crop_size along any edge, the image
is padded with 0’s and then center cropped. Can be overridden by the do_center_crop parameter in the
preprocess method.
crop_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) —
Desired output size when applying center-cropping. Only has an effect if do_center_crop is set to True.
Can be overridden by the crop_size parameter in the preprocess method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the
preprocess method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
Constructs a MobileNetV1 image processor.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: typing.Optional[bool] = None
size: typing.Dict[str, int] = None
resample: Resampling = None
do_center_crop: bool = None
crop_size: typing.Dict[str, int] = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing. Shortest edge of the image is resized to size["shortest_edge"], with
the longest edge resized to keep the input aspect ratio.
resample (PILImageResampling filter, optional, defaults to self.resample) —
PILImageResampling filter to use if resizing the image e.g. PILImageResampling.BILINEAR. Only has
an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the center crop. Only has an effect if do_center_crop is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values to the range [0, 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean to use if do_normalize is set to True.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation to use if do_normalize is set to True.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: Use the channel dimension format of the input image.
Preprocess an image or batch of images.
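There is no usage example for this class above, so here is a minimal sketch (with a synthetic PIL image; the chosen sizes are just illustrative, not recommended settings) of constructing a MobileNetV1ImageProcessor and overriding some preprocess arguments per call:
import numpy as np
from PIL import Image
from transformers import MobileNetV1ImageProcessor

image = Image.fromarray(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))

# Construction-time defaults: resize the shortest edge to 256, then center crop to 224x224.
image_processor = MobileNetV1ImageProcessor(size={"shortest_edge": 256}, crop_size={"height": 224, "width": 224})

batch = image_processor(image, return_tensors="pt")
print(batch["pixel_values"].shape)  # (1, 3, 224, 224)

# Per-call arguments override the values set at construction time.
batch = image_processor.preprocess(image, do_center_crop=False, size={"shortest_edge": 192}, return_tensors="pt")
print(batch["pixel_values"].shape)  # e.g. (1, 3, 192, 256) for a 480x640 input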
MobileNetV1Model
class transformers.MobileNetV1Model
(
config: MobileNetV1Config
add_pooling_layer: bool = True
)
Parameters
config (MobileNetV1Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MobileNetV1 model outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
MobileNetV1ImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileNetV1Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The MobileNetV1Model forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, MobileNetV1Model
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("google/mobilenet_v1_1.0_224")
model = MobileNetV1Model.from_pretrained("google/mobilenet_v1_1.0_224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
print(list(last_hidden_states.shape))
# [1, 1024, 7, 7]
MobileNetV1ForImageClassification
class transformers.MobileNetV1ForImageClassification
(
config: MobileNetV1Config
)
Parameters
config (MobileNetV1Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileNetV1 model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
MobileNetV1ImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileNetV1Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
The MobileNetV1ForImageClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, MobileNetV1ForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("google/mobilenet_v1_1.0_224")
model = MobileNetV1ForImageClassification.from_pretrained("google/mobilenet_v1_1.0_224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
# tabby, tabby cat
XLM-RoBERTa-XL
Overview
The XLM-RoBERTa-XL model was proposed in Larger-Scale Transformers for Multilingual Masked Language Modeling by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
The abstract from the paper is the following:
Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models dubbed XLM-R XL and XLM-R XXL outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests pretrained models with larger capacity may obtain both strong performance on high-resource languages while greatly improving low-resource languages. We make our code and models publicly available.
Tips:
XLM-RoBERTa-XL is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does
not require lang tensors to understand which language is used, and should be able to determine the correct
language from the input ids, as the short sketch below illustrates.
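The following is a minimal sketch of that behavior (it assumes the facebook/xlm-roberta-xl checkpoint referenced below and the masked-language-modeling head; the checkpoint is several billion parameters, so expect a large download):
from transformers import AutoTokenizer, XLMRobertaXLForMaskedLM
import torch

tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = XLMRobertaXLForMaskedLM.from_pretrained("facebook/xlm-roberta-xl")

# The same model handles both sentences; no `lang` tensor is passed.
for sentence in ["The capital of France is <mask>.", "La capitale de la France est <mask>."]:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
    print(tokenizer.decode(logits[0, mask_index].argmax(dim=-1)))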
This model was contributed by Soonhwan-Kwon and stefan-it. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
XLMRobertaXLConfig
class transformers.XLMRobertaXLConfig
(
vocab_size = 250880
hidden_size = 2560
num_hidden_layers = 36
num_attention_heads = 32
intermediate_size = 10240
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 514
type_vocab_size = 1
initializer_range = 0.02
layer_norm_eps = 1e-05
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
position_embedding_type = 'absolute'
use_cache = True
classifier_dropout = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 250880) —
Vocabulary size of the XLM-RoBERTa-XL model. Defines the number of different tokens that can be represented
by the input_ids passed when calling XLMRobertaXLModel.
hidden_size (int, optional, defaults to 2560) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 36) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 32) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 10240) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 514) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 1) —
The vocabulary size of the token_type_ids passed when calling XLMRobertaXLModel or
TFXLMRobertaXLModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
This is the configuration class to store the configuration of an XLMRobertaXLModel or a TFXLMRobertaXLModel.
It is used to instantiate an XLM-RoBERTa-XL model according to the specified arguments, defining the model
architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the
facebook/xlm-roberta-xl architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import XLMRobertaXLConfig, XLMRobertaXLModel
# Initializing an XLM-RoBERTa-XL style configuration (the defaults match facebook/xlm-roberta-xl)
configuration = XLMRobertaXLConfig()
# Initializing a model (with random weights) from that configuration
model = XLMRobertaXLModel(configuration)
# Accessing the model configuration
configuration = model.config
XLMRobertaXLModel
class transformers.XLMRobertaXLModel
(
config
add_pooling_layer = True
)
Parameters
config (XLMRobertaXLConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare XLM-RoBERTa-xlarge Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.). This model is also a PyTorch torch.nn.Module
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
general usage and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention Is
All You Need (https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder, the model needs to be
initialized with the is_decoder argument of the configuration set to True. To be used in a Seq2Seq model, the model
needs to be initialized with both the is_decoder argument and add_cross_attention set to True; encoder_hidden_states
is then expected as an input to the forward pass.
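As a minimal sketch of the decoder setup just described (assuming the facebook/xlm-roberta-xl checkpoint; the newly added cross-attention weights are randomly initialized and would still need training):
from transformers import XLMRobertaXLConfig, XLMRobertaXLModel

config = XLMRobertaXLConfig.from_pretrained("facebook/xlm-roberta-xl")
config.is_decoder = True            # causal self-attention, decoder behavior
config.add_cross_attention = True   # add cross-attention layers for Seq2Seq use
model = XLMRobertaXLModel.from_pretrained("facebook/xlm-roberta-xl", config=config)
# encoder_hidden_states (and optionally encoder_attention_mask) are then passed to forward().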
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaXLConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The XLMRobertaXLModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMRobertaXLModel
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = XLMRobertaXLModel.from_pretrained("facebook/xlm-roberta-xl")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
XLMRobertaXLForCausalLM
class transformers.XLMRobertaXLForCausalLM
(
config
)
Parameters
config (XLMRobertaXLConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM-RoBERTa-xlarge Model with a language modeling head on top for CLM fine-tuning.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.). This model is also a PyTorch torch.nn.Module
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
general usage and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are
ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaXLConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The XLMRobertaXLForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMRobertaXLForCausalLM, XLMRobertaXLConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
config = XLMRobertaXLConfig.from_pretrained("facebook/xlm-roberta-xl")
config.is_decoder = True
model = XLMRobertaXLForCausalLM.from_pretrained("facebook/xlm-roberta-xl", config=config)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
XLMRobertaXLForMaskedLM
class transformers.XLMRobertaXLForMaskedLM
(
config
)
Parameters
config (XLMRobertaXLConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM-RoBERTa-xlarge Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.). This model is also a PyTorch torch.nn.Module
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
general usage and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
kwargs (Dict[str, any], optional, defaults to {}) —
Used to hide legacy arguments that have been deprecated.
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaXLConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMRobertaXLForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMRobertaXLForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = XLMRobertaXLForMaskedLM.from_pretrained("facebook/xlm-roberta-xl")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-<mask> tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
XLMRobertaXLForSequenceClassification
class transformers.XLMRobertaXLForSequenceClassification
(
config
)
Parameters
config (XLMRobertaXLConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM-RoBERTa-xlarge Model transformer with a sequence classification/regression head on top (a linear layer on top
of the pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.). This model is also a PyTorch torch.nn.Module
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
general usage and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaXLConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMRobertaXLForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, XLMRobertaXLForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = XLMRobertaXLForSequenceClassification.from_pretrained("facebook/xlm-roberta-xl")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = XLMRobertaXLForSequenceClassification.from_pretrained("facebook/xlm-roberta-xl", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, XLMRobertaXLForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = XLMRobertaXLForSequenceClassification.from_pretrained("facebook/xlm-roberta-xl", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = XLMRobertaXLForSequenceClassification.from_pretrained(
    "facebook/xlm-roberta-xl", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
XLMRobertaXLForMultipleChoice
class transformers.XLMRobertaXLForMultipleChoice
(
config
)
Parameters
config (XLMRobertaXLConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM-Roberta-xlarge Model with a multiple choice classification head on top (a linear layer on top of the pooled
output and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.). This model is also a PyTorch torch.nn.Module
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
general usage and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaXLConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMRobertaXLForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMRobertaXLForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = XLMRobertaXLForMultipleChoice.from_pretrained("facebook/xlm-roberta-xl")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
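The predicted choice can then be read directly off the logits (a small, illustrative continuation of the example above):
predicted_choice = logits.argmax(-1).item()  # 0 -> choice0, 1 -> choice1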
XLMRobertaXLForTokenClassification
class transformers.XLMRobertaXLForTokenClassification
( config )
Parameters
config (XLMRobertaXLConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM-Roberta-xlarge Model with a token classification head on top (a linear layer on top of the hidden-states
output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.). This model is also a PyTorch torch.nn.Module
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
general usage and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaXLConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMRobertaXLForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMRobertaXLForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
model = XLMRobertaXLForTokenClassification.from_pretrained("xlm-roberta-xlarge")
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
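Because several sub-tokens can map to the same word, a word-level prediction can be recovered by grouping sub-token labels. A minimal sketch, continuing the example above and assuming the loaded tokenizer is a fast tokenizer (so that word_ids() is available); keeping the first sub-token's label per word is one common convention:
word_ids = inputs.word_ids(batch_index=0)
word_predictions = {}
for idx, word_id in enumerate(word_ids):
    if word_id is not None and word_id not in word_predictions:
        # keep the label predicted for the first sub-token of each word
        word_predictions[word_id] = predicted_tokens_classes[idx]
print(list(word_predictions.values()))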
XLMRobertaXLForQuestionAnswering
class transformers.XLMRobertaXLForQuestionAnswering
( config )
Parameters
config (XLMRobertaXLConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM-Roberta-xlarge Model with a span classification head on top for extractive question-answering tasks like SQuAD
(a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.). This model is also a PyTorch torch.nn.Module
subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to
general usage and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMRobertaXLConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMRobertaXLForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMRobertaXLForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
model = XLMRobertaXLForQuestionAnswering.from_pretrained("xlm-roberta-xlarge")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
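To turn the predicted span back into text, the token ids can be decoded (a small, illustrative continuation of the example above):
predicted_answer = tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)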
VisualBERT
Overview
The VisualBERT model was proposed in VisualBERT: A Simple and Performant Baseline for Vision and Language by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
VisualBERT is a neural network trained on a variety of (image, text) pairs.
The abstract from the paper is the following:
We propose VisualBERT, a simple and flexible framework for modeling a broad range of vision-and-language tasks.
VisualBERT consists of a stack of Transformer layers that implicitly align elements of an input text and regions in an
associated input image with self-attention. We further propose two visually-grounded language model objectives for
pre-training VisualBERT on image caption data. Experiments on four vision-and-language tasks including VQA, VCR, NLVR2,
and Flickr30K show that VisualBERT outperforms or rivals with state-of-the-art models while being significantly
simpler. Further analysis demonstrates that VisualBERT can ground elements of language to image regions without any
explicit supervision and is even sensitive to syntactic relationships, tracking, for example, associations between
verbs and image regions corresponding to their arguments.
Tips:
Most of the checkpoints provided work with the VisualBertForPreTraining configuration. The other
checkpoints provided are fine-tuned for downstream tasks - VQA (‘visualbert-vqa’), VCR
(‘visualbert-vcr’), NLVR2 (‘visualbert-nlvr2’). Hence, if you are not working on these downstream tasks, it is
recommended that you use the pretrained checkpoints.
For the VCR task, the authors use a fine-tuned detector to generate the visual embeddings for all the checkpoints.
We do not provide the detector and its weights as part of the package, but they will be available in the research
projects, and the states can be loaded directly into the detector provided.
Usage
VisualBERT is a multi-modal vision and language model. It can be used for visual question answering, multiple choice,
visual reasoning and region-to-phrase correspondence tasks. VisualBERT uses a BERT-like transformer to prepare
embeddings for image-text pairs. Both the text and visual features are then projected to a latent space with identical
dimension.
To feed images to the model, each image is passed through a pre-trained object detector and the regions and the
bounding boxes are extracted. The authors use the features generated after passing these regions through a pre-trained
CNN like ResNet as visual embeddings. They also add absolute position embeddings, and feed the resulting sequence of
vectors to a standard BERT model. The text input is concatenated in front of the visual embeddings in the embedding
layer, and is expected to be bounded by a [CLS] and a [SEP] token, as in BERT. The segment IDs must also be set
appropriately for the textual and visual parts.
The BertTokenizer is used to encode the text. A custom detector/image processor must be used
to get the visual embeddings. The following example notebooks show how to use VisualBERT with Detectron-like models:
VisualBERT VQA demo notebook : This notebook
contains an example on VisualBERT VQA.
Generate Embeddings for VisualBERT (Colab Notebook) : This notebook contains
an example on how to generate visual embeddings.
The following example shows how to get the last hidden state using VisualBertModel:
import torch
from transformers import BertTokenizer, VisualBertModel
model = VisualBertModel.from_pretrained("uclanlp/visualbert-vqa-coco-pre")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("What is the man eating?", return_tensors="pt")
# this is a custom function that returns the visual embeddings given the image path
visual_embeds = get_visual_embeddings(image_path)
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)
inputs.update(
    {
        "visual_embeds": visual_embeds,
        "visual_token_type_ids": visual_token_type_ids,
        "visual_attention_mask": visual_attention_mask,
    }
)
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
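get_visual_embeddings above is a placeholder for your own feature extractor; the reference pipeline uses a Detectron-style detector (see the notebooks above). Purely as a hypothetical sketch of what such a helper could look like, here is one way region features could be pooled with torchvision's Faster R-CNN (torchvision >= 0.13) and projected to visual_embedding_dim. The detector choice, the untrained projection layer, and the number of regions are all assumptions, not the authors' setup, so the resulting embeddings will not match the released checkpoints:
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

# Assumed, illustrative feature extractor (not the detector used by the authors)
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
region_projection = torch.nn.Linear(1024, 512)  # untrained projection to visual_embedding_dim (512 by default)


def get_visual_embeddings(image_path, num_regions=36):
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        images, _ = detector.transform([image])  # resize/normalize into an ImageList
        features = detector.backbone(images.tensors)  # FPN feature maps
        proposals, _ = detector.rpn(images, features)  # region proposals (boxes)
        box_features = detector.roi_heads.box_roi_pool(features, proposals, images.image_sizes)
        box_features = detector.roi_heads.box_head(box_features)  # (num_proposals, 1024)
        visual_embeds = region_projection(box_features[:num_regions])  # (num_regions, 512)
    return visual_embeds.unsqueeze(0)  # (1, num_regions, visual_embedding_dim), matching the example above

For results comparable to the paper, use the features produced by the detector referenced in the notebooks above rather than this sketch.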
This model was contributed by gchhablani. The original code can be found here.
VisualBertConfig
class transformers.VisualBertConfig
(
vocab_size = 30522
hidden_size = 768
visual_embedding_dim = 512
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
bypass_transformer = False
special_visual_initialize = True
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the VisualBERT model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling VisualBertModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
visual_embedding_dim (int, optional, defaults to 512) —
Dimensionality of the visual embeddings to be passed to the model.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling VisualBertModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
bypass_transformer (bool, optional, defaults to False) —
Whether or not the model should bypass the transformer for the visual embeddings. If set to True, the
model directly concatenates the visual embeddings from VisualBertEmbeddings with the text output from
the transformer layers, and then passes it to a self-attention layer.
special_visual_initialize (bool, optional, defaults to True) —
Whether or not the visual token type and position type embedding weights should be initialized the same as
the textual token type and position type embeddings. When set to True, the weights of the textual token
type and position type embeddings are copied to the respective visual embedding layers.
This is the configuration class to store the configuration of a VisualBertModel. It is used to instantiate a
VisualBERT model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the VisualBERT
uclanlp/visualbert-vqa-coco-pre architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import VisualBertConfig, VisualBertModel
# Initializing a VisualBERT visualbert-vqa-coco-pre style configuration
configuration = VisualBertConfig.from_pretrained("uclanlp/visualbert-vqa-coco-pre")
# Initializing a model (with random weights) from the visualbert-vqa-coco-pre style configuration
model = VisualBertModel(configuration)
# Accessing the model configuration
configuration = model.config
VisualBertModel
class transformers.VisualBertModel
( config, add_pooling_layer = True )
Parameters
config (VisualBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare VisualBert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
The model can behave as an encoder (with only self-attention) following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
visual_embeds: typing.Optional[torch.FloatTensor] = None
visual_attention_mask: typing.Optional[torch.LongTensor] = None
visual_token_type_ids: typing.Optional[torch.LongTensor] = None
image_text_alignment: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
visual_embeds (torch.FloatTensor of shape (batch_size, visual_seq_length, visual_embedding_dim), optional) —
The embedded representation of the visual inputs, generally derived using an object detector.
visual_attention_mask (torch.FloatTensor of shape (batch_size, visual_seq_length), optional) —
Mask to avoid performing attention on visual embeddings. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
visual_token_type_ids (torch.LongTensor of shape (batch_size, visual_seq_length), optional) —
Segment token indices to indicate different portions of the visual embeds.
What are token type IDs? The authors of VisualBERT set the
visual_token_type_ids to 1 for all tokens.
image_text_alignment (torch.LongTensor of shape (batch_size, visual_seq_length, alignment_number), optional) —
Image-text alignment used to decide the position IDs of the visual embeddings.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (VisualBertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The VisualBertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
# Assumption: *get_visual_embeddings(image)* gets the visual embeddings of the image.
from transformers import AutoTokenizer, VisualBertModel
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = VisualBertModel.from_pretrained("uclanlp/visualbert-vqa-coco-pre")
inputs = tokenizer("The capital of France is Paris.", return_tensors="pt")
visual_embeds = get_visual_embeddings(image).unsqueeze(0)
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)
inputs.update(
    {
        "visual_embeds": visual_embeds,
        "visual_token_type_ids": visual_token_type_ids,
        "visual_attention_mask": visual_attention_mask,
    }
)
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
VisualBertForPreTraining
class transformers.VisualBertForPreTraining
( config )
Parameters
config (VisualBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
VisualBert Model with two heads on top as done during the pretraining: a masked language modeling head and a
sentence-image prediction (classification) head.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
visual_embeds: typing.Optional[torch.FloatTensor] = None
visual_attention_mask: typing.Optional[torch.LongTensor] = None
visual_token_type_ids: typing.Optional[torch.LongTensor] = None
image_text_alignment: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
sentence_image_labels: typing.Optional[torch.LongTensor] = None
) → transformers.models.visual_bert.modeling_visual_bert.VisualBertForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
visual_embeds (torch.FloatTensor of shape (batch_size, visual_seq_length, visual_embedding_dim), optional) —
The embedded representation of the visual inputs, generally derived using an object detector.
visual_attention_mask (torch.FloatTensor of shape (batch_size, visual_seq_length), optional) —
Mask to avoid performing attention on visual embeddings. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
visual_token_type_ids (torch.LongTensor of shape (batch_size, visual_seq_length), optional) —
Segment token indices to indicate different portions of the visual embeds.
What are token type IDs? The authors of VisualBERT set the
visual_token_type_ids to 1 for all tokens.
image_text_alignment (torch.LongTensor of shape (batch_size, visual_seq_length, alignment_number), optional) —
Image-text alignment used to decide the position IDs of the visual embeddings.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, total_sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
sentence_image_labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sentence-image prediction (classification) loss. Input should be a sequence pair
(see the input_ids docstring). Indices should be in [0, 1]:
0 indicates sequence B is a matching pair of sequence A for the given image,
1 indicates sequence B is a random sequence w.r.t. A for the given image.
Returns
transformers.models.visual_bert.modeling_visual_bert.VisualBertForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.visual_bert.modeling_visual_bert.VisualBertForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (VisualBertConfig) and inputs.
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the sentence-image prediction
(classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the sentence-image prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The VisualBertForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
# Assumption: *get_visual_embeddings(image)* gets the visual embeddings of the image in the batch.
from transformers import AutoTokenizer, VisualBertForPreTraining
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = VisualBertForPreTraining.from_pretrained("uclanlp/visualbert-vqa-coco-pre")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
visual_embeds = get_visual_embeddings(image).unsqueeze(0)
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)
inputs.update(
    {
        "visual_embeds": visual_embeds,
        "visual_token_type_ids": visual_token_type_ids,
        "visual_attention_mask": visual_attention_mask,
    }
)
max_length = inputs["input_ids"].shape[-1] + visual_embeds.shape[-2]
labels = tokenizer(
"The capital of France is Paris.", return_tensors="pt", padding="max_length", max_length=max_length
)["input_ids"]
sentence_image_labels = torch.tensor(1).unsqueeze(0)  # Batch size 1
outputs = model(**inputs, labels=labels, sentence_image_labels=sentence_image_labels)
loss = outputs.loss
prediction_logits = outputs.prediction_logits
seq_relationship_logits = outputs.seq_relationship_logits
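Note that with padding="max_length" the label positions covering the visual span contain pad-token ids. Since the labels description above states that indices set to -100 are ignored, one possible refinement (an assumption on top of the example, not part of the original) is to mask those positions out so the MLM loss only covers the text tokens:
labels = labels.masked_fill(labels == tokenizer.pad_token_id, -100)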
VisualBertForQuestionAnswering
class transformers.VisualBertForQuestionAnswering
( config )
Parameters
config (VisualBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
VisualBert Model with a classification/regression head on top (a dropout and a linear layer on top of the pooled
output) for VQA.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
visual_embeds: typing.Optional[torch.FloatTensor] = None
visual_attention_mask: typing.Optional[torch.LongTensor] = None
visual_token_type_ids: typing.Optional[torch.LongTensor] = None
image_text_alignment: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
visual_embeds (torch.FloatTensor of shape (batch_size, visual_seq_length, visual_embedding_dim), optional) —
The embedded representation of the visual inputs, generally derived using an object detector.
visual_attention_mask (torch.FloatTensor of shape (batch_size, visual_seq_length), optional) —
Mask to avoid performing attention on visual embeddings. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
visual_token_type_ids (torch.LongTensor of shape (batch_size, visual_seq_length), optional) —
Segment token indices to indicate different portions of the visual embeds.
What are token type IDs? The authors of VisualBERT set the
visual_token_type_ids to 1 for all tokens.
image_text_alignment (torch.LongTensor of shape (batch_size, visual_seq_length, alignment_number), optional) —
Image-text alignment used to decide the position IDs of the visual embeddings.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, total_sequence_length), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. A KLDivLoss is computed between the labels and the returned logits.
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (VisualBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The VisualBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
# Assumption: *get_visual_embeddings(image)* gets the visual embeddings of the image in the batch.
from transformers import AutoTokenizer, VisualBertForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = VisualBertForQuestionAnswering.from_pretrained("uclanlp/visualbert-vqa")
text = "Who is eating the apple?"
inputs = tokenizer(text, return_tensors="pt")
visual_embeds = get_visual_embeddings(image).unsqueeze(0)
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)
inputs.update(
    {
        "visual_embeds": visual_embeds,
        "visual_token_type_ids": visual_token_type_ids,
        "visual_attention_mask": visual_attention_mask,
    }
)
labels = torch.tensor([[0.0, 1.0]]).unsqueeze(0) # Batch size 1, Num labels 2
outputs = model(**inputs, labels=labels)
loss = outputs.loss
scores = outputs.logits
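The labels above are soft answer scores, compared to the logits with a KLDivLoss as noted in the parameter description. As a hypothetical sketch of building such a score vector over the checkpoint's full answer vocabulary (the answer indices and agreement values here are invented for illustration):
num_labels = model.config.num_labels
soft_scores = torch.zeros(1, num_labels)  # one row per example in the batch
soft_scores[0, 42] = 0.9  # assumed answer id, 90% annotator agreement
soft_scores[0, 7] = 0.1  # a second, less frequent annotator answer (also assumed)
labels = soft_scores.unsqueeze(0)  # same nesting as the toy example above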
VisualBertForMultipleChoice
class transformers.VisualBertForMultipleChoice
( config )
Parameters
config (VisualBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
VisualBert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
a softmax) e.g. for VCR tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
visual_embeds: typing.Optional[torch.FloatTensor] = None
visual_attention_mask: typing.Optional[torch.LongTensor] = None
visual_token_type_ids: typing.Optional[torch.LongTensor] = None
image_text_alignment: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
visual_embeds (torch.FloatTensor of shape (batch_size, visual_seq_length, visual_embedding_dim), optional) —
The embedded representation of the visual inputs, generally derived using an object detector.
visual_attention_mask (torch.FloatTensor of shape (batch_size, visual_seq_length), optional) —
Mask to avoid performing attention on visual embeddings. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
visual_token_type_ids (torch.LongTensor of shape (batch_size, visual_seq_length), optional) —
Segment token indices to indicate different portions of the visual embeds.
What are token type IDs? The authors of VisualBERT set the
visual_token_type_ids to 1 for all tokens.
image_text_alignment (torch.LongTensor of shape (batch_size, visual_seq_length, alignment_number), optional) —
Image-text alignment used to decide the position IDs of the visual embeddings.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (VisualBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — Classification scores (before SoftMax). num_choices is the second dimension of the input tensors (see input_ids above).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The VisualBertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
# Assumption: *get_visual_embeddings(image)* gets the visual embeddings of the image in the batch.
from transformers import AutoTokenizer, VisualBertForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = VisualBertForMultipleChoice.from_pretrained("uclanlp/visualbert-vcr")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
visual_embeds = get_visual_embeddings(image)
# (batch_size, num_choices, visual_seq_length, visual_embedding_dim)
visual_embeds = visual_embeds.expand(1, 2, *visual_embeds.shape)
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([[prompt, prompt], [choice0, choice1]], return_tensors="pt", padding=True)
# batch size is 1
inputs_dict = {k: v.unsqueeze(0) for k, v in encoding.items()}
inputs_dict.update(
    {
        "visual_embeds": visual_embeds,
        "visual_attention_mask": visual_attention_mask,
        "visual_token_type_ids": visual_token_type_ids,
        "labels": labels,
    }
)
outputs = model(**inputs_dict)
loss = outputs.loss
logits = outputs.logits
VisualBertForVisualReasoning
class transformers.VisualBertForVisualReasoning
( config )
Parameters
config (VisualBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
VisualBert Model with a sequence classification head on top (a dropout and a linear layer on top of the pooled
output) for Visual Reasoning e.g. for NLVR task.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
visual_embeds: typing.Optional[torch.FloatTensor] = None
visual_attention_mask: typing.Optional[torch.LongTensor] = None
visual_token_type_ids: typing.Optional[torch.LongTensor] = None
image_text_alignment: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
visual_embeds (torch.FloatTensor of shape (batch_size, visual_seq_length, visual_embedding_dim), optional) —
The embedded representation of the visual inputs, generally derived using an object detector.
visual_attention_mask (torch.FloatTensor of shape (batch_size, visual_seq_length), optional) —
Mask to avoid performing attention on visual embeddings. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
visual_token_type_ids (torch.LongTensor of shape (batch_size, visual_seq_length), optional) —
Segment token indices to indicate different portions of the visual embeds.
What are token type IDs? The authors of VisualBERT set the
visual_token_type_ids to 1 for all tokens.
image_text_alignment (torch.LongTensor of shape (batch_size, visual_seq_length, alignment_number), optional) —
Image-text alignment used to decide the position IDs of the visual embeddings.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. A classification loss is computed (Cross-Entropy) against these labels.
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (VisualBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The VisualBertForVisualReasoning forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
# Assumption: *get_visual_embeddings(image)* gets the visual embeddings of the image in the batch.
from transformers import AutoTokenizer, VisualBertForVisualReasoning
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = VisualBertForVisualReasoning.from_pretrained("uclanlp/visualbert-nlvr2")
text = "Who is eating the apple?"
inputs = tokenizer(text, return_tensors="pt")
visual_embeds = get_visual_embeddings(image).unsqueeze(0)
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)
inputs.update(
{
"visual_embeds": visual_embeds,
"visual_token_type_ids": visual_token_type_ids,
"visual_attention_mask": visual_attention_mask,
}
)
labels = torch.tensor(1).unsqueeze(0)  # Batch size 1, binary NLVR2 label
outputs = model(**inputs, labels=labels)
loss = outputs.loss
scores = outputs.logits
VisualBertForRegionToPhraseAlignment
class transformers.VisualBertForRegionToPhraseAlignment
(
config
)
Parameters
config (VisualBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
VisualBert Model with a Masked Language Modeling head and an attention layer on top for Region-to-Phrase Alignment,
e.g. for the Flickr30k Entities task.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
visual_embeds: typing.Optional[torch.FloatTensor] = None
visual_attention_mask: typing.Optional[torch.LongTensor] = None
visual_token_type_ids: typing.Optional[torch.LongTensor] = None
image_text_alignment: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
region_to_phrase_position: typing.Optional[torch.LongTensor] = None
labels: typing.Optional[torch.LongTensor] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
visual_embeds (torch.FloatTensor of shape (batch_size, visual_seq_length, visual_embedding_dim), optional) —
The embedded representation of the visual inputs, generally derived using an object detector.
visual_attention_mask (torch.FloatTensor of shape (batch_size, visual_seq_length), optional) —
Mask to avoid performing attention on visual embeddings. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
visual_token_type_ids (torch.LongTensor of shape (batch_size, visual_seq_length), optional) —
Segment token indices to indicate different portions of the visual embeds.
What are token type IDs? The authors of VisualBERT set the
visual_token_type_ids to 1 for all tokens.
image_text_alignment (torch.LongTensor of shape (batch_size, visual_seq_length, alignment_number), optional) —
Image-text alignment used to decide the position IDs of the visual embeddings.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
region_to_phrase_position (torch.LongTensor of shape (batch_size, total_sequence_length), optional) —
The positions of the image embeddings that correspond to the textual tokens.
labels (torch.LongTensor of shape (batch_size, total_sequence_length, visual_sequence_length), optional) —
Labels for computing the masked language modeling loss. KLDivLoss is computed against these labels and the
outputs from the attention layer.
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (VisualBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The VisualBertForRegionToPhraseAlignment forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
# Assumption: *get_visual_embeddings(image)* gets the visual embeddings of the image in the batch.
from transformers import AutoTokenizer, VisualBertForRegionToPhraseAlignment
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = VisualBertForRegionToPhraseAlignment.from_pretrained("uclanlp/visualbert-vqa-coco-pre")
text = "Who is eating the apple?"
inputs = tokenizer(text, return_tensors="pt")
visual_embeds = get_visual_embeddings(image).unsqueeze(0)
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)
region_to_phrase_position = torch.ones((1, inputs["input_ids"].shape[-1] + visual_embeds.shape[-2]))
inputs.update(
{
"region_to_phrase_position": region_to_phrase_position,
"visual_embeds": visual_embeds,
"visual_token_type_ids": visual_token_type_ids,
"visual_attention_mask": visual_attention_mask,
}
)
labels = torch.ones(
(1, inputs["input_ids"].shape[-1] + visual_embeds.shape[-2], visual_embeds.shape[-2])
) # Batch size 1
outputs = model(**inputs, labels=labels)
loss = outputs.loss
scores = outputs.logits
XLNet
Overview
The XLNet model was proposed in XLNet: Generalized Autoregressive Pretraining for Language Understanding by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov,
Quoc V. Le. XLNet is an extension of the Transformer-XL model pre-trained using an autoregressive method to learn
bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence factorization
order.
The abstract from the paper is the following:
With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves
better performance than pretraining approaches based on autoregressive language modeling. However, relying on
corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a
pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive
pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all
permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive
formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into
pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large
margin, including question answering, natural language inference, sentiment analysis, and document ranking.
Tips:
The specific attention pattern can be controlled at training and test time using the perm_mask input.
Due to the difficulty of training a fully auto-regressive model over various factorization orders, XLNet is pretrained
using only a subset of the output tokens as targets, which are selected with the target_mapping input.
To use XLNet for sequential decoding (i.e. not in a fully bi-directional setting), use the perm_mask and
target_mapping inputs to control the attention span and outputs (see examples in
examples/pytorch/text-generation/run_generation.py and the sketch after these tips).
XLNet is one of the few models that has no sequence length limit.
XLNet is not a traditional autoregressive model but uses a training strategy that builds on that idea. It permutes the tokens in the sentence, then allows the model to use the last n tokens to predict token n+1. Since this is all done with a mask, the sentence is actually fed into the model in the right order, but instead of masking the first n tokens for n+1, XLNet uses a mask that hides the previous tokens in some given permutation of 1, …, sequence length.
XLNet also uses the same recurrence mechanism as Transformer-XL to build long-term dependencies.
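A minimal sketch of the perm_mask / target_mapping mechanics mentioned in the tips above (the checkpoint and prompt are illustrative, not prescribed by this page): hide the last position from every token and make it the only prediction target.
from transformers import AutoTokenizer, XLNetLMHeadModel
import torch

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=False)).unsqueeze(0)
# perm_mask[k, i, j] = 1 means token i cannot attend to token j: hide the last position from everyone
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, -1] = 1.0
# target_mapping selects which positions are predicted: here, only the last one
target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float)
target_mapping[0, 0, -1] = 1.0

outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
next_token_logits = outputs.logits  # shape (1, 1, vocab_size): scores for the single masked position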
This model was contributed by thomwolf. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Multiple choice task guide
XLNetConfig
class transformers.XLNetConfig
(
vocab_size = 32000
d_model = 1024
n_layer = 24
n_head = 16
d_inner = 4096
ff_activation = 'gelu'
untie_r = True
attn_type = 'bi'
initializer_range = 0.02
layer_norm_eps = 1e-12
dropout = 0.1
mem_len = 512
reuse_len = None
use_mems_eval = True
use_mems_train = False
bi_data = False
clamp_len = -1
same_length = False
summary_type = 'last'
summary_use_proj = True
summary_activation = 'tanh'
summary_last_dropout = 0.1
start_n_top = 5
end_n_top = 5
pad_token_id = 5
bos_token_id = 1
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 32000) —
Vocabulary size of the XLNet model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling XLNetModel or TFXLNetModel.
d_model (int, optional, defaults to 1024) —
Dimensionality of the encoder layers and the pooler layer.
n_layer (int, optional, defaults to 24) —
Number of hidden layers in the Transformer encoder.
n_head (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
d_inner (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
ff_activation (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the feed-forward layer. If string, "gelu", "relu",
"silu" and "gelu_new" are supported.
untie_r (bool, optional, defaults to True) —
Whether or not to untie relative position biases.
attn_type (str, optional, defaults to "bi") —
The attention type used by the model. Set "bi" for XLNet, "uni" for Transformer-XL.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
mem_len (int or None, optional) —
The number of tokens to cache. The key/value pairs that have already been pre-computed in a previous
forward pass won’t be re-computed. See the
quickstart for more information.
reuse_len (int, optional) —
The number of tokens in the current batch to be cached and reused in the future.
bi_data (bool, optional, defaults to False) —
Whether or not to use bidirectional input pipeline. Usually set to True during pretraining and False
during finetuning.
clamp_len (int, optional, defaults to -1) —
Clamp all relative distances larger than clamp_len. Setting this attribute to -1 means no clamping.
same_length (bool, optional, defaults to False) —
Whether or not to use the same attention length for each token.
summary_type (str, optional, defaults to "last") —
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Has to be one of the following options:
"last": Take the last token hidden state (like XLNet).
"first": Take the first token hidden state (like BERT).
"mean": Take the mean of all tokens hidden states.
"cls_index": Supply a Tensor of classification token position (like GPT/GPT-2).
"attn": Not implemented now, use multi-head attention.
summary_use_proj (bool, optional, defaults to True) —
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Whether or not to add a projection after the vector extraction.
summary_activation (str, optional) —
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Pass "tanh" for a tanh activation to the output, any other value will result in no activation.
summary_proj_to_labels (bool, optional, defaults to True) —
Used in the sequence classification and multiple choice models.
Whether the projection outputs should have config.num_labels or config.hidden_size classes.
summary_last_dropout (float, optional, defaults to 0.1) —
Used in the sequence classification and multiple choice models.
The dropout ratio to be used after the projection and activation.
start_n_top (int, optional, defaults to 5) —
Used in the SQuAD evaluation script.
end_n_top (int, optional, defaults to 5) —
Used in the SQuAD evaluation script.
use_mems_eval (bool, optional, defaults to True) —
Whether or not the model should make use of the recurrent memory mechanism in evaluation mode.
use_mems_train (bool, optional, defaults to False) —
Whether or not the model should make use of the recurrent memory mechanism in train mode.
For pretraining, it is recommended to set use_mems_train to True. For fine-tuning, it is recommended to
set use_mems_train to False as discussed
here. If use_mems_train is set to
True, one has to make sure that the train batches are correctly pre-processed, e.g. batch_1 = [[This line is], [This is the]] and batch_2 = [[ the first line], [ second line]] and that all batches are of
equal size.
This is the configuration class to store the configuration of an XLNetModel or a TFXLNetModel. It is used to
instantiate an XLNet model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the
xlnet-large-cased architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import XLNetConfig, XLNetModel
# Initializing a XLNet configuration
configuration = XLNetConfig()
# Initializing a model (with random weights) from the configuration
model = XLNetModel(configuration)
# Accessing the model configuration
configuration = model.config
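The same class can also sketch a reduced-size model for quick experiments; the sizes below are illustrative and do not correspond to a released checkpoint:
from transformers import XLNetConfig, XLNetModel

# Illustrative, down-scaled configuration (not an official checkpoint)
small_config = XLNetConfig(d_model=256, n_layer=4, n_head=4, d_inner=1024, mem_len=128)
small_model = XLNetModel(small_config)
print(sum(p.numel() for p in small_model.parameters()))  # rough parameter count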
XLNetTokenizer
class transformers.XLNetTokenizer
(
vocab_file
do_lower_case = False
remove_space = True
keep_accents = False
bos_token = '<s>'
eos_token = '</s>'
unk_token = '<unk>'
sep_token = '<sep>'
pad_token = '<pad>'
cls_token = '<cls>'
mask_token = '<mask>'
additional_special_tokens = ['<eop>', '<eod>']
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
do_lower_case (bool, optional, defaults to False) —
Whether to lowercase the input when tokenizing.
remove_space (bool, optional, defaults to True) —
Whether to strip the text when tokenizing (removing excess spaces before and after the string).
keep_accents (bool, optional, defaults to False) —
Whether to keep accents when tokenizing.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "<sep>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "<cls>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
additional_special_tokens (List[str], optional, defaults to ["<eop>", "<eod>"]) —
Additional special tokens used by the tokenizer.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
sp_model (SentencePieceProcessor) —
The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Construct an XLNet tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. An XLNet sequence has the following format:
single sequence: X <sep> <cls>
pair of sequences: A <sep> B <sep> <cls>
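As a small sketch of this format (checkpoint name chosen for illustration), the method can be called on the IDs of a tokenized sentence and the trailing <sep> and <cls> become visible:
from transformers import XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
ids = tokenizer.build_inputs_with_special_tokens(
    tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
)
print(tokenizer.convert_ids_to_tokens(ids))  # [..., '<sep>', '<cls>']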
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLNet
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
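To inspect the mask for a concrete pair of sentences (again with an illustrative checkpoint), call the method directly on the two lists of IDs:
from transformers import XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("First sequence"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Second sequence"))
print(tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b))
# 0s for the first sequence and its <sep>, 1s for the second; the final <cls> gets its own segment id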
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
XLNetTokenizerFast
class transformers.XLNetTokenizerFast
(
vocab_file = None
tokenizer_file = None
do_lower_case = False
remove_space = True
keep_accents = False
bos_token = '<s>'
eos_token = '</s>'
unk_token = '<unk>'
sep_token = '<sep>'
pad_token = '<pad>'
cls_token = '<cls>'
mask_token = '<mask>'
additional_special_tokens = ['<eop>', '<eod>']
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
do_lower_case (bool, optional, defaults to False) —
Whether to lowercase the input when tokenizing.
remove_space (bool, optional, defaults to True) —
Whether to strip the text when tokenizing (removing excess spaces before and after the string).
keep_accents (bool, optional, defaults to False) —
Whether to keep accents when tokenizing.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "<sep>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "<cls>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
additional_special_tokens (List[str], optional, defaults to ["<eop>", "<eod>"]) —
Additional special tokens used by the tokenizer.
sp_model (SentencePieceProcessor) —
The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Construct a “fast” XLNet tokenizer (backed by HuggingFace’s tokenizers library). Based on
Unigram.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. An XLNet sequence has the following format:
single sequence: X <sep> <cls>
pair of sequences: A <sep> B <sep> <cls>
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLNet
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
XLNet specific outputs
class transformers.models.xlnet.modeling_xlnet.XLNetModelOutput
(
last_hidden_state: FloatTensor
mems: typing.Optional[typing.List[torch.FloatTensor]] = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
last_hidden_state (torch.FloatTensor of shape (batch_size, num_predict, hidden_size)) —
Sequence of hidden-states at the last layer of the model.
num_predict corresponds to target_mapping.shape[1]. If target_mapping is None, then num_predict
corresponds to sequence_length.
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of XLNetModel.
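A rough sketch of how these fields are typically accessed (checkpoint name and flags are illustrative):
from transformers import AutoTokenizer, XLNetModel

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased")

inputs = tokenizer("Memories are reused across segments.", return_tensors="pt")
outputs = model(**inputs, use_mems=True, output_hidden_states=True, output_attentions=True)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
print(len(outputs.mems))                # one cached tensor per layer
print(len(outputs.hidden_states))       # embeddings output + one per layer
print(outputs.attentions[0].shape)      # (batch_size, num_heads, sequence_length, sequence_length)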
class transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
mems: typing.Optional[typing.List[torch.FloatTensor]] = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, num_predict, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
num_predict corresponds to target_mapping.shape[1]. If target_mapping is None, then num_predict
corresponds to sequence_length.
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of XLNetLMHeadModel.
class transformers.models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
mems: typing.Optional[typing.List[torch.FloatTensor]] = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) —
Classification (or regression if config.num_labels==1) scores (before SoftMax).
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of XLNetForSequenceClassification.
class transformers.models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
mems: typing.Optional[typing.List[torch.FloatTensor]] = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) —
Classification scores (before SoftMax). num_choices is the second dimension of the input tensors (see
input_ids above).
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of XLNetForMultipleChoice.
class transformers.models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
mems: typing.Optional[typing.List[torch.FloatTensor]] = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) —
Classification scores (before SoftMax).
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of XLNetForTokenClassification.
class transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput
(
loss: typing.Optional[torch.FloatTensor] = None
start_logits: FloatTensor = None
end_logits: FloatTensor = None
mems: typing.Optional[typing.List[torch.FloatTensor]] = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length,)) —
Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length,)) —
Span-end scores (before SoftMax).
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of XLNetForQuestionAnsweringSimple.
class transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput
(
loss: typing.Optional[torch.FloatTensor] = None
start_top_log_probs: typing.Optional[torch.FloatTensor] = None
start_top_index: typing.Optional[torch.LongTensor] = None
end_top_log_probs: typing.Optional[torch.FloatTensor] = None
end_top_index: typing.Optional[torch.LongTensor] = None
cls_logits: typing.Optional[torch.FloatTensor] = None
mems: typing.Optional[typing.List[torch.FloatTensor]] = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned if both start_positions and end_positions are provided) —
Classification loss as the sum of start token, end token (and is_impossible if provided) classification
losses.
start_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided) —
Log probabilities for the top config.start_n_top start token possibilities (beam-search).
start_top_index (torch.LongTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided) —
Indices for the top config.start_n_top start token possibilities (beam-search).
end_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided) —
Log probabilities for the top config.start_n_top * config.end_n_top end token possibilities
(beam-search).
end_top_index (torch.LongTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided) —
Indices for the top config.start_n_top * config.end_n_top end token possibilities (beam-search).
cls_logits (torch.FloatTensor of shape (batch_size,), optional, returned if start_positions or end_positions is not provided) —
Log probabilities for the is_impossible label of the answers.
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of XLNetForQuestionAnswering.
class transformers.models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput
(
last_hidden_state: tf.Tensor = None
mems: List[tf.Tensor] | None = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
)
Parameters
last_hidden_state (tf.Tensor of shape (batch_size, num_predict, hidden_size)) —
Sequence of hidden-states at the last layer of the model.
num_predict corresponds to target_mapping.shape[1]. If target_mapping is None, then num_predict
corresponds to sequence_length.
mems (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of TFXLNetModel.
class transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput
(
loss: tf.Tensor | None = None
logits: tf.Tensor = None
mems: List[tf.Tensor] | None = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
)
Parameters
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) —
Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, num_predict, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
num_predict corresponds to target_mapping.shape[1]. If target_mapping is None, then num_predict
corresponds to sequence_length.
mems (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of TFXLNetLMHeadModel.
class transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput
(
loss: tf.Tensor | None = None
logits: tf.Tensor = None
mems: List[tf.Tensor] | None = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
)
Parameters
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) —
Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) —
Classification (or regression if config.num_labels==1) scores (before SoftMax).
mems (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of TFXLNetForSequenceClassification.
class transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput
(
loss: tf.Tensor | None = None
logits: tf.Tensor = None
mems: List[tf.Tensor] | None = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
)
Parameters
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) —
Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) —
Classification scores (before SoftMax). num_choices is the second dimension of the input tensors (see
input_ids above).
mems (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of TFXLNetForMultipleChoice.
class transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput
(
loss: tf.Tensor | None = None
logits: tf.Tensor = None
mems: List[tf.Tensor] | None = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
)
Parameters
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) —
Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) —
Classification scores (before SoftMax).
mems (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Output type of TFXLNetForTokenClassification.
class transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput
(
loss: tf.Tensor | None = None
start_logits: tf.Tensor = None
end_logits: tf.Tensor = None
mems: List[tf.Tensor] | None = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
)
Parameters
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) —
Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length,)) —
Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length,)) —
Span-end scores (before SoftMax).
mems (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of TFXLNetForQuestionAnsweringSimple.
XLNetModel
class transformers.XLNetModel
(
config
)
Parameters
config (XLNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare XLNet Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
mems: typing.Optional[torch.Tensor] = None
perm_mask: typing.Optional[torch.Tensor] = None
target_mapping: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
input_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
use_mems: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.models.xlnet.modeling_xlnet.XLNetModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (see mems output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as input_ids as
they have already been computed.
use_mems has to be set to True to make use of mems.
perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional) —
Mask to indicate the attention pattern for each input token with values selected in [0, 1]:
if perm_mask[k, i, j] = 0, i attends to j in batch k;
if perm_mask[k, i, j] = 1, i does not attend to j in batch k.
If not set, each token attends to all the others (full bidirectional attention). Only used during
pretraining (to define factorization order) or for sequential decoding (generation).
target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional) —
Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is
on the j-th token. Only used during pretraining for partial prediction or for sequential decoding
(generation).
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for
real tokens and 1 for padding, which is kept for compatibility with the original code base.
Mask values selected in [0, 1]:
1 for tokens that are masked,
0 for tokens that are not masked.
You can only use one of input_mask and attention_mask.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.xlnet.modeling_xlnet.XLNetModelOutput or tuple(torch.FloatTensor)
A transformers.models.xlnet.modeling_xlnet.XLNetModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLNetConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_predict, hidden_size)) — Sequence of hidden-states at the last layer of the model.
num_predict corresponds to target_mapping.shape[1]. If target_mapping is None, then num_predict
corresponds to sequence_length.
mems (List[torch.FloatTensor] of length config.n_layers) — Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLNetModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLNetModel
import torch
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
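The mems mechanism documented above can also be exercised directly. The following is a minimal, illustrative sketch (not part of the official examples; the sentences and variable names are arbitrary) showing one way to feed the memory returned by a first forward pass back into a later pass via the mems argument together with use_mems=True:
from transformers import AutoTokenizer, XLNetModel

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased")

# First pass: request the per-layer memory with use_mems=True
first = tokenizer("Hello, my dog", add_special_tokens=False, return_tensors="pt")
outputs = model(**first, use_mems=True)

# Second pass: feed only the new tokens and pass the cached hidden states back
new_ids = tokenizer(" is cute", add_special_tokens=False, return_tensors="pt").input_ids
outputs = model(input_ids=new_ids, mems=outputs.mems, use_mems=True)
last_hidden_states = outputs.last_hidden_state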
XLNetLMHeadModel
class transformers.XLNetLMHeadModel
(
config
)
Parameters
config (XLNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLNet Model with a language modeling head on top (linear layer with weights tied to the input embeddings).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
mems: typing.Optional[torch.Tensor] = None
perm_mask: typing.Optional[torch.Tensor] = None
target_mapping: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
input_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_mems: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (see mems output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as input_ids as
they have already been computed.
use_mems has to be set to True to make use of mems.
perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional) —
Mask to indicate the attention pattern for each input token with values selected in [0, 1]:
if perm_mask[k, i, j] = 0, i attends to j in batch k;
if perm_mask[k, i, j] = 1, i does not attend to j in batch k.
If not set, each token attends to all the others (full bidirectional attention). Only used during
pretraining (to define factorization order) or for sequential decoding (generation).
target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional) —
Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is
on the j-th token. Only used during pretraining for partial prediction or for sequential decoding
(generation).
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for
real tokens and 1 for padding, which is kept for compatibility with the original code base.
Mask values selected in [0, 1]:
1 for tokens that are masked,
0 for tokens that are not masked.
You can only use one of input_mask and attention_mask.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, num_predict), optional) —
Labels for masked language modeling. num_predict corresponds to target_mapping.shape[1]. If
target_mapping is None, then num_predict corresponds to sequence_length.
The labels should correspond to the masked input words that should be predicted and depend on
target_mapping. Note that in order to perform standard auto-regressive language modeling a <mask> token has
to be added to the input_ids (see the prepare_inputs_for_generation function and examples below).
Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are ignored; the loss
is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput or tuple(torch.FloatTensor)
A transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, num_predict, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
num_predict corresponds to target_mapping.shape[1]. If target_mapping is None, then num_predict
corresponds to sequence_length.
mems (List[torch.FloatTensor] of length config.n_layers) — Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLNetLMHeadModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, XLNetLMHeadModel
import torch
tokenizer = AutoTokenizer.from_pretrained("xlnet-large-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-large-cased")
# We show how to setup inputs to predict a next token using a bi-directional context.
input_ids = torch.tensor(
... tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=False)
... ).unsqueeze(
... 0
... ) # We will predict the masked token
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, -1] = 1.0 # Previous tokens don't see last token
target_mapping = torch.zeros(
... (1, 1, input_ids.shape[1]), dtype=torch.float
... ) # Shape [1, 1, seq_length] => let's predict one token
target_mapping[
... 0, 0, -1
... ] = 1.0 # Our first (and only) prediction will be the last token of the sequence (the masked token)
outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
next_token_logits = outputs[
... 0
... ] # Output has shape [target_mapping.size(0), target_mapping.size(1), config.vocab_size]
# The same way can the XLNetLMHeadModel be used to be trained by standard auto-regressive language modeling.
input_ids = torch.tensor(
... tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=False)
... ).unsqueeze(
... 0
... ) # We will predict the masked token
labels = torch.tensor(tokenizer.encode("cute", add_special_tokens=False)).unsqueeze(0)
assert labels.shape[0] == 1, "only one word will be predicted"
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[
... :, :, -1
... ] = 1.0 # Previous tokens don't see last token as is done in standard auto-regressive lm training
target_mapping = torch.zeros(
... (1, 1, input_ids.shape[1]), dtype=torch.float
... ) # Shape [1, 1, seq_length] => let's predict one token
target_mapping[
... 0, 0, -1
... ] = 1.0 # Our first (and only) prediction will be the last token of the sequence (the masked token)
outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping, labels=labels)
loss = outputs.loss
next_token_logits = (
... outputs.logits
... ) # Logits have shape [target_mapping.size(0), target_mapping.size(1), config.vocab_size]
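XLNetLMHeadModel also works with the generic generate() API, which builds the required perm_mask and target_mapping internally through prepare_inputs_for_generation. A brief, illustrative sketch (the prompt and decoding settings are arbitrary, and generation with XLNet is comparatively slow):
from transformers import AutoTokenizer, XLNetLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

input_ids = tokenizer("Hello, my dog is", add_special_tokens=False, return_tensors="pt").input_ids
generated = model.generate(input_ids, max_new_tokens=5)  # greedy decoding by default
print(tokenizer.decode(generated[0]))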
XLNetForSequenceClassification
class transformers.XLNetForSequenceClassification
(
config
)
Parameters
config (XLNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLNet Model with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g.
for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
mems: typing.Optional[torch.Tensor] = None
perm_mask: typing.Optional[torch.Tensor] = None
target_mapping: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
input_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_mems: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (see mems output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as input_ids as
they have already been computed.
use_mems has to be set to True to make use of mems.
perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional) —
Mask to indicate the attention pattern for each input token with values selected in [0, 1]:
if perm_mask[k, i, j] = 0, i attends to j in batch k;
if perm_mask[k, i, j] = 1, i does not attend to j in batch k.
If not set, each token attends to all the others (full bidirectional attention). Only used during
pretraining (to define factorization order) or for sequential decoding (generation).
target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional) —
Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is
on the j-th token. Only used during pretraining for partial prediction or for sequential decoding
(generation).
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for
real tokens and 1 for padding, which is kept for compatibility with the original code base.
Mask values selected in [0, 1]:
1 for tokens that are masked,
0 for tokens that are not masked.
You can only use one of input_mask and attention_mask.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput or tuple(torch.FloatTensor)
A transformers.models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
mems (List[torch.FloatTensor] of length config.n_layers) — Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLNetForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, XLNetForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, XLNetForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = XLNetForSequenceClassification.from_pretrained(
... "xlnet-base-cased", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
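As the labels description above notes, setting num_labels == 1 turns the head into a regressor trained with a mean-squared-error loss. A minimal, illustrative sketch (the float target below is made up):
import torch
from transformers import AutoTokenizer, XLNetForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=1)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
loss = model(**inputs, labels=torch.tensor([0.7])).loss  # MSE between the single logit and the float target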
XLNetForMultipleChoice
class transformers.XLNetForMultipleChoice
(
config
)
Parameters
config (XLNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLNet Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RACE/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
input_mask: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
mems: typing.Optional[torch.Tensor] = None
perm_mask: typing.Optional[torch.Tensor] = None
target_mapping: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_mems: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (see mems output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as input_ids as
they have already been computed.
use_mems has to be set to True to make use of mems.
perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional) —
Mask to indicate the attention pattern for each input token with values selected in [0, 1]:
if perm_mask[k, i, j] = 0, i attends to j in batch k;
if perm_mask[k, i, j] = 1, i does not attend to j in batch k.
If not set, each token attends to all the others (full bidirectional attention). Only used during
pretraining (to define factorization order) or for sequential decoding (generation).
target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional) —
Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is
on the j-th token. Only used during pretraining for partial prediction or for sequential decoding
(generation).
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
input_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for
real tokens and 1 for padding, which is kept for compatibility with the original code base.
Mask values selected in [0, 1]:
1 for tokens that are masked,
0 for tokens that are not masked.
You can only use one of input_mask and attention_mask.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput or tuple(torch.FloatTensor)
A transformers.models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
mems (List[torch.FloatTensor] of length config.n_layers) — Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLNetForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLNetForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForMultipleChoice.from_pretrained("xlnet-base-cased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
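Continuing the example above, at inference time the predicted choice is simply the argmax of logits over the num_choices dimension (0 for choice0, 1 for choice1):
predicted_choice = logits.argmax(dim=-1).item()
print([choice0, choice1][predicted_choice])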
XLNetForTokenClassification
class transformers.XLNetForTokenClassification
(
config
)
Parameters
config (XLNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLNet Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
mems: typing.Optional[torch.Tensor] = None
perm_mask: typing.Optional[torch.Tensor] = None
target_mapping: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
input_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_mems: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (see mems output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as input_ids as
they have already been computed.
use_mems has to be set to True to make use of mems.
perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional) —
Mask to indicate the attention pattern for each input token with values selected in [0, 1]:
if perm_mask[k, i, j] = 0, i attends to j in batch k;
if perm_mask[k, i, j] = 1, i does not attend to j in batch k.
If not set, each token attends to all the others (full bidirectional attention). Only used during
pretraining (to define factorization order) or for sequential decoding (generation).
target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional) —
Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is
on the j-th token. Only used during pretraining for partial prediction or for sequential decoding
(generation).
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for
real tokens and 1 for padding, which is kept for compatibility with the original code base.
Mask values selected in [0, 1]:
1 for tokens that are masked,
0 for tokens that are not masked.
You can only use one of input_mask and attention_mask.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput or tuple(torch.FloatTensor)
A transformers.models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
mems (List[torch.FloatTensor] of length config.n_layers) — Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLNetForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLNetForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForTokenClassification.from_pretrained("xlnet-base-cased")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
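Continuing the example above, printing each (sub)token next to its predicted class makes the subword-versus-word caveat from the comments easy to inspect:
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, predicted_class in zip(tokens, predicted_tokens_classes):
    print(token, predicted_class)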
XLNetForQuestionAnsweringSimple
class transformers.XLNetForQuestionAnsweringSimple
(
config
)
Parameters
config (XLNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
mems: typing.Optional[torch.Tensor] = None
perm_mask: typing.Optional[torch.Tensor] = None
target_mapping: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
input_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
use_mems: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (see mems output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as input_ids as
they have already been computed.
use_mems has to be set to True to make use of mems.
perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional) —
Mask to indicate the attention pattern for each input token with values selected in [0, 1]:
if perm_mask[k, i, j] = 0, i attends to j in batch k;
if perm_mask[k, i, j] = 1, i does not attend to j in batch k.
If not set, each token attends to all the others (full bidirectional attention). Only used during
pretraining (to define factorization order) or for sequential decoding (generation).
target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional) —
Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is
on the j-th token. Only used during pretraining for partial prediction or for sequential decoding
(generation).
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for
real tokens and 1 for padding, which is kept for compatibility with the original code base.
Mask values selected in [0, 1]:
1 for tokens that are masked,
0 for tokens that are not masked.
You can only use one of input_mask and attention_mask.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput or tuple(torch.FloatTensor)
A transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length,)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length,)) — Span-end scores (before SoftMax).
mems (List[torch.FloatTensor] of length config.n_layers) — Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLNetForQuestionAnsweringSimple forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLNetForQuestionAnsweringSimple
import torch
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForQuestionAnsweringSimple.from_pretrained("xlnet-base-cased")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
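# The predicted span can be decoded back into a string:
answer = tokenizer.decode(predict_answer_tokens)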
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
XLNetForQuestionAnswering
class transformers.XLNetForQuestionAnswering
(
config
)
Parameters
config (XLNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
mems: typing.Optional[torch.Tensor] = None
perm_mask: typing.Optional[torch.Tensor] = None
target_mapping: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
input_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
is_impossible: typing.Optional[torch.Tensor] = None
cls_index: typing.Optional[torch.Tensor] = None
p_mask: typing.Optional[torch.Tensor] = None
use_mems: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (see mems output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as input_ids as
they have already been computed.
use_mems has to be set to True to make use of mems.
perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional) —
Mask to indicate the attention pattern for each input token with values selected in [0, 1]:
if perm_mask[k, i, j] = 0, i attends to j in batch k;
if perm_mask[k, i, j] = 1, i does not attend to j in batch k.
If not set, each token attends to all the others (full bidirectional attention). Only used during
pretraining (to define factorization order) or for sequential decoding (generation).
target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional) —
Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is
on the j-th token. Only used during pretraining for partial prediction or for sequential decoding
(generation).
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for
real tokens and 1 for padding which is kept for compatibility with the original code base.
Mask values selected in [0, 1]:
1 for tokens that are masked,
0 for tokens that are not masked.
You can only use one of input_mask and attention_mask.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence
are not taken into account for computing the loss.
is_impossible (torch.LongTensor of shape (batch_size,), optional) —
Labels for whether a question has an answer or no answer (SQuAD 2.0).
cls_index (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the classification token to use as input for computing plausibility of the
answer.
p_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Optional mask of tokens which can’t be in answers (e.g. [CLS], [PAD], …). 1.0 means the token should be
masked, 0.0 means the token is not masked.
Returns
transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput or tuple(torch.FloatTensor)
A transformers.models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned if both start_positions and end_positions are provided) — Classification loss as the sum of start token, end token (and is_impossible if provided) classification
losses.
start_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the top config.start_n_top start token possibilities (beam-search).
start_top_index (torch.LongTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided) — Indices for the top config.start_n_top start token possibilities (beam-search).
end_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the top config.start_n_top * config.end_n_top end token possibilities
(beam-search).
end_top_index (torch.LongTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided) — Indices for the top config.start_n_top * config.end_n_top end token possibilities (beam-search).
cls_logits (torch.FloatTensor of shape (batch_size,), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the is_impossible label of the answers.
mems (List[torch.FloatTensor] of length config.n_layers) — Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLNetForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLNetForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForQuestionAnswering.from_pretrained("xlnet-base-cased")
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(
... 0
... ) # Batch size 1
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(input_ids, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
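At inference time, when start_positions and end_positions are not passed, the model instead returns the beam-search style outputs described above (start_top_log_probs, start_top_index, end_top_log_probs, end_top_index and cls_logits). Below is a minimal sketch of turning those outputs into a text span; the question/context strings are purely illustrative and, since the span head of this base checkpoint is untrained, the extracted span is not meaningful:
from transformers import AutoTokenizer, XLNetForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForQuestionAnswering.from_pretrained("xlnet-base-cased")
inputs = tokenizer("Who founded HuggingFace?", "HuggingFace was founded in New York.", return_tensors="pt")
outputs = model(**inputs)  # no start/end positions, so beam-search outputs are returned
start_index = int(outputs.start_top_index[0, 0])  # best start candidate
end_index = int(outputs.end_top_index[0, 0])  # first end candidate for that start (first of the end_n_top entries)
answer = tokenizer.decode(inputs["input_ids"][0, start_index : end_index + 1])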
TFXLNetModel
class transformers.TFXLNetModel
(
*args
**kwargs
)
Parameters
config (XLNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare XLNet Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
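As an illustration of the three possibilities listed above, here is a minimal sketch using inputs produced by the tokenizer (the checkpoint and sentence are only placeholders):
from transformers import AutoTokenizer, TFXLNetModel
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = TFXLNetModel.from_pretrained("xlnet-base-cased")
encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(encoding["input_ids"])  # 1. a single tensor with input_ids only
outputs = model([encoding["input_ids"], encoding["attention_mask"]])  # 2. a list of tensors, in the documented order
outputs = model({"input_ids": encoding["input_ids"], "token_type_ids": encoding["token_type_ids"]})  # 3. a dict keyed by input names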
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
mems: np.ndarray | tf.Tensor | None = None
perm_mask: np.ndarray | tf.Tensor | None = None
target_mapping: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
input_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
use_mems: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput or tuple(tf.Tensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (see mems output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as input_ids as
they have already been computed.
use_mems has to be set to True to make use of mems.
perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional) —
Mask to indicate the attention pattern for each input token with values selected in [0, 1]:
if perm_mask[k, i, j] = 0, i attends to j in batch k;
if perm_mask[k, i, j] = 1, i does not attend to j in batch k.
If not set, each token attends to all the others (full bidirectional attention). Only used during
pretraining (to define factorization order) or for sequential decoding (generation).
target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional) —
Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is
on the j-th token. Only used during pretraining for partial prediction or for sequential decoding
(generation).
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for
real tokens and 1 for padding which is kept for compatibility with the original code base.
Mask values selected in [0, 1]:
1 for tokens that are masked,
0 for tokens that are not masked.
You can only use one of input_mask and attention_mask.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput or tuple(tf.Tensor)
A transformers.models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLNetConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, num_predict, hidden_size)) — Sequence of hidden-states at the last layer of the model.
num_predict corresponds to target_mapping.shape[1]. If target_mapping is None, then num_predict
corresponds to sequence_length.
mems (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLNetModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLNetModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = TFXLNetModel.from_pretrained("xlnet-base-cased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
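The mems returned by the model can be passed back in a follow-up call to reuse the cached hidden states for sequential decoding, as described for the mems argument above. A minimal sketch continuing from the example (the second sentence is only a placeholder, and use_mems is set explicitly for clarity):
first = model(tokenizer("Hello, my dog is cute", return_tensors="tf"), use_mems=True)
second = model(tokenizer("and it likes to play fetch", return_tensors="tf"), mems=first.mems, use_mems=True)
print(len(second.mems))  # one cached hidden-state tensor per layer (config.n_layers)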
TFXLNetLMHeadModel
class transformers.TFXLNetLMHeadModel
(
*args
**kwargs
)
Parameters
config (XLNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLNet Model with a language modeling head on top (linear layer with weights tied to the input embeddings).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
mems: np.ndarray | tf.Tensor | None = None
perm_mask: np.ndarray | tf.Tensor | None = None
target_mapping: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
input_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
use_mems: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: bool = False
)
→
transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput or tuple(tf.Tensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (see mems output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as input_ids as
they have already been computed.
use_mems has to be set to True to make use of mems.
perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional) —
Mask to indicate the attention pattern for each input token with values selected in [0, 1]:
if perm_mask[k, i, j] = 0, i attends to j in batch k;
if perm_mask[k, i, j] = 1, i does not attend to j in batch k.
If not set, each token attends to all the others (full bidirectional attention). Only used during
pretraining (to define factorization order) or for sequential decoding (generation).
target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional) —
Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is
on the j-th token. Only used during pretraining for partial prediction or for sequential decoding
(generation).
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for
real tokens and 1 for padding which is kept for compatibility with the original code base.
Mask values selected in [0, 1]:
1 for tokens that are masked,
0 for tokens that are not masked.
You can only use one of input_mask and attention_mask.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1].
Returns
transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput or tuple(tf.Tensor)
A transformers.models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLNetConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, num_predict, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
num_predict corresponds to target_mapping.shape[1]. If target_mapping is None, then num_predict
corresponds to sequence_length.
mems (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLNetLMHeadModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
import tensorflow as tf
import numpy as np
from transformers import AutoTokenizer, TFXLNetLMHeadModel
tokenizer = AutoTokenizer.from_pretrained("xlnet-large-cased")
model = TFXLNetLMHeadModel.from_pretrained("xlnet-large-cased")
# We show how to setup inputs to predict a next token using a bi-directional context.
input_ids = tf.constant(tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=True))[
... None, :
... ] # We will predict the masked token
perm_mask = np.zeros((1, input_ids.shape[1], input_ids.shape[1]))
perm_mask[:, :, -1] = 1.0 # Previous tokens don't see last token
target_mapping = np.zeros(
... (1, 1, input_ids.shape[1])
... ) # Shape [1, 1, seq_length] => let's predict one token
target_mapping[
... 0, 0, -1
... ] = 1.0 # Our first (and only) prediction will be the last token of the sequence (the masked token)
outputs = model(
... input_ids,
... perm_mask=tf.constant(perm_mask, dtype=tf.float32),
... target_mapping=tf.constant(target_mapping, dtype=tf.float32),
... )
next_token_logits = outputs[
... 0
... ] # Output has shape [target_mapping.size(0), target_mapping.size(1), config.vocab_size]
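To map the logits above to an actual prediction, take the argmax over the vocabulary for the single target position and decode it; a minimal sketch continuing from the example:
predicted_token_id = int(tf.math.argmax(next_token_logits[0, 0], axis=-1))
print(tokenizer.decode([predicted_token_id]))  # the model's guess for the masked position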
TFXLNetForSequenceClassification
class transformers.TFXLNetForSequenceClassification
(
*args
**kwargs
)
Parameters
config (XLNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLNet Model with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g.
for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
mems: np.ndarray | tf.Tensor | None = None
perm_mask: np.ndarray | tf.Tensor | None = None
target_mapping: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
input_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
use_mems: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: bool = False
)
→
transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput or tuple(tf.Tensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (see mems output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as input_ids as
they have already been computed.
use_mems has to be set to True to make use of mems.
perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional) —
Mask to indicate the attention pattern for each input token with values selected in [0, 1]:
if perm_mask[k, i, j] = 0, i attends to j in batch k;
if perm_mask[k, i, j] = 1, i does not attend to j in batch k.
If not set, each token attends to all the others (full bidirectional attention). Only used during
pretraining (to define factorization order) or for sequential decoding (generation).
target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional) —
Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is
on the j-th token. Only used during pretraining for partial prediction or for sequential decoding
(generation).
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for
real tokens and 1 for padding which is kept for compatibility with the original code base.
Mask values selected in [0, 1]:
1 for tokens that are masked,
0 for tokens that are not masked.
You can only use one of input_mask and attention_mask.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput or tuple(tf.Tensor)
A transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLNetConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
mems (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLNetForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLNetForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = TFXLNetForSequenceClassification.from_pretrained("xlnet-base-cased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFXLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
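The predicted class id from the first example can be turned into probabilities and a human-readable label via the model config; a minimal sketch (the label names are only meaningful for a checkpoint that was actually fine-tuned for classification):
probabilities = tf.nn.softmax(logits, axis=-1)
predicted_class_id = int(tf.math.argmax(probabilities, axis=-1)[0])
print(model.config.id2label[predicted_class_id])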
TFXLNetForMultipleChoice
class transformers.TFXLNetForMultipleChoice
(
*args
**kwargs
)
Parameters
config (XLNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLNet Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
input_mask: np.ndarray | tf.Tensor | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
mems: np.ndarray | tf.Tensor | None = None
perm_mask: np.ndarray | tf.Tensor | None = None
target_mapping: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
use_mems: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: bool = False
)
→
transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput or tuple(tf.Tensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (see mems output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as input_ids as
they have already been computed.
use_mems has to be set to True to make use of mems.
perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional) —
Mask to indicate the attention pattern for each input token with values selected in [0, 1]:
if perm_mask[k, i, j] = 0, i attends to j in batch k;
if perm_mask[k, i, j] = 1, i does not attend to j in batch k.
If not set, each token attends to all the others (full bidirectional attention). Only used during
pretraining (to define factorization order) or for sequential decoding (generation).
target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional) —
Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is
on the j-th token. Only used during pretraining for partial prediction or for sequential decoding
(generation).
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
input_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for
real tokens and 1 for padding which is kept for compatibility with the original code base.
Mask values selected in [0, 1]:
1 for tokens that are masked,
0 for tokens that are not masked.
You can only use one of input_mask and attention_mask.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices]
where num_choices is the size of the second dimension of the input tensors. (See input_ids above)
Returns
transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput or tuple(tf.Tensor)
A transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLNetConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
mems (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLNetForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLNetForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = TFXLNetForMultipleChoice.from_pretrained("xlnet-base-cased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
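The index with the highest score is the predicted choice; a minimal sketch continuing from the example above (the classification head is untrained here, so the prediction is not meaningful):
predicted_choice = int(tf.math.argmax(logits, axis=-1)[0])
print([choice0, choice1][predicted_choice])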
TFXLNetForTokenClassification
class transformers.TFXLNetForTokenClassification
(
*args
**kwargs
)
Parameters
config (XLNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLNet Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
mems: np.ndarray | tf.Tensor | None = None
perm_mask: np.ndarray | tf.Tensor | None = None
target_mapping: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
input_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
use_mems: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: bool = False
)
→
transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput or tuple(tf.Tensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (see mems output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as input_ids as
they have already been computed.
use_mems has to be set to True to make use of mems.
perm_mask (torch.FloatTensor of shape (batch_size, sequence_length, sequence_length), optional) —
Mask to indicate the attention pattern for each input token with values selected in [0, 1]:
if perm_mask[k, i, j] = 0, i attends to j in batch k;
if perm_mask[k, i, j] = 1, i does not attend to j in batch k.
If not set, each token attends to all the others (full bidirectional attention). Only used during
pretraining (to define factorization order) or for sequential decoding (generation).
target_mapping (torch.FloatTensor of shape (batch_size, num_predict, sequence_length), optional) —
Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is
on the j-th token. Only used during pretraining for partial prediction or for sequential decoding
(generation).
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
input_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for
real tokens and 1 for padding which is kept for compatibility with the original code base.
Mask values selected in [0, 1]:
1 for tokens that are masked,
0 for tokens that are not masked.
You can only use one of input_mask and attention_mask.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput or tuple(tf.Tensor)
A transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLNetConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
mems (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLNetForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLNetForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = TFXLNetForTokenClassification.from_pretrained("xlnet-base-cased")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
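Because several sub-word tokens can belong to a single word (see the note in the example above), token-level predictions are often aggregated back to the word level. A minimal sketch that keeps the prediction of each word's first sub-token, assuming a fast tokenizer (the default for AutoTokenizer) so that word_ids() is available:
word_ids = inputs.word_ids(batch_index=0)  # maps each token position to a word index
word_predictions = {}
for position, word_id in enumerate(word_ids):
...     if word_id is not None and word_id not in word_predictions:
...         word_predictions[word_id] = predicted_tokens_classes[position]
print(word_predictions)  # {word index: predicted class of the word's first sub-token}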
TFXLNetForQuestionAnsweringSimple
class transformers.TFXLNetForQuestionAnsweringSimple
(
*args
**kwargs
)
Parameters
config (XLNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
mems: np.ndarray | tf.Tensor | None = None
perm_mask: np.ndarray | tf.Tensor | None = None
target_mapping: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
input_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
use_mems: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
start_positions: np.ndarray | tf.Tensor | None = None
end_positions: np.ndarray | tf.Tensor | None = None
training: bool = False
)
→
transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor or np.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
mems (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states (see mems output below). Can be used to speed up sequential
decoding. The token ids which have their past given to this model should not be passed as input_ids as
they have already been computed.
use_mems has to be set to True to make use of mems.
perm_mask (tf.Tensor or np.ndarray of shape (batch_size, sequence_length, sequence_length), optional) —
Mask to indicate the attention pattern for each input token with values selected in [0, 1]:
if perm_mask[k, i, j] = 0, i attends to j in batch k;
if perm_mask[k, i, j] = 1, i does not attend to j in batch k.
If not set, each token attends to all the others (full bidirectional attention). Only used during
pretraining (to define factorization order) or for sequential decoding (generation).
target_mapping (tf.Tensor or np.ndarray of shape (batch_size, num_predict, sequence_length), optional) —
Mask to indicate the output tokens to use. If target_mapping[k, i, j] = 1, the i-th prediction in batch k is
on the j-th token. Only used during pretraining for partial prediction or for sequential decoding
(generation).
token_type_ids (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
input_mask (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Negative of attention_mask, i.e. with 0 for
real tokens and 1 for padding which is kept for compatibility with the original code base.
Mask values selected in [0, 1]:
1 for tokens that are masked,
0 for tokens that are not masked.
You can only use one of input_mask and attention_mask.
head_mask (tf.Tensor or np.ndarray of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor or np.ndarray of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput or tuple(tf.Tensor)
A transformers.models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLNetConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length,)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length,)) — Span-end scores (before SoftMax).
mems (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states. Can be used (see mems input) to speed up sequential decoding. The
token ids which have their past given to this model should not be passed as input_ids as they have
already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLNetForQuestionAnsweringSimple forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLNetForQuestionAnsweringSimple
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = TFXLNetForQuestionAnsweringSimple.from_pretrained("xlnet-base-cased")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
XLS-R
Overview
The XLS-R model was proposed in XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman
Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
The abstract from the paper is the following:
This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0.
We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128
languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range
of tasks, domains, data regimes and languages, both high and low-resource. On the CoVoST-2 speech translation
benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into
English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as
VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107
language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform
English-only pretraining when translating English speech into other languages, a setting which favors monolingual
pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.
Tips:
XLS-R is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
The XLS-R model was trained using connectionist temporal classification (CTC), so the model output has to be decoded
using Wav2Vec2CTCTokenizer (see the sketch after these tips).
Relevant checkpoints can be found under https://huggingface.co/models?other=xls_r.
XLS-R’s architecture is based on the Wav2Vec2 model, so one can refer to Wav2Vec2’s documentation page.
The original code can be found here.
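As a rough sketch of the CTC decoding mentioned in the tips, the snippet below loads an XLS-R checkpoint that has been fine-tuned with a CTC head; the checkpoint name and the dummy waveform are placeholders, so substitute your own fine-tuned model and real 16 kHz audio:
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Placeholder: any XLS-R model fine-tuned for ASR with a CTC head works here.
checkpoint = "path/to/xls-r-finetuned-with-ctc"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)  # feature extractor + Wav2Vec2CTCTokenizer
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

# Dummy one-second waveform sampled at 16 kHz; replace with real audio (e.g. loaded with torchaudio).
raw_speech = np.zeros(16_000, dtype=np.float32)
inputs = processor(raw_speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, time, vocab)
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)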
Vision Transformer (ViT)
Overview
The Vision Transformer (ViT) model was proposed in An Image is Worth 16x16 Words: Transformers for Image Recognition
at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk
Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob
Uszkoreit, Neil Houlsby. It’s the first paper that successfully trains a Transformer encoder on ImageNet, attaining
very good results compared to familiar convolutional architectures.
The abstract from the paper is the following:
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its
applications to computer vision remain limited. In vision, attention is either applied in conjunction with
convolutional networks, or used to replace certain components of convolutional networks while keeping their overall
structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to
sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of
data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.),
Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring
substantially fewer computational resources to train.
Tips:
Demo notebooks regarding inference as well as fine-tuning ViT on custom data can be found here.
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches,
which are then linearly embedded. A [CLS] token is added to serve as a representation of an entire image, which can be
used for classification. The authors also add absolute position embeddings, and feed the resulting sequence of
vectors to a standard Transformer encoder.
As the Vision Transformer expects each image to be of the same size (resolution), one can use
ViTImageProcessor to resize (or rescale) and normalize images for the model (see the sketch after these tips).
Both the patch resolution and image resolution used during pre-training or fine-tuning are reflected in the name of
each checkpoint. For example, google/vit-base-patch16-224 refers to a base-sized architecture with patch
resolution of 16x16 and fine-tuning resolution of 224x224. All checkpoints can be found on the hub.
The available checkpoints are either (1) pre-trained on ImageNet-21k (a collection of
14 million images and 21k classes) only, or (2) also fine-tuned on ImageNet (also referred to as ILSVRC 2012, a collection of 1.3 million
images and 1,000 classes).
The Vision Transformer was pre-trained using a resolution of 224x224. During fine-tuning, it is often beneficial to
use a higher resolution than pre-training (Touvron et al., 2019), (Kolesnikov
et al., 2020). In order to fine-tune at higher resolution, the authors perform
2D interpolation of the pre-trained position embeddings, according to their location in the original image.
The best results are obtained with supervised pre-training, which is not the case in NLP. The authors also performed
an experiment with a self-supervised pre-training objective, namely masked patch prediction (inspired by masked
language modeling). With this approach, the smaller ViT-B/16 model achieves 79.9% accuracy on ImageNet, a significant
improvement of 2% over training from scratch, but still 4% behind supervised pre-training.
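Tying the preprocessing tips together, here is a minimal sketch: the image processor resizes and normalizes the image to 224x224, and the encoder then sees one embedding per 16x16 patch plus the [CLS] token (196 + 1 = 197 positions):
import torch
import requests
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")

inputs = processor(images=image, return_tensors="pt")
print(inputs.pixel_values.shape)  # torch.Size([1, 3, 224, 224])

with torch.no_grad():
    outputs = model(**inputs)
# (224 // 16) ** 2 = 196 patch tokens + 1 [CLS] token = 197 positions
print(outputs.last_hidden_state.shape)  # torch.Size([1, 197, 768])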
ViT architecture. Taken from the original paper.
Following the original Vision Transformer, some follow-up works have been made:
DeiT (Data-efficient Image Transformers) by Facebook AI. DeiT models are distilled vision transformers.
The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into ViTModel or
ViTForImageClassification. There are 4 variants available (in 3 different sizes): facebook/deit-tiny-patch16-224,
facebook/deit-small-patch16-224, facebook/deit-base-patch16-224 and facebook/deit-base-patch16-384. Note that one should
use DeiTImageProcessor in order to prepare images for the model.
BEiT (BERT pre-training of Image Transformers) by Microsoft Research. BEiT models outperform supervised pre-trained
vision transformers using a self-supervised method inspired by BERT (masked image modeling) and based on a VQ-VAE.
DINO (a method for self-supervised training of Vision Transformers) by Facebook AI. Vision Transformers trained using
the DINO method show very interesting properties not seen with convolutional models. They are capable of segmenting
objects, without having ever been trained to do so. DINO checkpoints can be found on the hub.
MAE (Masked Autoencoders) by Facebook AI. By pre-training Vision Transformers to reconstruct pixel values for a high portion
(75%) of masked patches (using an asymmetric encoder-decoder architecture), the authors show that this simple method outperforms
supervised pre-training after fine-tuning.
This model was contributed by nielsr. The original code (written in JAX) can be
found here.
Note that we converted the weights from Ross Wightman’s timm library, who already converted the weights from JAX to PyTorch. Credits
go to him!
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT.
Image Classification
ViTForImageClassification is supported by this example script and notebook.
A blog on fine-tuning ViTForImageClassification on a custom dataset can be found here.
More demo notebooks to fine-tune ViTForImageClassification can be found here.
Image classification task guide
Besides that:
ViTForMaskedImageModeling is supported by this example script.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ViTForImageClassification is supported by:
Image Classification
A blog post on how to Fine-Tune ViT for Image Classification with Hugging Face Transformers
A blog post on Image Classification with Hugging Face Transformers and Keras
A notebook on Fine-tuning for Image Classification with Hugging Face Transformers
A notebook on how to Fine-tune the Vision Transformer on CIFAR-10 with the Hugging Face Trainer
A notebook on how to Fine-tune the Vision Transformer on CIFAR-10 with PyTorch Lightning
⚗️ Optimization
A blog post on how to Accelerate Vision Transformer (ViT) with Quantization using Optimum
⚡️ Inference
A notebook on Quick demo: Vision Transformer (ViT) by Google Brain
🚀 Deploy
A blog post on Deploying Tensorflow Vision Models in Hugging Face with TF Serving
A blog post on Deploying Hugging Face ViT on Vertex AI
A blog post on Deploying Hugging Face ViT on Kubernetes with TF Serving
ViTConfig
class transformers.ViTConfig
(
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
initializer_range = 0.02
layer_norm_eps = 1e-12
image_size = 224
patch_size = 16
num_channels = 3
qkv_bias = True
encoder_stride = 16
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
encoder_stride (int, optional, defaults to 16) —
Factor to increase the spatial resolution by in the decoder head for masked image modeling.
This is the configuration class to store the configuration of a ViTModel. It is used to instantiate a ViT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the ViT
google/vit-base-patch16-224 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import ViTConfig, ViTModel
# Initializing a ViT vit-base-patch16-224 style configuration
configuration = ViTConfig()
# Initializing a model (with random weights) from the vit-base-patch16-224 style configuration
model = ViTModel(configuration)
# Accessing the model configuration
configuration = model.config
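The configuration also controls the input geometry. As a small sketch with hypothetical values, a 384x384 resolution with 32x32 patches yields (384 // 32) ** 2 = 144 patch tokens plus the [CLS] token:
from transformers import ViTConfig, ViTModel

# Custom resolution and patch size; all other values keep their defaults
config = ViTConfig(image_size=384, patch_size=32)
model = ViTModel(config)  # randomly initialized

print((config.image_size // config.patch_size) ** 2 + 1)  # 145 positions per image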
ViTFeatureExtractor
class transformers.ViTFeatureExtractor
(
*args
**kwargs
)
__call__
(
images
**kwargs
)
Preprocess an image or a batch of images.
ViTImageProcessor
class transformers.ViTImageProcessor
(
do_resize: bool = True
size: typing.Union[typing.Dict[str, int], NoneType] = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified (size["height"], size["width"]). Can be overridden by the do_resize parameter in the preprocess method.
size (dict, optional, defaults to {"height": 224, "width": 224}) —
Size of the output image after resizing. Can be overridden by the size parameter in the preprocess
method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the
preprocess method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the
preprocess method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
Constructs a ViT image processor.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: typing.Optional[bool] = None
size: typing.Dict[str, int] = None
resample: Resampling = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Dictionary in the format {"height": h, "width": w} specifying the size of the output image after
resizing.
resample (PILImageResampling filter, optional, defaults to self.resample) —
PILImageResampling filter to use if resizing the image e.g. PILImageResampling.BILINEAR. Only has
an effect if do_resize is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values to the range [0, 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean to use if do_normalize is set to True.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation to use if do_normalize is set to True.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: Use the channel dimension format of the input image.
Preprocess an image or batch of images.
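A minimal usage sketch, assuming the google/vit-base-patch16-224 checkpoint and a PIL image; the stored defaults can be overridden per call, here the output size:
import requests
from PIL import Image
from transformers import ViTImageProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
batch = processor.preprocess(
    images=image,
    size={"height": 384, "width": 384},  # overrides self.size for this call only
    return_tensors="pt",
)
print(batch.pixel_values.shape)  # torch.Size([1, 3, 384, 384])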
ViTModel
class transformers.ViTModel
(
config: ViTConfig
add_pooling_layer: bool = True
use_mask_token: bool = False
)
Parameters
config (ViTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ViT Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
bool_masked_pos: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
interpolate_pos_encoding: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
interpolate_pos_encoding (bool, optional) —
Whether to interpolate the pre-trained position encodings.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches), optional) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ViTConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ViTModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, ViTModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 197, 768]
ViTForMaskedImageModeling
class transformers.ViTForMaskedImageModeling
(
config: ViTConfig
)
Parameters
config (ViTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ViT Model with a decoder on top for masked image modeling, as proposed in SimMIM.
Note that we provide a script to pre-train this model on custom data in our examples
directory.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
bool_masked_pos: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
interpolate_pos_encoding: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedImageModelingOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
interpolate_pos_encoding (bool, optional) —
Whether to interpolate the pre-trained position encodings.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches)) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.modeling_outputs.MaskedImageModelingOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedImageModelingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ViTConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when bool_masked_pos is provided) — Reconstruction loss.
reconstruction (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Reconstructed / completed images.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or
when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states
(also called feature maps) of the model at the output of each stage.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when
config.output_attentions=True):
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The ViTForMaskedImageModeling forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, ViTForMaskedImageModeling
import torch
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTForMaskedImageModeling.from_pretrained("google/vit-base-patch16-224-in21k")
num_patches = (model.config.image_size // model.config.patch_size) ** 2
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
# create random boolean mask of shape (batch_size, num_patches)
bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss, reconstructed_pixel_values = outputs.loss, outputs.reconstruction
list(reconstructed_pixel_values.shape)
[1, 3, 224, 224]
ViTForImageClassification
class transformers.ViTForImageClassification
(
config: ViTConfig
)
Parameters
config (ViTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ViT Model transformer with an image classification head on top (a linear layer on top of the final hidden state of
the [CLS] token) e.g. for ImageNet.
Note that it’s possible to fine-tune ViT on higher resolution images than the ones it has been trained on, by
setting interpolate_pos_encoding to True in the forward of the model. This will interpolate the pre-trained
position embeddings to the higher resolution.
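A minimal sketch of this higher-resolution usage, assuming the google/vit-base-patch16-224 checkpoint and a 480x480 input (the processor's output size is overridden per call):
import torch
import requests
from PIL import Image
from transformers import ViTImageProcessor, ViTForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

# Feed a 480x480 image to a model pre-trained at 224x224
inputs = processor(images=image, size={"height": 480, "width": 480}, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs, interpolate_pos_encoding=True).logits
print(model.config.id2label[logits.argmax(-1).item()])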
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
interpolate_pos_encoding: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
interpolate_pos_encoding (bool, optional) —
Whether to interpolate the pre-trained position encodings.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ViTConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states
(also called feature maps) of the model at the output of each stage.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ViTForImageClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, ViTForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
Egyptian cat
TFViTModel
class transformers.TFViTModel
(
*args
**kwargs
)
Parameters
config (ViTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ViT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with pixel_values only and nothing else: model(pixel_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don’t need to worry about any of this, as you can just pass inputs like you would to
any other Python function!
call
(
pixel_values: TFModelInputType | None = None
head_mask: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
interpolate_pos_encoding: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
interpolate_pos_encoding (bool, optional) —
Whether to interpolate the pre-trained position encodings.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ViTConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input, you’re often better with
averaging or pooling the sequence of hidden-states for the whole input sequence.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFViTModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, TFViTModel
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = TFViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
inputs = image_processor(image, return_tensors="tf")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 197, 768]
TFViTForImageClassification
class transformers.TFViTForImageClassification
(
*args
**kwargs
)
Parameters
config (ViTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ViT Model transformer with an image classification head on top (a linear layer on top of the final hidden state of
the [CLS] token) e.g. for ImageNet.
Note that it’s possible to fine-tune ViT on higher resolution images than the ones it has been trained on, by
setting interpolate_pos_encoding to True in the forward of the model. This will interpolate the pre-trained
position embeddings to the higher resolution.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with pixel_values only and nothing else: model(pixel_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don’t need to worry about any of this, as you can just pass inputs like you would to
any other Python function!
call
(
pixel_values: TFModelInputType | None = None
head_mask: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
interpolate_pos_encoding: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
interpolate_pos_encoding (bool, optional) —
Whether to interpolate the pre-trained position encodings.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ViTConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFViTForImageClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, TFViTForImageClassification
import tensorflow as tf
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = TFViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
inputs = image_processor(image, return_tensors="tf")
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = int(tf.math.argmax(logits, axis=-1))
print(model.config.id2label[predicted_label])
Egyptian cat
FlaxViTModel
class transformers.FlaxViTModel
(
config: ViTConfig
input_shape = None
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (ViTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The bare ViT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
pixel_values
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ViTConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxViTPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, FlaxViTModel
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = FlaxViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
inputs = image_processor(images=image, return_tensors="np")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
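The dtype and to_bf16() behavior described in the parameters above can be combined for half-precision inference. The following is a minimal sketch (not from the original docs) using the same checkpoint; it assumes a bfloat16-capable accelerator:
import jax.numpy as jnp
from transformers import FlaxViTModel

# dtype only changes the computation dtype; the stored parameters stay in float32
model = FlaxViTModel.from_pretrained("google/vit-base-patch16-224-in21k", dtype=jnp.bfloat16)
# cast the parameters themselves to bfloat16 as well, as described for to_bf16() above
model.params = model.to_bf16(model.params)
to_fp16() can be used in the same way for float16 on GPUs.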
FlaxViTForImageClassification
class transformers.FlaxViTForImageClassification
(
config: ViTConfig
input_shape = None
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (ViTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified, all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
ViT Model transformer with an image classification head on top (a linear layer on top of the final hidden state of
the [CLS] token) e.g. for ImageNet.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
pixel_values
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(torch.FloatTensor)
Returns
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.vit.configuration_vit.ViTConfig'>) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxViTPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, FlaxViTForImageClassification
from PIL import Image
import jax
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = FlaxViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
inputs = image_processor(images=image, return_tensors="np")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = jax.numpy.argmax(logits, axis=-1)
print("Predicted class:", model.config.id2label[predicted_class_idx.item()])
PLBart
DISCLAIMER: If you see something strange, file a GitHub Issue and assign
@gchhablani.
Overview of PLBart
The PLBART model was proposed in Unified Pre-training for Program Understanding and Generation by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang.
This is a BART-like model which can be used to perform code summarization, code generation, and code translation tasks. The pre-trained model plbart-base has been trained using a multilingual denoising task
on Java, Python and English.
According to the abstract:
Code summarization and generation empower conversion between programming language (PL) and natural language (NL),
while code translation avails the migration of legacy code from one PL to another. This paper introduces PLBART,
a sequence-to-sequence model capable of performing a broad spectrum of program and language understanding and generation tasks.
PLBART is pre-trained on an extensive collection of Java and Python functions and associated NL text via denoising autoencoding.
Experiments on code summarization in the English language, code generation, and code translation in seven programming languages
show that PLBART outperforms or rivals state-of-the-art models. Moreover, experiments on discriminative tasks, e.g., program
repair, clone detection, and vulnerable code detection, demonstrate PLBART’s effectiveness in program understanding.
Furthermore, analysis reveals that PLBART learns program syntax, style (e.g., identifier naming convention), logical flow
(e.g., if block inside an else block is equivalent to else if block) that are crucial to program semantics and thus excels
even with limited annotations.
This model was contributed by gchhablani. The authors' code can be found here.
Training of PLBart
PLBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for code-to-text, text-to-code, and code-to-code tasks. Because the
model is multilingual, it expects sequences in a particular format: a special language id token is added to both the
source and target text. The source text format is X [eos, src_lang_code] where X is the source text. The
target text format is [tgt_lang_code] X [eos]. bos is never used.
However, in some fine-tuning cases no language token is provided when a single language is used. Please refer to the paper to learn more about this.
In cases where the language code is needed, the regular call() will encode the source text format
when you pass texts as the first argument or with the keyword argument text, and will encode the target text format if
it's passed with the text_target keyword argument.
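As a quick way to see this format in practice, the snippet below (a sketch added here, not part of the original docs; the exact surface form of the language-code tokens depends on the checkpoint) decodes the ids produced for a source/target pair:
from transformers import PLBartTokenizer

tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-python-en_XX", src_lang="python", tgt_lang="en_XX")
enc = tokenizer("def add(a,b):NEW_LINE_INDENTreturn a+b", text_target="Adds two numbers.")
# input_ids follow the source format, labels follow the target format described above
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
print(tokenizer.convert_ids_to_tokens(enc["labels"]))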
Supervised training
from transformers import PLBartForConditionalGeneration, PLBartTokenizer
# the source here is Python code and the target is English, so src_lang/tgt_lang are set accordingly
tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-base", src_lang="python", tgt_lang="en_XX")
example_python_phrase = "def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])"
expected_translation_english = "Returns the maximum value of a b c."
inputs = tokenizer(example_python_phrase, text_target=expected_translation_english, return_tensors="pt")
model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-base")
model(**inputs)
Generation
While generating the target text, set the decoder_start_token_id to the target language id. The following
example shows how to translate Python to English using the uclanlp/plbart-python-en_XX model.
from transformers import PLBartForConditionalGeneration, PLBartTokenizer
tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-python-en_XX", src_lang="python", tgt_lang="en_XX")
example_python_phrase = "def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])"
inputs = tokenizer(example_python_phrase, return_tensors="pt")
model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-python-en_XX")
translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"])
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
"Returns the maximum value of a b c."
Documentation resources
Text classification task guide
Causal language modeling task guide
Translation task guide
Summarization task guide
PLBartConfig
class transformers.PLBartConfig
(
vocab_size = 50005
max_position_embeddings = 1024
encoder_layers = 6
encoder_ffn_dim = 3072
encoder_attention_heads = 12
decoder_layers = 6
decoder_ffn_dim = 3072
decoder_attention_heads = 12
encoder_layerdrop = 0.0
decoder_layerdrop = 0.0
use_cache = True
is_encoder_decoder = True
activation_function = 'gelu'
d_model = 768
dropout = 0.1
attention_dropout = 0.1
activation_dropout = 0.0
init_std = 0.02
classifier_dropout = 0.0
scale_embedding = True
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
forced_eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50005) —
Vocabulary size of the PLBART model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling PLBartModel.
d_model (int, optional, defaults to 768) —
Dimensionality of the layers and the pooler layer.
encoder_layers (int, optional, defaults to 6) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 6) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the decoder.
encoder_ffn_dim (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the encoder.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
classifier_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the classifier.
max_position_embeddings (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
scale_embedding (bool, optional, defaults to True) —
Scale embeddings by dividing by sqrt(d_model).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
forced_eos_token_id (int, optional, defaults to 2) —
The id of the token to force as the last generated token when max_length is reached. Usually set to
eos_token_id.
This is the configuration class to store the configuration of a PLBartModel. It is used to instantiate a
PLBART model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the PLBART
uclanlp/plbart-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import PLBartConfig, PLBartModel
# Initializing a PLBART uclanlp/plbart-base style configuration
configuration = PLBartConfig()
# Initializing a model (with random weights) from the uclanlp/plbart-base style configuration
model = PLBartModel(configuration)
# Accessing the model configuration
configuration = model.config
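Because every argument above has a default, you can also override only a few of them to build a smaller, randomly initialized model; the sizes below are arbitrary and only meant as a sketch:
from transformers import PLBartConfig, PLBartModel

# A deliberately small configuration, useful for quick tests
small_config = PLBartConfig(
    d_model=256,
    encoder_layers=2,
    decoder_layers=2,
    encoder_attention_heads=4,
    decoder_attention_heads=4,
    encoder_ffn_dim=512,
    decoder_ffn_dim=512,
)
small_model = PLBartModel(small_config)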
PLBartTokenizer
class transformers.PLBartTokenizer
(
vocab_file
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
language_codes = 'base'
tokenizer_file = None
src_lang = None
tgt_lang = None
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
additional_special_tokens = None
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
src_lang (str, optional) —
A string representing the source language.
tgt_lang (str, optional) —
A string representing the target language.
bos_token (str, optional, defaults to "<s>") —
The start of sequence token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The cls token, which is a special token used as the first token for all tasks.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masking tasks. This
is only used in the "base" tokenizer type. For "multi" tokenizer, masking is never done for the
downstream tasks.
language_codes (str, optional, defaults to "base") —
What language codes to use. Should be one of "base" or "multi".
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Construct a PLBART tokenizer.
Adapted from RobertaTokenizer and XLNetTokenizer. Based on
SentencePiece.
The tokenization method is <tokens> <eos> <language code> for source language documents, and <language code> <tokens> <eos> for target language documents.
Examples:
from transformers import PLBartTokenizer
tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-python-en_XX", src_lang="python", tgt_lang="en_XX")
example_python_phrase = "def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])"
expected_translation_english = "Returns the maximum value of a b c."
inputs = tokenizer(example_python_phrase, text_target=expected_translation_english, return_tensors="pt")
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A PLBART sequence has the following format, where X represents the sequence:
input_ids (for encoder): X [eos, src_lang_code]
decoder_input_ids (for decoder): X [eos, tgt_lang_code]
BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a
separator.
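For illustration (a sketch assuming the uclanlp/plbart-python-en_XX checkpoint), calling the method directly shows that only a suffix of special tokens is appended to the token ids:
from transformers import PLBartTokenizer

tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-python-en_XX", src_lang="python", tgt_lang="en_XX")
token_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("def add(a,b):NEW_LINE_INDENTreturn a+b"))
with_special = tokenizer.build_inputs_with_special_tokens(token_ids)
# the appended ids are the eos token followed by the source language code, as described above
print(with_special[len(token_ids):])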
PLBartModel
class transformers.PLBartModel
(
config: PLBartConfig
)
Parameters
config (PLBartConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare PLBART Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.LongTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer or PLBartMultiTokenizer depending on the checkpoint.
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer or PLBartMultiTokenizer depending on the checkpoint.
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are decoder input IDs?
PLBart uses a specific language id token as the starting token for decoder_input_ids generation that
varies according to source and target language, e.g. 50003 for en_XX, and 50001 for java. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple
having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional
tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally,
instead of passing input_ids you can choose to directly pass an embedded representation. This is useful
if you want more control over how to convert input_ids indices into associated vectors than the model’s
internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PLBartConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The PLBartModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, PLBartModel
import torch
tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")
model = PLBartModel.from_pretrained("uclanlp/plbart-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
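The encoder_outputs argument documented above lets you run the encoder once and reuse its output across decoder calls. A rough sketch (the decoder start ids here are arbitrary and purely illustrative):
from transformers import AutoTokenizer, PLBartModel

tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")
model = PLBartModel.from_pretrained("uclanlp/plbart-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

# run the encoder once ...
encoder_outputs = model.get_encoder()(**inputs)
# ... then pass it back in, so only the decoder runs on this call
outputs = model(
    encoder_outputs=encoder_outputs,
    attention_mask=inputs["attention_mask"],
    decoder_input_ids=inputs["input_ids"][:, :1],
)
last_hidden_states = outputs.last_hidden_state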
PLBartForConditionalGeneration
class transformers.PLBartForConditionalGeneration
(
config: PLBartConfig
)
Parameters
config (PLBartConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The PLBART Model with a language modeling head. Can be used for code-to-text, text-to-code and code-to-code.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.LongTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer or PLBartMultiTokenizer depending on the checkpoint.
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer or PLBartMultiTokenizer depending on the checkpoint.
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are decoder input IDs?
PLBart uses a specific language id token as the starting token for decoder_input_ids generation that
varies according to source and target language, e.g. 50003 for en_XX, and 50001 for java. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple
having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional
tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally,
instead of passing input_ids you can choose to directly pass an embedded representation. This is useful
if you want more control over how to convert input_ids indices into associated vectors than the model’s
internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PLBartConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The PLBartForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Mask-filling example:
from transformers import AutoTokenizer, PLBartForConditionalGeneration
model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-base")
tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")
# en_XX is the language symbol id <LID> for English
TXT = "<s> Is 0 the <mask> Fibonacci number ? </s> en_XX"
input_ids = tokenizer([TXT], add_special_tokens=False, return_tensors="pt").input_ids
logits = model(input_ids).logits
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, masked_index].softmax(dim=0)
values, predictions = probs.topk(5)
tokenizer.decode(predictions).split()
['first', 'same', 'highest', 'result', 'number']
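The labels argument documented above gives you the language-modeling loss directly. A minimal training-style sketch (added here as an illustration, with arbitrary example texts):
from transformers import AutoTokenizer, PLBartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")
model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-base")
# text_target fills in "labels"; the model builds decoder_input_ids from them internally
batch = tokenizer("def add(a,b):NEW_LINE_INDENTreturn a+b", text_target="Adds two numbers.", return_tensors="pt")
outputs = model(**batch)
print(outputs.loss, outputs.logits.shape)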
PLBartForSequenceClassification
class transformers.PLBartForSequenceClassification
(
config: PLBartConfig
**kwargs
)
Parameters
config (PLBartConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
PLBart model with a sequence classification/head on top (a linear layer on top of the pooled output) e.g. for code
classification.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer or PLBartMultiTokenizer depending on the checkpoint.
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer or PLBartMultiTokenizer depending on the checkpoint.
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are decoder input IDs?
PLBart uses a specific language id token as the starting token for decoder_input_ids generation that
varies according to source and target language, e.g. 50003 for en_XX, and 50001 for java. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple
having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional
tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally,
instead of passing input_ids you can choose to directly pass an embedded representation. This is useful
if you want more control over how to convert input_ids indices into associated vectors than the model’s
internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PLBartConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when label is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The PLBartForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, PLBartForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")
model = PLBartForSequenceClassification.from_pretrained("uclanlp/plbart-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = PLBartForSequenceClassification.from_pretrained("uclanlp/plbart-base", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, PLBartForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")
model = PLBartForSequenceClassification.from_pretrained("uclanlp/plbart-base", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = PLBartForSequenceClassification.from_pretrained(
    "uclanlp/plbart-base", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
PLBartForCausalLM
class transformers.PLBartForCausalLM
(
config
)
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of
shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of
shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional
tensors are only required when the model is used as a decoder in a Sequence to Sequence model.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those
that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of
all decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding
(see past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PLBartConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
from transformers import AutoTokenizer, PLBartForCausalLM

tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")
model = PLBartForCausalLM.from_pretrained("uclanlp/plbart-base", add_cross_attention=False)
assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
expected_shape = [1, inputs.input_ids.shape[-1], model.config.vocab_size]
assert list(logits.shape) == expected_shape
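The caching behaviour described for past_key_values can be exercised directly. A minimal sketch (not part of the original documentation) that runs one full pass over a prompt and then feeds only the newly predicted token back in together with the cache:

import torch
from transformers import AutoTokenizer, PLBartForCausalLM

tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")
model = PLBartForCausalLM.from_pretrained("uclanlp/plbart-base", add_cross_attention=False)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    # First pass over the full prompt populates the key/value cache.
    outputs = model(**inputs, use_cache=True)
    past_key_values = outputs.past_key_values
    next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
    # With a cache, only the last decoder_input_ids of shape (batch_size, 1) need to be passed.
    outputs = model(input_ids=next_token, past_key_values=past_key_values, use_cache=True)

In practice you rarely write this loop yourself; model.generate() handles the cache for you.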
The quickest way to get started with the Perceiver is by checking the tutorial notebooks.
Refer to the blog post if you want to fully understand how the model works and
is implemented in the library. Note that the models available in the library only showcase some examples of what you can do
with the Perceiver. There are many more use cases, including question answering, named-entity recognition, object detection,
audio classification, video classification, etc.
Note:
Perceiver does not work with torch.nn.DataParallel due to a bug in PyTorch, see issue #36035
Documentation resources
Text classification task guide
Masked language modeling task guide
Image classification task guide
Perceiver specific outputs
class transformers.models.perceiver.modeling_perceiver.PerceiverModelOutput
(
logits: FloatTensor = None
last_hidden_state: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
logits (torch.FloatTensor of shape (batch_size, num_labels)) —
Classification (or regression if config.num_labels==1) scores (before SoftMax).
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
Base class for Perceiver base model’s outputs, with potential hidden states, attentions and cross-attentions.
class transformers.models.perceiver.modeling_perceiver.PerceiverDecoderOutput
(
logits: FloatTensor = None
cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
logits (torch.FloatTensor of shape (batch_size, num_labels)) —
Output of the basic decoder.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
Base class for Perceiver decoder outputs, with potential cross-attentions.
class transformers.models.perceiver.modeling_perceiver.PerceiverMaskedLMOutput
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, num_latents, num_latents). Attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
Base class for Perceiver’s masked language model outputs.
class transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) —
Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
Base class for Perceiver’s outputs of sequence/image classification models, optical flow and multimodal
autoencoding.
PerceiverConfig
class transformers.PerceiverConfig
(
num_latents = 256
d_latents = 1280
d_model = 768
num_blocks = 1
num_self_attends_per_block = 26
num_self_attention_heads = 8
num_cross_attention_heads = 8
qk_channels = None
v_channels = None
cross_attention_shape_for_attention = 'kv'
self_attention_widening_factor = 1
cross_attention_widening_factor = 1
hidden_act = 'gelu'
attention_probs_dropout_prob = 0.1
initializer_range = 0.02
layer_norm_eps = 1e-12
use_query_residual = True
vocab_size = 262
max_position_embeddings = 2048
image_size = 56
train_size = [368, 496]
num_frames = 16
audio_samples_per_frame = 1920
samples_per_patch = 16
output_shape = [1, 16, 224, 224]
**kwargs
)
Parameters
num_latents (int, optional, defaults to 256) —
The number of latents.
d_latents (int, optional, defaults to 1280) —
Dimension of the latent embeddings.
d_model (int, optional, defaults to 768) —
Dimension of the inputs. Should only be provided in case [PerceiverTextPreprocessor] is used or no
preprocessor is provided.
num_blocks (int, optional, defaults to 1) —
Number of blocks in the Transformer encoder.
num_self_attends_per_block (int, optional, defaults to 26) —
The number of self-attention layers per block.
num_self_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each self-attention layer in the Transformer encoder.
num_cross_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each cross-attention layer in the Transformer encoder.
qk_channels (int, optional) —
Dimension to project the queries + keys before applying attention in the cross-attention and self-attention
layers of the encoder. Will default to preserving the dimension of the queries if not specified.
v_channels (int, optional) —
Dimension to project the values before applying attention in the cross-attention and self-attention layers
of the encoder. Will default to preserving the dimension of the queries if not specified.
cross_attention_shape_for_attention (str, optional, defaults to 'kv') —
Dimension to use when downsampling the queries and keys in the cross-attention layer of the encoder.
self_attention_widening_factor (int, optional, defaults to 1) —
Widening factor of the feed-forward layer in the self-attention layers of the Transformer encoder.
cross_attention_widening_factor (int, optional, defaults to 1) —
Widening factor of the feed-forward layer in the cross-attention layer of the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
use_query_residual (bool, optional, defaults to True) —
Whether to add a query residual in the cross-attention layer of the encoder.
vocab_size (int, optional, defaults to 262) —
Vocabulary size for the masked language modeling model.
max_position_embeddings (int, optional, defaults to 2048) —
The maximum sequence length that the masked language modeling model might ever be used with. Typically set
this to something large just in case (e.g., 512 or 1024 or 2048).
image_size (int, optional, defaults to 56) —
Size of the images after preprocessing, for PerceiverForImageClassificationLearned.
train_size (List[int], optional, defaults to [368, 496]) —
Training size of the images for the optical flow model.
num_frames (int, optional, defaults to 16) —
Number of video frames used for the multimodal autoencoding model.
audio_samples_per_frame (int, optional, defaults to 1920) —
Number of audio samples per frame for the multimodal autoencoding model.
samples_per_patch (int, optional, defaults to 16) —
Number of audio samples per patch when preprocessing the audio for the multimodal autoencoding model.
output_shape (List[int], optional, defaults to [1, 16, 224, 224]) —
Shape of the output (batch_size, num_frames, height, width) for the video decoder queries of the multimodal
autoencoding model. This excludes the channel dimension.
This is the configuration class to store the configuration of a PerceiverModel. It is used to instantiate a
Perceiver model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Perceiver
deepmind/language-perceiver architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import PerceiverModel, PerceiverConfig
# Initializing a Perceiver deepmind/language-perceiver style configuration
configuration = PerceiverConfig()
# Initializing a model from the deepmind/language-perceiver style configuration
model = PerceiverModel(configuration)
# Accessing the model configuration
configuration = model.config
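Any of the defaults listed above can be overridden when constructing the configuration. A small sketch with arbitrary (not recommended) values, just to illustrate the mechanism:

from transformers import PerceiverConfig, PerceiverModel

# Shrink the latent array and the depth per block; the values here are arbitrary examples.
small_config = PerceiverConfig(num_latents=64, d_latents=512, num_self_attends_per_block=8)
small_model = PerceiverModel(small_config)
print(small_model.config.num_latents)  # 64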
PerceiverTokenizer
class transformers.PerceiverTokenizer
(
pad_token = '[PAD]'
bos_token = '[BOS]'
eos_token = '[EOS]'
mask_token = '[MASK]'
cls_token = '[CLS]'
sep_token = '[SEP]'
model_max_length = 2048
**kwargs
)
Parameters
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
bos_token (str, optional, defaults to "[BOS]") —
The BOS token (reserved in the vocab, but not actually used).
eos_token (str, optional, defaults to "[EOS]") —
The end of sequence token (reserved in the vocab, but not actually used).
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
mask_token (str, optional, defaults to "[MASK]") —
The MASK token, useful for masked language modeling.
cls_token (str, optional, defaults to "[CLS]") —
The CLS token (reserved in the vocab, but not actually used).
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from two sequences.
Construct a Perceiver tokenizer. The Perceiver simply uses raw UTF-8 bytes as tokens, so no vocabulary file is needed.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
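Because tokenization is byte-level, there is nothing to learn or load beyond the special tokens. A short sketch (not from the original page) of what an encoding looks like, using the deepmind/language-perceiver checkpoint mentioned above:

from transformers import PerceiverTokenizer

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
encoding = tokenizer("hello")
# Each UTF-8 byte of the input becomes one id (shifted to make room for the special tokens),
# and [CLS]/[SEP] are added around the sequence.
print(encoding.input_ids)
print(tokenizer.decode(encoding.input_ids))  # the special tokens are rendered back as [CLS] ... [SEP]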
__call__
(
text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None
text_pair: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None
text_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None
text_pair_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
is_split_into_words: bool = False
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
**kwargs
)
→
BatchEncoding
Parameters
text (str, List[str], List[List[str]], optional) —
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair (str, List[str], List[List[str]], optional) —
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_target (str, List[str], List[List[str]], optional) —
The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a
list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized),
you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair_target (str, List[str], List[List[str]], optional) —
The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a
list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized),
you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
add_special_tokens (bool, optional, defaults to True) —
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) —
If set to a number along with max_length, the overflowing tokens returned when
return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) —
Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the
tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace)
which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value. Requires padding to be activated.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5 (Volta).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) —
Whether to return token type IDs. If left to the default, will return the token type IDs according to
the specific tokenizer’s default, defined by the return_outputs attribute.
What are token type IDs?
return_attention_mask (bool, optional) —
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_overflowing_tokens (bool, optional, defaults to False) —
Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch
of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead
of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) —
Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) —
Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast, if using
Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) —
Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) —
Whether or not to print more information and warnings.
**kwargs — passed to the self.tokenize() method
Returns
BatchEncoding
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
What are input IDs?
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or
if “token_type_ids” is in self.model_input_names).
What are token type IDs?
attention_mask — List of indices specifying which tokens should be attended to by the model (when
return_attention_mask=True or if “attention_mask” is in self.model_input_names).
What are attention masks?
overflowing_tokens — List of overflowing tokens sequences (when a max_length is specified and
return_overflowing_tokens=True).
num_truncated_tokens — Number of tokens truncated (when a max_length is specified and
return_overflowing_tokens=True).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying
regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length — The length of the inputs (when return_length=True)
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences.
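For example, the padding and truncation options documented above can be combined like this (a usage sketch, not from the original page):

from transformers import PerceiverTokenizer

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
batch = tokenizer(
    ["a short example", "a somewhat longer example sentence"],
    padding="longest",   # pad to the longest sequence in the batch
    truncation=True,     # truncate to max_length if necessary
    max_length=32,
    return_tensors="pt",
)
print(batch.input_ids.shape)       # (2, length of the longest sequence, capped at 32)
print(batch.attention_mask.shape)  # same shape; 0 marks the padded positions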
PerceiverFeatureExtractor
class transformers.PerceiverFeatureExtractor
(
*args
**kwargs
)
__call__
(
images
**kwargs
)
Preprocess an image or a batch of images.
PerceiverImageProcessor
class transformers.PerceiverImageProcessor
(
do_center_crop: bool = True
crop_size: typing.Dict[str, int] = None
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BICUBIC: 3>
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
**kwargs
)
Parameters
do_center_crop (bool, optional, defaults to True) —
Whether or not to center crop the image. If the input size is smaller than crop_size along any edge, the
image will be padded with zeros and then center cropped. Can be overridden by the do_center_crop
parameter in the preprocess method.
crop_size (Dict[str, int], optional, defaults to {"height": 256, "width": 256}) —
Desired output size when applying center-cropping. Can be overridden by the crop_size parameter in the
preprocess method.
do_resize (bool, optional, defaults to True) —
Whether to resize the image to (size["height"], size["width"]). Can be overridden by the do_resize
parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) —
Size of the image after resizing. Can be overridden by the size parameter in the preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) —
Defines the resampling filter to use if resizing the image. Can be overridden by the resample parameter
in the preprocess method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Defines the scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter
in the preprocess method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
Constructs a Perceiver image processor.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_center_crop: typing.Optional[bool] = None
crop_size: typing.Union[typing.Dict[str, int], NoneType] = None
do_resize: typing.Optional[bool] = None
size: typing.Union[typing.Dict[str, int], NoneType] = None
resample: Resampling = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image to crop_size.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Desired output size after applying the center crop.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing.
resample (int, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only
has an effect if do_resize is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image.
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess an image or batch of images.
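A short usage sketch (the COCO image URL is the same one used in the PerceiverModel example further down this page):

import requests
from PIL import Image
from transformers import PerceiverImageProcessor

image_processor = PerceiverImageProcessor()
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# With the defaults (center crop to 256, resize to 224, rescale, normalize):
pixel_values = image_processor.preprocess(image, return_tensors="pt")["pixel_values"]
print(pixel_values.shape)  # torch.Size([1, 3, 224, 224])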
PerceiverTextPreprocessor
class transformers.models.perceiver.modeling_perceiver.PerceiverTextPreprocessor
(
config: PerceiverConfig
)
Parameters
config (PerceiverConfig) —
Model configuration.
Text preprocessing for Perceiver Encoder. Can be used to embed inputs and add positional encodings.
The dimensionality of the embeddings is determined by the d_model attribute of the configuration.
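A minimal sketch of what the preprocessor contains (shown for illustration; the printed module structure follows the current modeling code):

from transformers import PerceiverConfig
from transformers.models.perceiver.modeling_perceiver import PerceiverTextPreprocessor

config = PerceiverConfig()  # d_model defaults to 768, vocab_size to 262
preprocessor = PerceiverTextPreprocessor(config)
# A token embedding table of shape (vocab_size, d_model) plus learned position embeddings.
print(preprocessor)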
PerceiverImagePreprocessor
class transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor
(
config
prep_type = 'conv'
spatial_downsample: int = 4
temporal_downsample: int = 1
position_encoding_type: str = 'fourier'
in_channels: int = 3
out_channels: int = 64
conv_after_patching: bool = False
conv_after_patching_in_channels: int = 54
conv2d_use_batchnorm: bool = True
concat_or_add_pos: str = 'concat'
project_pos_dim: int = -1
**position_encoding_kwargs
)
Parameters
config ([PerceiverConfig]) —
Model configuration.
prep_type (str, optional, defaults to "conv") —
Preprocessing type. Can be “conv1x1”, “conv”, “patches”, “pixels”.
spatial_downsample (int, optional, defaults to 4) —
Spatial downsampling factor.
temporal_downsample (int, optional, defaults to 1) —
Temporal downsampling factor (only relevant in case a time dimension is present).
position_encoding_type (str, optional, defaults to "fourier") —
Position encoding type. Can be “fourier” or “trainable”.
in_channels (int, optional, defaults to 3) —
Number of channels in the input.
out_channels (int, optional, defaults to 64) —
Number of channels in the output.
conv_after_patching (bool, optional, defaults to False) —
Whether to apply a convolutional layer after patching.
conv_after_patching_in_channels (int, optional, defaults to 54) —
Number of channels in the input of the convolutional layer after patching.
conv2d_use_batchnorm (bool, optional, defaults to True) —
Whether to use batch normalization in the convolutional layer.
concat_or_add_pos (str, optional, defaults to "concat") —
How to concatenate the position encoding to the input. Can be “concat” or “add”.
project_pos_dim (int, optional, defaults to -1) —
Dimension of the position encoding to project to. If -1, no projection is applied.
**position_encoding_kwargs (Dict, optional) —
Keyword arguments for the position encoding.
Image preprocessing for Perceiver Encoder.
Note: the out_channels argument refers to the output channels of a convolutional layer, if prep_type is set to
“conv1x1” or “conv”. If one adds absolute position embeddings, one must make sure the num_channels of the
position encoding kwargs are set equal to the out_channels.
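A construction sketch of the constraint in the note above (the channel sizes are arbitrary examples): when adding rather than concatenating the position encoding, its num_channels must match out_channels:

from transformers import PerceiverConfig
from transformers.models.perceiver.modeling_perceiver import PerceiverImagePreprocessor

config = PerceiverConfig()
preprocessor = PerceiverImagePreprocessor(
    config,
    prep_type="conv1x1",
    spatial_downsample=1,
    out_channels=256,
    position_encoding_type="trainable",
    concat_or_add_pos="add",  # adding requires the position encoding channels to equal out_channels
    trainable_position_encoding_kwargs=dict(num_channels=256, index_dims=config.image_size**2),
)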
PerceiverOneHotPreprocessor
class transformers.models.perceiver.modeling_perceiver.PerceiverOneHotPreprocessor
(
config: PerceiverConfig
)
Parameters
config (PerceiverConfig) —
Model configuration.
One-hot preprocessor for Perceiver Encoder. Can be used to add a dummy index dimension to the input.
PerceiverAudioPreprocessor
class transformers.models.perceiver.modeling_perceiver.PerceiverAudioPreprocessor
(
config
prep_type: str = 'patches'
samples_per_patch: int = 96
position_encoding_type: str = 'fourier'
concat_or_add_pos: str = 'concat'
out_channels = 64
project_pos_dim = -1
**position_encoding_kwargs
)
Parameters
config ([PerceiverConfig]) —
Model configuration.
prep_type (str, optional, defaults to "patches") —
Preprocessor type to use. Only “patches” is supported.
samples_per_patch (int, optional, defaults to 96) —
Number of samples per patch.
position_encoding_type (str, optional, defaults to "fourier") —
Type of position encoding to use. Can be “trainable” or “fourier”.
concat_or_add_pos (str, optional, defaults to "concat") —
How to concatenate the position encoding to the input. Can be “concat” or “add”.
out_channels (int, optional, defaults to 64) —
Number of channels in the output.
project_pos_dim (int, optional, defaults to -1) —
Dimension of the position encoding to project to. If -1, no projection is applied.
**position_encoding_kwargs (Dict, optional) —
Keyword arguments for the position encoding.
Audio preprocessing for Perceiver Encoder.
PerceiverMultimodalPreprocessor
class transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor
(
modalities: typing.Mapping[str, typing.Callable[..., typing.Tuple[torch.Tensor, typing.Optional[torch.Tensor], torch.Tensor]]]
mask_probs: typing.Union[typing.Mapping[str, float], NoneType] = None
min_padding_size: int = 2
)
Parameters
modalities (Mapping[str, PreprocessorType]) —
Dict mapping modality name to preprocessor.
mask_probs (Dict[str, float]) —
Dict mapping modality name to masking probability of that modality.
min_padding_size (int, optional, defaults to 2) —
The minimum padding size for all modalities. The final output will have num_channels equal to the maximum
channels across all modalities plus min_padding_size.
Multimodal preprocessing for Perceiver Encoder.
Inputs for each modality are preprocessed, then padded with trainable position embeddings to have the same number
of channels.
PerceiverProjectionDecoder
class transformers.models.perceiver.modeling_perceiver.PerceiverProjectionDecoder
(
config
)
Parameters
config (PerceiverConfig) —
Model configuration.
Baseline projection decoder (no cross-attention).
PerceiverBasicDecoder
class transformers.models.perceiver.modeling_perceiver.PerceiverBasicDecoder
(
config: PerceiverConfig
output_num_channels: int
position_encoding_type: typing.Optional[str] = 'trainable'
output_index_dims: typing.Optional[int] = None
num_channels: typing.Optional[int] = 128
subsampled_index_dims: typing.Optional[int] = None
qk_channels: typing.Optional[int] = None
v_channels: typing.Optional[int] = None
num_heads: typing.Optional[int] = 1
widening_factor: typing.Optional[int] = 1
use_query_residual: typing.Optional[bool] = False
concat_preprocessed_input: typing.Optional[bool] = False
final_project: typing.Optional[bool] = True
position_encoding_only: typing.Optional[bool] = False
**position_encoding_kwargs
)
Parameters
config ([PerceiverConfig]) —
Model configuration.
output_num_channels (int, optional) —
The number of channels in the output. Will only be used in case final_project is set to True.
position_encoding_type (str, optional, defaults to “trainable”) —
The type of position encoding to use. Can be either “trainable”, “fourier”, or “none”.
output_index_dims (int, optional) —
The number of dimensions of the output queries. Ignored if ‘position_encoding_type’ == ‘none’.
num_channels (int, optional, defaults to 128) —
The number of channels of the decoder queries. Ignored if ‘position_encoding_type’ == ‘none’.
qk_channels (int, optional) —
The number of channels of the queries and keys in the cross-attention layer.
v_channels (int, optional) —
The number of channels of the values in the cross-attention layer.
num_heads (int, optional, defaults to 1) —
The number of attention heads in the cross-attention layer.
widening_factor (int, optional, defaults to 1) —
The widening factor of the cross-attention layer.
use_query_residual (bool, optional, defaults to False) —
Whether to use a residual connection between the query and the output of the cross-attention layer.
concat_preprocessed_input (bool, optional, defaults to False) —
Whether to concatenate the preprocessed input to the query.
final_project (bool, optional, defaults to True) —
Whether to project the output of the cross-attention layer to a target dimension.
position_encoding_only (bool, optional, defaults to False) —
Whether to only use this class to define output queries.
Cross-attention-based decoder. This class can be used to decode the final hidden states of the latents using a
cross-attention operation, in which the latents produce keys and values.
The shape of the output of this class depends on how one defines the output queries (also called decoder queries).
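For instance, a decoder whose output queries are a single trainable vector can be built as follows (a sketch with arbitrary channel sizes, mirroring how PerceiverClassificationDecoder configures its underlying PerceiverBasicDecoder):

from transformers import PerceiverConfig
from transformers.models.perceiver.modeling_perceiver import PerceiverBasicDecoder

config = PerceiverConfig()
decoder = PerceiverBasicDecoder(
    config,
    output_num_channels=10,  # channels of the decoded output after the final projection
    num_channels=256,        # channels of the decoder queries
    trainable_position_encoding_kwargs=dict(num_channels=256, index_dims=1),
)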
PerceiverClassificationDecoder
class transformers.models.perceiver.modeling_perceiver.PerceiverClassificationDecoder
(
config
**decoder_kwargs
)
Parameters
config (PerceiverConfig) —
Model configuration.
Cross-attention based classification decoder. Light-weight wrapper of PerceiverBasicDecoder for logit output.
Will turn the output of the Perceiver encoder which is of shape (batch_size, num_latents, d_latents) to a tensor of
shape (batch_size, num_labels). The queries are of shape (batch_size, 1, num_labels).
PerceiverOpticalFlowDecoder
class transformers.models.perceiver.modeling_perceiver.PerceiverOpticalFlowDecoder
(
config
output_image_shape
output_num_channels = 2
rescale_factor = 100.0
**decoder_kwargs
)
Cross-attention based optical flow decoder.
PerceiverBasicVideoAutoencodingDecoder
class transformers.models.perceiver.modeling_perceiver.PerceiverBasicVideoAutoencodingDecoder
(
config: PerceiverConfig
output_shape: typing.List[int]
position_encoding_type: str
**decoder_kwargs
)
Parameters
config ([PerceiverConfig]) —
Model configuration.
output_shape (List[int]) —
Shape of the output as (batch_size, num_frames, height, width), excluding the channel dimension.
position_encoding_type (str) —
The type of position encoding to use. Can be either “trainable”, “fourier”, or “none”.
Cross-attention based video-autoencoding decoder. Light-weight wrapper of [PerceiverBasicDecoder] with video
reshaping logic.
PerceiverMultimodalDecoder
class transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder
(
config: PerceiverConfig
modalities: typing.Dict[str, transformers.models.perceiver.modeling_perceiver.PerceiverAbstractDecoder]
num_outputs: int
output_num_channels: int
min_padding_size: typing.Optional[int] = 2
subsampled_index_dims: typing.Union[typing.Dict[str, transformers.models.perceiver.modeling_perceiver.PerceiverAbstractDecoder], NoneType] = None
**decoder_kwargs
)
Parameters
config ([PerceiverConfig]) —
Model configuration.
modalities (Dict[str, PerceiverAbstractDecoder]) —
Dictionary mapping modality name to the decoder of that modality.
num_outputs (int) —
The number of outputs of the decoder.
output_num_channels (int) —
The number of channels in the output.
min_padding_size (int, optional, defaults to 2) —
The minimum padding size for all modalities. The final output will have num_channels equal to the maximum
channels across all modalities plus min_padding_size.
subsampled_index_dims (Dict[str, PerceiverAbstractDecoder], optional) —
Dictionary mapping modality name to the subsampled index dimensions to use for the decoder query of that
modality.
Multimodal decoding by composing uni-modal decoders. The modalities argument of the constructor is a dictionary
mapping modality name to the decoder of that modality. That decoder will be used to construct queries for that
modality. Modality-specific queries are padded with trainable modality-specific parameters, after which they are
concatenated along the time dimension.
Next, there is a shared cross attention operation across all modalities.
PerceiverProjectionPostprocessor
class transformers.models.perceiver.modeling_perceiver.PerceiverProjectionPostprocessor
(
in_channels: int
out_channels: int
)
Parameters
in_channels (int) —
Number of channels in the input.
out_channels (int) —
Number of channels in the output.
Projection postprocessing for Perceiver. Can be used to project the channels of the decoder output to a lower
dimension.
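A minimal sketch with made-up shapes, projecting 512-channel decoder outputs down to 64 channels:

import torch
from transformers.models.perceiver.modeling_perceiver import PerceiverProjectionPostprocessor

postprocessor = PerceiverProjectionPostprocessor(in_channels=512, out_channels=64)
decoder_outputs = torch.randn(1, 100, 512)  # (batch_size, num_outputs, in_channels)
projected = postprocessor(decoder_outputs)
print(projected.shape)  # torch.Size([1, 100, 64])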
PerceiverAudioPostprocessor
class transformers.models.perceiver.modeling_perceiver.PerceiverAudioPostprocessor
(
config: PerceiverConfig
in_channels: int
postproc_type: str = 'patches'
)
Parameters
config ([PerceiverConfig]) —
Model configuration.
in_channels (int) —
Number of channels in the input.
postproc_type (str, optional, defaults to "patches") —
Postprocessor type to use. Currently, only “patches” is supported.
Audio postprocessing for Perceiver. Can be used to convert the decoder output to audio features.
PerceiverClassificationPostprocessor
class transformers.models.perceiver.modeling_perceiver.PerceiverClassificationPostprocessor
(
config: PerceiverConfig
in_channels: int
)
Parameters
config ([PerceiverConfig]) —
Model configuration.
in_channels (int) —
Number of channels in the input.
Classification postprocessing for Perceiver. Can be used to convert the decoder output to classification logits.
PerceiverMultimodalPostprocessor
class transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalPostprocessor
(
modalities: typing.Mapping[str, typing.Callable[..., typing.Any]]
input_is_dict: bool = False
)
Parameters
modalities (Mapping[str, PostprocessorType]) —
Dictionary mapping modality name to postprocessor class for that modality.
input_is_dict (bool, optional, defaults to False) —
If True, input is assumed to be dictionary structured, and outputs keep the same dictionary shape. If
False, input is a tensor which is sliced up during postprocessing by modality_sizes.
Multimodal postprocessing for Perceiver. Can be used to combine modality-specific postprocessors into a single
postprocessor.
PerceiverModel
class transformers.PerceiverModel
(
config
decoder = None
input_preprocessor: typing.Callable[..., typing.Tuple[torch.Tensor, typing.Optional[torch.Tensor], torch.Tensor]] = None
output_postprocessor: typing.Callable[..., typing.Any] = None
)
Parameters
config (PerceiverConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
decoder (DecoderType, optional) —
Optional decoder to use to decode the latent representation of the encoder. Examples include
transformers.models.perceiver.modeling_perceiver.PerceiverBasicDecoder,
transformers.models.perceiver.modeling_perceiver.PerceiverClassificationDecoder,
transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder.
input_preprocessor (PreprocessorType, optional) —
Optional input preprocessor to use. Examples include
transformers.models.perceiver.modeling_perceiver.PerceiverImagePreprocessor,
transformers.models.perceiver.modeling_perceiver.PerceiverAudioPreprocessor,
transformers.models.perceiver.modeling_perceiver.PerceiverTextPreprocessor,
transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor.
output_postprocessor (PostprocessorType, optional) —
Optional output postprocessor to use. Examples include
transformers.models.perceiver.modeling_perceiver.PerceiverImagePostprocessor,
transformers.models.perceiver.modeling_perceiver.PerceiverAudioPostprocessor,
transformers.models.perceiver.modeling_perceiver.PerceiverClassificationPostprocessor,
transformers.models.perceiver.modeling_perceiver.PerceiverProjectionPostprocessor,
transformers.models.perceiver.modeling_perceiver.PerceiverMultimodalPostprocessor.
Note that you can define your own decoders, preprocessors and/or postprocessors to fit your use-case.
The Perceiver: a scalable, fully attentional architecture.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
inputs: FloatTensor
attention_mask: typing.Optional[torch.FloatTensor] = None
subsampled_output_points: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None
head_mask: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.perceiver.modeling_perceiver.PerceiverModelOutput or tuple(torch.FloatTensor)
Parameters
inputs (torch.FloatTensor) —
Inputs to the perceiver. Can be anything: images, text, audio, video, etc.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.perceiver.modeling_perceiver.PerceiverModelOutput or tuple(torch.FloatTensor)
A transformers.models.perceiver.modeling_perceiver.PerceiverModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PerceiverConfig) and inputs.
logits (torch.FloatTensor of shape (batch_size, num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
The PerceiverModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import PerceiverConfig, PerceiverTokenizer, PerceiverImageProcessor, PerceiverModel
from transformers.models.perceiver.modeling_perceiver import (
    PerceiverTextPreprocessor,
    PerceiverImagePreprocessor,
    PerceiverClassificationDecoder,
)
import torch
import requests
from PIL import Image

# EXAMPLE 1: using the Perceiver to classify texts
# - we define a TextPreprocessor, which can be used to embed tokens
# - we define a ClassificationDecoder, which can be used to decode the
#   final hidden states of the latents to classification logits
#   using trainable position embeddings
config = PerceiverConfig()
preprocessor = PerceiverTextPreprocessor(config)
decoder = PerceiverClassificationDecoder(
    config,
    num_channels=config.d_latents,
    trainable_position_encoding_kwargs=dict(num_channels=config.d_latents, index_dims=1),
    use_query_residual=True,
)
model = PerceiverModel(config, input_preprocessor=preprocessor, decoder=decoder)

# you can then do a forward pass as follows:
tokenizer = PerceiverTokenizer()
text = "hello world"
inputs = tokenizer(text, return_tensors="pt").input_ids
with torch.no_grad():
    outputs = model(inputs=inputs)
logits = outputs.logits
list(logits.shape)  # [1, 2]

# to train, one can train the model using standard cross-entropy:
criterion = torch.nn.CrossEntropyLoss()
labels = torch.tensor([1])
loss = criterion(logits, labels)

# EXAMPLE 2: using the Perceiver to classify images
# - we define an ImagePreprocessor, which can be used to embed images
config = PerceiverConfig(image_size=224)
preprocessor = PerceiverImagePreprocessor(
    config,
    prep_type="conv1x1",
    spatial_downsample=1,
    out_channels=256,
    position_encoding_type="trainable",
    concat_or_add_pos="concat",
    project_pos_dim=256,
    trainable_position_encoding_kwargs=dict(
        num_channels=256,
        index_dims=config.image_size**2,
    ),
)
model = PerceiverModel(
    config,
    input_preprocessor=preprocessor,
    decoder=PerceiverClassificationDecoder(
        config,
        num_channels=config.d_latents,
        trainable_position_encoding_kwargs=dict(num_channels=config.d_latents, index_dims=1),
        use_query_residual=True,
    ),
)

# you can then do a forward pass as follows:
image_processor = PerceiverImageProcessor()
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt").pixel_values
with torch.no_grad():
    outputs = model(inputs=inputs)
logits = outputs.logits
list(logits.shape)  # [1, 2]

# to train, one can train the model using standard cross-entropy:
criterion = torch.nn.CrossEntropyLoss()
labels = torch.tensor([1])
loss = criterion(logits, labels)
PerceiverForMaskedLM
class transformers.PerceiverForMaskedLM
(
config: PerceiverConfig
)
Parameters
config (PerceiverConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Example use of Perceiver for masked language modeling.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
inputs: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
input_ids: typing.Optional[torch.Tensor] = None
)
→
transformers.models.perceiver.modeling_perceiver.PerceiverMaskedLMOutput or tuple(torch.FloatTensor)
Parameters
inputs (torch.FloatTensor) —
Inputs to the perceiver. Can be anything: images, text, audio, video, etc.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
Returns
transformers.models.perceiver.modeling_perceiver.PerceiverMaskedLMOutput or tuple(torch.FloatTensor)
A transformers.models.perceiver.modeling_perceiver.PerceiverMaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PerceiverConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, num_latents, num_latents). Attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
The PerceiverForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, PerceiverForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForMaskedLM.from_pretrained("deepmind/language-perceiver")
# training
text = "This is an incomplete sentence where some words are missing."
inputs = tokenizer(text, padding="max_length", return_tensors="pt")
# mask " missing."
inputs["input_ids"][0, 52:61] = tokenizer.mask_token_id
labels = tokenizer(text, padding="max_length", return_tensors="pt").input_ids
outputs = model(**inputs, labels=labels)
loss = outputs.loss
round(loss.item(), 2)
19.87
logits = outputs.logits
list(logits.shape)
[1, 2048, 262]
# inference
text = "This is an incomplete sentence where some words are missing."
encoding = tokenizer(text, padding="max_length", return_tensors="pt")
# mask bytes corresponding to " missing.". Note that the model performs much better if the masked span starts with a space.
encoding["input_ids"][0, 52:61] = tokenizer.mask_token_id
# forward pass
with torch.no_grad():
... outputs = model(**encoding)
logits = outputs.logits
list(logits.shape)
[1, 2048, 262]
masked_tokens_predictions = logits[0, 52:61].argmax(dim=-1).tolist()
tokenizer.decode(masked_tokens_predictions)
' missing.'
PerceiverForSequenceClassification
class transformers.PerceiverForSequenceClassification
( config )
Parameters
config (PerceiverConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Example use of Perceiver for text classification.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
inputs: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
input_ids: typing.Optional[torch.Tensor] = None
)
→
transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or tuple(torch.FloatTensor)
Parameters
inputs (torch.FloatTensor) —
Inputs to the perceiver. Can be anything: images, text, audio, video, etc.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or tuple(torch.FloatTensor)
A transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PerceiverConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
The PerceiverForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, PerceiverForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForSequenceClassification.from_pretrained("deepmind/language-perceiver")
text = "hello world"
inputs = tokenizer(text, return_tensors="pt").input_ids
outputs = model(inputs=inputs)
logits = outputs.logits
list(logits.shape)
[1, 2]
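Since the forward pass also accepts labels, a fine-tuning loss can be obtained from the same call; a minimal sketch (the label value here is purely illustrative):
import torch
labels = torch.tensor([1])  # illustrative label for the 2-way head
outputs = model(inputs=inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits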
PerceiverForImageClassificationLearned
class transformers.PerceiverForImageClassificationLearned
( config )
Parameters
config (PerceiverConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Example use of Perceiver for image classification, for tasks such as ImageNet.
This model uses learned position embeddings. In other words, this model is not given any privileged information about
the structure of images. As shown in the paper, this model can achieve a top-1 accuracy of 72.7 on ImageNet.
PerceiverForImageClassificationLearned uses PerceiverImagePreprocessor
(with prep_type="conv1x1") to preprocess the input images, and
PerceiverClassificationDecoder to decode the latent representation of
PerceiverModel into classification logits.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
inputs: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
pixel_values: typing.Optional[torch.Tensor] = None
)
→
transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or tuple(torch.FloatTensor)
Parameters
inputs (torch.FloatTensor) —
Inputs to the perceiver. Can be anything: images, text, audio, video, etc.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or tuple(torch.FloatTensor)
A transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PerceiverConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
The PerceiverForImageClassificationLearned forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, PerceiverForImageClassificationLearned
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("deepmind/vision-perceiver-learned")
model = PerceiverForImageClassificationLearned.from_pretrained("deepmind/vision-perceiver-learned")
inputs = image_processor(images=image, return_tensors="pt").pixel_values
outputs = model(inputs=inputs)
logits = outputs.logits
list(logits.shape)
[1, 1000]
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
Predicted class: tabby, tabby cat
PerceiverForImageClassificationFourier
class transformers.PerceiverForImageClassificationFourier
( config )
Parameters
config (PerceiverConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Example use of Perceiver for image classification, for tasks such as ImageNet.
This model uses fixed 2D Fourier position embeddings. As shown in the paper, this model can achieve a top-1 accuracy of
79.0 on ImageNet, and 84.5 when pre-trained on a large-scale dataset (i.e. JFT).
PerceiverForImageClassificationFourier uses PerceiverImagePreprocessor
(with prep_type="pixels") to preprocess the input images, and
PerceiverClassificationDecoder to decode the latent representation of
PerceiverModel into classification logits.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
inputs: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
pixel_values: typing.Optional[torch.Tensor] = None
)
→
transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or tuple(torch.FloatTensor)
Parameters
inputs (torch.FloatTensor) —
Inputs to the perceiver. Can be anything: images, text, audio, video, etc.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or tuple(torch.FloatTensor)
A transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PerceiverConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
The PerceiverForImageClassificationFourier forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, PerceiverForImageClassificationFourier
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("deepmind/vision-perceiver-fourier")
model = PerceiverForImageClassificationFourier.from_pretrained("deepmind/vision-perceiver-fourier")
inputs = image_processor(images=image, return_tensors="pt").pixel_values
outputs = model(inputs=inputs)
logits = outputs.logits
list(logits.shape)
[1, 1000]
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
Predicted class: tabby, tabby cat
PerceiverForImageClassificationConvProcessing
class transformers.PerceiverForImageClassificationConvProcessing
( config )
Parameters
config (PerceiverConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Example use of Perceiver for image classification, for tasks such as ImageNet.
This model uses a 2D conv+maxpool preprocessing network. As shown in the paper, this model can achieve a top-1 accuracy
of 82.1 on ImageNet.
PerceiverForImageClassificationConvProcessing uses PerceiverImagePreprocessor
(with prep_type="conv") to preprocess the input images, and
PerceiverClassificationDecoder to decode the latent representation of
PerceiverModel into classification logits.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
inputs: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
pixel_values: typing.Optional[torch.Tensor] = None
)
→
transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or tuple(torch.FloatTensor)
Parameters
inputs (torch.FloatTensor) —
Inputs to the perceiver. Can be anything: images, text, audio, video, etc.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or tuple(torch.FloatTensor)
A transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PerceiverConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
The PerceiverForImageClassificationConvProcessing forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, PerceiverForImageClassificationConvProcessing
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("deepmind/vision-perceiver-conv")
model = PerceiverForImageClassificationConvProcessing.from_pretrained("deepmind/vision-perceiver-conv")
inputs = image_processor(images=image, return_tensors="pt").pixel_values
outputs = model(inputs=inputs)
logits = outputs.logits
list(logits.shape)
[1, 1000]
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
Predicted class: tabby, tabby cat
PerceiverForOpticalFlow
class transformers.PerceiverForOpticalFlow
( config )
Parameters
config (PerceiverConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Example use of Perceiver for optical flow, for tasks such as Sintel and KITTI. PerceiverForOpticalFlow uses
PerceiverImagePreprocessor (with prep_type="patches") to preprocess the
input images, and PerceiverOpticalFlowDecoder to decode the latent
representation of PerceiverModel.
As input, one concatenates 2 subsequent frames along the channel dimension and extracts a 3 x 3 patch around each pixel
(leading to 3 x 3 x 3 x 2 = 54 values for each pixel). Fixed Fourier position encodings are used to encode the position
of each pixel in the patch. Next, one applies the Perceiver encoder. To decode, one queries the latent representation
using the same encoding used for the input.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
inputs: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or tuple(torch.FloatTensor)
Parameters
inputs (torch.FloatTensor) —
Inputs to the perceiver. Can be anything: images, text, audio, video, etc.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the optical flow loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or tuple(torch.FloatTensor)
A transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PerceiverConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
The PerceiverForOpticalFlow forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import PerceiverForOpticalFlow
import torch
model = PerceiverForOpticalFlow.from_pretrained("deepmind/optical-flow-perceiver")
# in the Perceiver IO paper, the authors extract a 3 x 3 patch around each pixel,
# leading to 3 x 3 x 3 = 27 values for each pixel (as each pixel also has 3 color channels)
# patches have shape (batch_size, num_frames, num_channels, height, width)
# the authors train on resolutions of 368 x 496
patches = torch.randn(1, 2, 27, 368, 496)
outputs = model(inputs=patches)
logits = outputs.logits
list(logits.shape)
[1, 368, 496, 2]
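The example above feeds random patch values; a minimal sketch of building actual 3 x 3 patches from two consecutive frames, loosely following the preprocessing described above (the frames are placeholders, and the exact channel ordering expected by the pretrained checkpoint may differ):
import torch
def extract_3x3_patches(frame):
...     # frame has shape (batch_size, 3, height, width); pad by 1 pixel so every position gets a full 3 x 3 neighborhood
...     batch_size, num_channels, height, width = frame.shape
...     patches = torch.nn.functional.unfold(frame, kernel_size=3, padding=1)
...     return patches.view(batch_size, num_channels * 9, height, width)
frame1 = torch.randn(1, 3, 368, 496)  # placeholder for the first frame
frame2 = torch.randn(1, 3, 368, 496)  # placeholder for the second frame
patches = torch.stack([extract_3x3_patches(frame1), extract_3x3_patches(frame2)], dim=1)
list(patches.shape)
[1, 2, 27, 368, 496]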
PerceiverForMultimodalAutoencoding
class transformers.PerceiverForMultimodalAutoencoding
( config: PerceiverConfig )
Parameters
config (PerceiverConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Example use of Perceiver for multimodal (video) autoencoding, for tasks such as Kinetics-700.
PerceiverForMultimodalAutoencoding uses PerceiverMultimodalPreprocessor to
preprocess the 3 modalities: images, audio and class labels. This preprocessor uses modality-specific preprocessors to
preprocess every modality separately, after which they are concatenated. Trainable position embeddings are used to pad
each modality to the same number of channels to make concatenation along the time dimension possible. Next, one applies
the Perceiver encoder.
PerceiverMultimodalDecoder is used to decode the latent representation of
PerceiverModel. This decoder uses each modality-specific decoder to construct queries. The decoder queries are
created based on the inputs after preprocessing. However, autoencoding an entire video in a single forward pass is
computationally infeasible, hence one only uses parts of the decoder queries to do cross-attention with the latent
representation. This is determined by the subsampled indices for each modality, which can be provided as additional
input to the forward pass of PerceiverForMultimodalAutoencoding.
PerceiverMultimodalDecoder also pads the decoder queries of the different
modalities to the same number of channels, in order to concatenate them along the time dimension. Next, cross-attention
is performed with the latent representation of PerceiverModel.
Finally, PerceiverMultimodalPostprocessor is used to turn this tensor into an
actual video. It first splits up the output into the different modalities, and then applies the respective
postprocessor for each modality.
Note that, by masking the classification label during evaluation (i.e. simply providing a tensor of zeros for the
“label” modality), this auto-encoding model becomes a Kinetics 700 video classifier.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
inputs: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
subsampled_output_points: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or tuple(torch.FloatTensor)
Parameters
inputs (torch.FloatTensor) —
Inputs to the perceiver. Can be anything: images, text, audio, video, etc.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or tuple(torch.FloatTensor)
A transformers.models.perceiver.modeling_perceiver.PerceiverClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PerceiverConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
The PerceiverForMultimodalAutoencoding forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import PerceiverForMultimodalAutoencoding
import torch
import numpy as np
# create multimodal inputs
images = torch.randn((1, 16, 3, 224, 224))
audio = torch.randn((1, 30720, 1))
inputs = dict(image=images, audio=audio, label=torch.zeros((images.shape[0], 700)))
model = PerceiverForMultimodalAutoencoding.from_pretrained("deepmind/multimodal-perceiver")
# in the Perceiver IO paper, videos are auto-encoded in chunks
# each chunk subsamples different index dimensions of the image and audio modality decoder queries
nchunks = 128
image_chunk_size = np.prod((16, 224, 224)) // nchunks
audio_chunk_size = audio.shape[1] // model.config.samples_per_patch // nchunks
# process the first chunk
chunk_idx = 0
subsampling = {
... "image": torch.arange(image_chunk_size * chunk_idx, image_chunk_size * (chunk_idx + 1)),
... "audio": torch.arange(audio_chunk_size * chunk_idx, audio_chunk_size * (chunk_idx + 1)),
... "label": None,
... }
outputs = model(inputs=inputs, subsampled_output_points=subsampling)
logits = outputs.logits
list(logits["audio"].shape)
[1, 240]
list(logits["image"].shape)
[1, 6272, 3]
list(logits["label"].shape)
[1, 700]
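A minimal sketch of extending the example above to all chunks and stitching the reconstructions back together (the reshape targets assume the 16 x 224 x 224 video and 30720-sample audio defined above; reading the label logits as Kinetics-700 scores follows the note about masking the label modality):
reconstructed_image, reconstructed_audio = [], []
with torch.no_grad():
...     for chunk_idx in range(nchunks):
...         subsampling = {
...             "image": torch.arange(image_chunk_size * chunk_idx, image_chunk_size * (chunk_idx + 1)),
...             "audio": torch.arange(audio_chunk_size * chunk_idx, audio_chunk_size * (chunk_idx + 1)),
...             "label": None,
...         }
...         outputs = model(inputs=inputs, subsampled_output_points=subsampling)
...         reconstructed_image.append(outputs.logits["image"])
...         reconstructed_audio.append(outputs.logits["audio"])
video_output = torch.cat(reconstructed_image, dim=1).reshape(1, 16, 224, 224, 3)
audio_output = torch.cat(reconstructed_audio, dim=1).reshape(1, -1, 1)
label_logits = outputs.logits["label"]  # shape [1, 700]; class scores, since the label input was all zeros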
SAM
Overview
SAM (Segment Anything Model) was proposed in Segment Anything by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick.
The model can be used to predict segmentation masks of any object of interest given an input image.
The abstract from the paper is the following:
We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive — often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision.
Tips:
The model predicts binary masks that indicate the presence or absence of the object of interest given an image.
The model predicts much better results if input 2D points and/or input bounding boxes are provided.
You can prompt multiple points for the same image and predict a single mask.
Fine-tuning the model is not supported yet.
According to the paper, textual input should also be supported. However, at the time of writing this does not appear to be supported, according to the official repository.
This model was contributed by ybelkada and ArthurZ.
The original code can be found here.
Below is an example on how to run mask generation given an image and a 2D point:
import torch
from PIL import Image
import requests
from transformers import SamModel, SamProcessor
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamModel.from_pretrained("facebook/sam-vit-huge").to(device)
processor = SamProcessor.from_pretrained("facebook/sam-vit-huge")
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]] # 2D location of a window in the image
inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device)
outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(
outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
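For prompt-free segmentation of everything in an image, the same checkpoint can also be used through the mask-generation pipeline; a minimal sketch (points_per_batch is an illustrative value):
from transformers import pipeline
generator = pipeline("mask-generation", model="facebook/sam-vit-huge", device=device)
outputs = generator(img_url, points_per_batch=64)
masks = outputs["masks"]  # list of binary masks for the objects found in the image
scores = outputs["scores"]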
Resources:
Demo notebook for using the model.
Demo notebook for using the automatic mask generation pipeline.
Demo notebook for inference with MedSAM, a fine-tuned version of SAM on the medical domain.
Demo notebook for fine-tuning the model on custom data.
SamConfig
class transformers.SamConfig
(
vision_config = None
prompt_encoder_config = None
mask_decoder_config = None
initializer_range = 0.02
**kwargs
)
Parameters
vision_config (Union[dict, SamVisionConfig], optional) —
Dictionary of configuration options used to initialize SamVisionConfig.
prompt_encoder_config (Union[dict, SamPromptEncoderConfig], optional) —
Dictionary of configuration options used to initialize SamPromptEncoderConfig.
mask_decoder_config (Union[dict, SamMaskDecoderConfig], optional) —
Dictionary of configuration options used to initialize SamMaskDecoderConfig.
kwargs (optional) —
Dictionary of keyword arguments.
SamConfig is the configuration class to store the configuration of a SamModel. It is used to instantiate a
SAM model according to the specified arguments, defining the vision model, prompt-encoder model and mask decoder
configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the
SAM-ViT-H facebook/sam-vit-huge architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import (
... SamVisionConfig,
... SamPromptEncoderConfig,
... SamMaskDecoderConfig,
... SamModel,
... )
# Initializing a SamConfig with `"facebook/sam-vit-huge"` style configuration
configuration = SamConfig()
# Initializing a SamModel (with random weights) from the `"facebook/sam-vit-huge"` style configuration
model = SamModel(configuration)
# Accessing the model configuration
configuration = model.config
# We can also initialize a SamConfig from a SamVisionConfig, SamPromptEncoderConfig, and SamMaskDecoderConfig
# Initializing SAM vision encoder, prompt encoder, and mask decoder configurations
vision_config = SamVisionConfig()
prompt_encoder_config = SamPromptEncoderConfig()
mask_decoder_config = SamMaskDecoderConfig()
config = SamConfig(vision_config, prompt_encoder_config, mask_decoder_config)
to_dict
( )
→
Dict[str, any]
Returns
Dict[str, any]
Dictionary of all the attributes that make up this configuration instance.
Serializes this instance to a Python dictionary. Overrides the default to_dict().
SamVisionConfig
class transformers.SamVisionConfig
(
hidden_size = 768
output_channels = 256
num_hidden_layers = 12
num_attention_heads = 12
num_channels = 3
image_size = 1024
patch_size = 16
hidden_act = 'gelu'
layer_norm_eps = 1e-06
attention_dropout = 0.0
initializer_range = 1e-10
qkv_bias = True
mlp_ratio = 4.0
use_abs_pos = True
use_rel_pos = True
window_size = 14
global_attn_indexes = [2, 5, 8, 11]
num_pos_feats = 128
mlp_dim = None
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
output_channels (int, optional, defaults to 256) —
Dimensionality of the output channels in the Patch Encoder.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
num_channels (int, optional, defaults to 3) —
Number of channels in the input image.
image_size (int, optional, defaults to 1024) —
Expected resolution. Target size of the resized input image.
patch_size (int, optional, defaults to 16) —
Size of the patches to be extracted from the input image.
hidden_act (str, optional, defaults to "gelu") —
The non-linear activation function (function or string)
layer_norm_eps (float, optional, defaults to 1e-6) —
The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 1e-10) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to query, key, value projections.
mlp_ratio (float, optional, defaults to 4.0) —
Ratio of mlp hidden dim to embedding dim.
use_abs_pos (bool, optional, defaults to True) —
Whether to use absolute position embedding.
use_rel_pos (bool, optional, defaults to True) —
Whether to use relative position embedding.
window_size (int, optional, defaults to 14) —
Window size for relative position.
global_attn_indexes (List[int], optional, defaults to [2, 5, 8, 11]) —
The indexes of the global attention layers.
num_pos_feats (int, optional, defaults to 128) —
The dimensionality of the position embedding.
mlp_dim (int, optional, defaults to None) —
The dimensionality of the MLP layer in the Transformer encoder. If None, defaults to mlp_ratio * hidden_size.
This is the configuration class to store the configuration of a SamVisionModel. It is used to instantiate a SAM
vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the SAM ViT-h
facebook/sam-vit-huge architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
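A minimal sketch of instantiating the vision configuration with the defaults listed above (the comment about mlp_dim simply restates the parameter description):
from transformers import SamVisionConfig
vision_config = SamVisionConfig()
print(vision_config.hidden_size, vision_config.num_hidden_layers)
768 12
# with mlp_dim left at None it resolves to mlp_ratio * hidden_size = 4.0 * 768 = 3072, per the description above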
SamMaskDecoderConfig
class transformers.SamMaskDecoderConfig
(
hidden_size = 256
hidden_act = 'relu'
mlp_dim = 2048
num_hidden_layers = 2
num_attention_heads = 8
attention_downsample_rate = 2
num_multimask_outputs = 3
iou_head_depth = 3
iou_head_hidden_dim = 256
layer_norm_eps = 1e-06
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 256) —
Dimensionality of the hidden states.
hidden_act (str, optional, defaults to "relu") —
The non-linear activation function used inside the SamMaskDecoder module.
mlp_dim (int, optional, defaults to 2048) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 2) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer encoder.
attention_downsample_rate (int, optional, defaults to 2) —
The downsampling rate of the attention layer.
num_multimask_outputs (int, optional, defaults to 3) —
The number of outputs from the SamMaskDecoder module. In the Segment Anything paper, this is set to 3.
iou_head_depth (int, optional, defaults to 3) —
The number of layers in the IoU head module.
iou_head_hidden_dim (int, optional, defaults to 256) —
The dimensionality of the hidden states in the IoU head module.
layer_norm_eps (float, optional, defaults to 1e-6) —
The epsilon used by the layer normalization layers.
This is the configuration class to store the configuration of a SamMaskDecoder. It is used to instantiate a SAM
mask decoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the SAM-vit-h
facebook/sam-vit-huge architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
SamPromptEncoderConfig
class transformers.SamPromptEncoderConfig
(
hidden_size = 256
image_size = 1024
patch_size = 16
mask_input_channels = 16
num_point_embeddings = 4
hidden_act = 'gelu'
layer_norm_eps = 1e-06
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 256) —
Dimensionality of the hidden states.
image_size (int, optional, defaults to 1024) —
The expected output resolution of the image.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
mask_input_channels (int, optional, defaults to 16) —
The number of channels to be fed to the MaskDecoder module.
num_point_embeddings (int, optional, defaults to 4) —
The number of point embeddings to be used.
hidden_act (str, optional, defaults to "gelu") —
The non-linear activation function in the encoder and pooler.
This is the configuration class to store the configuration of a SamPromptEncoder. The SamPromptEncoder
module is used to encode the input 2D points and bounding boxes. Instantiating a configuration with the defaults will yield
a similar configuration to that of the SAM-vit-h
facebook/sam-vit-huge architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
SamProcessor
class transformers.SamProcessor
( image_processor )
Parameters
image_processor (SamImageProcessor) —
An instance of SamImageProcessor. The image processor is a required input.
Constructs a SAM processor which wraps a SAM image processor and a 2D points & bounding boxes processor into a
single processor.
SamProcessor offers all the functionalities of SamImageProcessor. See the docstring of
call() for more information.
SamImageProcessor
class transformers.SamImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_pad: bool = True
pad_size: int = None
do_convert_rgb: bool = True
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the
do_resize parameter in the preprocess method.
size (dict, optional, defaults to {"longest_edge": 1024}):
Size of the output image after resizing. Resizes the longest edge of the image to match
size["longest_edge"] while maintaining the aspect ratio. Can be overridden by the size parameter in the
preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the
preprocess method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the
do_rescale parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Only has an effect if do_rescale is set to True. Can be
overridden by the rescale_factor parameter in the preprocess method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (float or List[float], optional, defaults to IMAGENET_DEFAULT_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_DEFAULT_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
do_pad (bool, optional, defaults to True) —
Whether to pad the image to the specified pad_size. Can be overridden by the do_pad parameter in the
preprocess method.
pad_size (dict, optional, defaults to {"height": 1024, "width": 1024}) —
Size of the output image after padding. Can be overridden by the pad_size parameter in the preprocess
method.
do_convert_rgb (bool, optional, defaults to True) —
Whether to convert the image to RGB.
Constructs a SAM image processor.
filter_masks
<
source
>
(
masks
iou_scores
original_size
cropped_box_image
pred_iou_thresh = 0.88
stability_score_thresh = 0.95
mask_threshold = 0
stability_score_offset = 1
return_tensors = 'pt'
)
Parameters
masks (Union[torch.Tensor, tf.Tensor]) —
Input masks.
iou_scores (Union[torch.Tensor, tf.Tensor]) —
List of IoU scores.
original_size (Tuple[int,int]) —
Size of the original image.
cropped_box_image (np.array) —
The cropped image.
pred_iou_thresh (float, optional, defaults to 0.88) —
The threshold for the iou scores.
stability_score_thresh (float, optional, defaults to 0.95) —
The threshold for the stability score.
mask_threshold (float, optional, defaults to 0) —
The threshold for the predicted masks.
stability_score_offset (float, optional, defaults to 1) —
The offset for the stability score used in the _compute_stability_score method.
return_tensors (str, optional, defaults to pt) —
If pt, returns torch.Tensor. If tf, returns tf.Tensor.
Filters the predicted masks by keeping only those that meet several criteria: the IoU score must be greater
than pred_iou_thresh and the stability score must be greater than stability_score_thresh. The method also
converts the predicted masks to bounding boxes and pads the predicted masks if necessary.
generate_crop_boxes
<
source
>
(
image
target_size
crop_n_layers: int = 0
overlap_ratio: float = 0.3413333333333333
points_per_crop: typing.Optional[int] = 32
crop_n_points_downscale_factor: typing.Optional[typing.List[int]] = 1
device: typing.Optional[ForwardRef('torch.device')] = None
return_tensors: str = 'pt'
)
Parameters
image (np.array) —
Input original image
target_size (int) —
Target size of the resized image
crop_n_layers (int, optional, defaults to 0) —
If >0, mask prediction will be run again on crops of the image. Sets the number of layers to run, where
each layer has 2**i_layer number of image crops.
overlap_ratio (float, optional, defaults to 512/1500) —
Sets the degree to which crops overlap. In the first crop layer, crops will overlap by this fraction of
the image length. Later layers with more crops scale down this overlap.
points_per_crop (int, optional, defaults to 32) —
Number of points to sample from each crop.
crop_n_points_downscale_factor (List[int], optional, defaults to 1) —
The number of points-per-side sampled in layer n is scaled down by crop_n_points_downscale_factor**n.
device (torch.device, optional, defaults to None) —
Device to use for the computation. If None, cpu will be used.
return_tensors (str, optional, defaults to pt) —
If pt, returns torch.Tensor. If tf, returns tf.Tensor.
Generates a list of crop boxes of different sizes. Each layer i has (2**i)**2 boxes.
normalize
<
source
>
(
image: ndarray
mean: typing.Union[float, typing.List[float]]
std: typing.Union[float, typing.List[float]]
data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None
**kwargs
)
Parameters
image (np.ndarray) —
Image to normalize.
mean (float or List[float]) —
Image mean.
std (float or List[float]) —
Image standard deviation.
data_format (str or ChannelDimension, optional) —
The channel dimension format of the image. If not provided, it will be the same as the input image.
Normalize an image. image = (image - image_mean) / image_std.
pad_image
<
source
>
(
image: ndarray
pad_size: typing.Dict[str, int]
data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None
**kwargs
)
Parameters
image (np.ndarray) —
Image to pad.
pad_size (Dict[str, int]) —
Size of the output image after padding.
data_format (str or ChannelDimension, optional) —
The data format of the image. Can be either “channels_first” or “channels_last”. If None, the
data_format of the image will be used.
Pad an image to (pad_size["height"], pad_size["width"]) with zeros to the right and bottom.
post_process_for_mask_generation
<
source
>
(
all_masks
all_scores
all_boxes
crops_nms_thresh
return_tensors = 'pt'
)
Parameters
all_masks (Union[List[torch.Tensor], List[tf.Tensor]]) —
List of all predicted segmentation masks
all_scores (Union[List[torch.Tensor], List[tf.Tensor]]) —
List of all predicted iou scores
all_boxes (Union[List[torch.Tensor], List[tf.Tensor]]) —
List of all bounding boxes of the predicted masks
crops_nms_thresh (float) —
Threshold for NMS (Non Maximum Suppression) algorithm.
return_tensors (str, optional, defaults to pt) —
If pt, returns torch.Tensor. If tf, returns tf.Tensor.
Post-processes the generated masks by applying the Non Maximum Suppression algorithm to the predicted masks.
post_process_masks
<
source
>
(
masks
original_sizes
reshaped_input_sizes
mask_threshold = 0.0
binarize = True
pad_size = None
return_tensors = 'pt'
)
→
(Union[torch.Tensor, tf.Tensor])
Parameters
masks (Union[List[torch.Tensor], List[np.ndarray], List[tf.Tensor]]) —
Batched masks from the mask_decoder in (batch_size, num_channels, height, width) format.
original_sizes (Union[torch.Tensor, tf.Tensor, List[Tuple[int,int]]]) —
The original sizes of each image before it was resized to the model’s expected input shape, in (height,
width) format.
reshaped_input_sizes (Union[torch.Tensor, tf.Tensor, List[Tuple[int,int]]]) —
The size of each image as it is fed to the model, in (height, width) format. Used to remove padding.
mask_threshold (float, optional, defaults to 0.0) —
The threshold to use for binarizing the masks.
binarize (bool, optional, defaults to True) —
Whether to binarize the masks.
pad_size (int, optional, defaults to self.pad_size) —
The target size the images were padded to before being passed to the model. If None, the target size is
assumed to be the processor’s pad_size.
return_tensors (str, optional, defaults to "pt") —
If "pt", return PyTorch tensors. If "tf", return TensorFlow tensors.
Returns
(Union[torch.Tensor, tf.Tensor])
Batched masks in (batch_size, num_channels, height, width) format, where
(height, width) is given by original_size.
Remove padding and upscale masks to the original image size.
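A hedged, self-contained sketch of what this method does. The dummy tensor shapes below are assumptions that mimic SAM's low-resolution mask decoder output (256x256 masks, padding size 1024); in practice the masks come from SamModel and the sizes from SamProcessor.
import torch
from transformers import SamImageProcessor
image_processor = SamImageProcessor()
# Dummy low-resolution masks as produced by the mask decoder: (batch, point_batch, num_masks, 256, 256)
low_res_masks = torch.randn(1, 1, 3, 256, 256)
original_sizes = [(480, 640)]          # (height, width) of the image before resizing
reshaped_input_sizes = [(768, 1024)]   # (height, width) after resizing the longest edge to 1024
masks = image_processor.post_process_masks(low_res_masks, original_sizes, reshaped_input_sizes)
print(masks[0].shape)  # binarized masks at the original resolution, e.g. (1, 3, 480, 640)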
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: typing.Optional[bool] = None
size: typing.Union[typing.Dict[str, int], NoneType] = None
resample: typing.Optional[ForwardRef('PILImageResampling')] = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Union[int, float, NoneType] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_pad: typing.Optional[bool] = None
pad_size: typing.Union[typing.Dict[str, int], NoneType] = None
do_convert_rgb: bool = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Controls the size of the image after resize. The longest edge of the image is resized to
size["longest_edge"] whilst preserving the aspect ratio.
resample (PILImageResampling, optional, defaults to self.resample) —
PILImageResampling filter to use when resizing the image e.g. PILImageResampling.BILINEAR.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image pixel values by rescaling factor.
rescale_factor (int or float, optional, defaults to self.rescale_factor) —
Rescale factor to apply to the image pixel values.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean to normalize the image by if do_normalize is set to True.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation to normalize the image by if do_normalize is set to True.
do_pad (bool, optional, defaults to self.do_pad) —
Whether to pad the image.
pad_size (Dict[str, int], optional, defaults to self.pad_size) —
Controls the size of the padding applied to the image. The image is padded to pad_size["height"] and
pad_size["width"] if do_pad is set to True.
do_convert_rgb (bool, optional, defaults to self.do_convert_rgb) —
Whether to convert the image to RGB.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: Use the channel dimension format of the input image.
Preprocess an image or batch of images.
rescale
<
source
>
(
image: ndarray
scale: typing.Union[int, float]
data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None
**kwargs
)
Parameters
image (np.ndarray) —
Image to rescale.
scale (int or float) —
Scale to apply to the image.
data_format (str or ChannelDimension, optional) —
The channel dimension format of the image. If not provided, it will be the same as the input image.
Rescale an image by a scale factor. image = image * scale.
resize
<
source
>
(
image: ndarray
size: typing.Dict[str, int]
resample: Resampling = <Resampling.BICUBIC: 3>
data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None
**kwargs
)
→
np.ndarray
Parameters
image (np.ndarray) —
Image to resize.
size (Dict[str, int]) —
Dictionary in the format {"longest_edge": int} specifying the size of the output image. The longest
edge of the image will be resized to the specified size, while the other edge will be resized to
maintain the aspect ratio.
resample —
PILImageResampling filter to use when resizing the image e.g. PILImageResampling.BILINEAR.
data_format (ChannelDimension or str, optional) —
The channel dimension format for the output image. If unset, the channel dimension format of the input
image is used. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
Returns
np.ndarray
The resized image.
Resize an image so that its longest edge matches size["longest_edge"], preserving the aspect ratio.
SamModel
class transformers.SamModel
<
source
>
(
config
)
Parameters
config (SamConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Segment Anything Model (SAM) for generating segmentation masks, given an input image and optional 2D location and bounding boxes.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.FloatTensor] = None
input_points: typing.Optional[torch.FloatTensor] = None
input_labels: typing.Optional[torch.LongTensor] = None
input_boxes: typing.Optional[torch.FloatTensor] = None
input_masks: typing.Optional[torch.LongTensor] = None
image_embeddings: typing.Optional[torch.FloatTensor] = None
multimask_output: bool = True
attention_similarity: typing.Optional[torch.FloatTensor] = None
target_embedding: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict = None
**kwargs
)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using SamProcessor. See SamProcessor.__call__() for
details.
input_points (torch.FloatTensor of shape (batch_size, num_points, 2)) —
Input 2D spatial points, this is used by the prompt encoder to encode the prompt. Generally yields much
better results. The points can be obtained by passing a list of lists of lists to the processor, which will
create corresponding torch tensors of dimension 4. The first dimension is the image batch size, the
second dimension is the point batch size (i.e. how many segmentation masks we want the model to predict
per input point), the third dimension is the number of points per segmentation mask (it is possible to pass
multiple points for a single mask), and the last dimension is the x (vertical) and y (horizontal)
coordinates of the point. If a different number of points is passed either for each image or for each
mask, the processor will create "PAD" points that will correspond to the (0, 0) coordinate, and the
computation of the embedding will be skipped for these points using the labels.
input_labels (torch.LongTensor of shape (batch_size, point_batch_size, num_points)) —
Input labels for the points, this is used by the prompt encoder to encode the prompt. According to the
official implementation, there are 3 types of labels:
1: the point is a point that contains the object of interest
0: the point is a point that does not contain the object of interest
-1: the point corresponds to the background
We added the label:
-10: the point is a padding point and should be ignored by the prompt encoder
The padding labels are added automatically by the processor.
input_boxes (torch.FloatTensor of shape (batch_size, num_boxes, 4)) —
Input boxes for the points, this is used by the prompt encoder to encode the prompt. Generally yields
much better generated masks. The boxes can be obtained by passing a list of lists of lists to the processor,
which will generate a torch tensor, with each dimension corresponding respectively to the image batch
size, the number of boxes per image and the coordinates of the top left and bottom right points of the box,
in the order (x1, y1, x2, y2):
x1: the x coordinate of the top left point of the input box
y1: the y coordinate of the top left point of the input box
x2: the x coordinate of the bottom right point of the input box
y2: the y coordinate of the bottom right point of the input box
input_masks (torch.FloatTensor of shape (batch_size, image_size, image_size)) —
The SAM model also accepts segmentation masks as input. The mask will be embedded by the prompt encoder to
generate a corresponding embedding, which will later be fed to the mask decoder. These masks need to be
provided manually by the user, and they need to be of shape (batch_size, image_size, image_size).
image_embeddings (torch.FloatTensor of shape (batch_size, output_channels, window_size, window_size)) —
Image embeddings, this is used by the mask decoder to generate masks and IoU scores. For more memory
efficient computation, users can first retrieve the image embeddings using the get_image_embeddings
method, and then feed them to the forward method instead of feeding the pixel_values.
multimask_output (bool, optional) —
In the original implementation and paper, the model always outputs 3 masks per image (or per point / per
bounding box if relevant). However, it is possible to just output a single mask, that corresponds to the
“best” mask, by specifying multimask_output=False.
attention_similarity (torch.FloatTensor, optional) —
Attention similarity tensor, to be provided to the mask decoder for target-guided attention in case the
model is used for personalization as introduced in PerSAM.
target_embedding (torch.FloatTensor, optional) —
Embedding of the target concept, to be provided to the mask decoder for target-semantic prompting in case
the model is used for personalization as introduced in PerSAM.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
The SamModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
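A hedged end-to-end sketch of the forward pass. The checkpoint name, image URL, and point prompt are illustrative; post_process_masks is used to bring the predicted masks back to the original image resolution.
import torch
import requests
from PIL import Image
from transformers import SamModel, SamProcessor
model = SamModel.from_pretrained("facebook/sam-vit-base")
processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]]  # one 2D point prompt on the image
inputs = processor(raw_image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Remove padding and upscale the predicted masks back to the original image size
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores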
TFSamModel
class transformers.TFSamModel
<
source
>
(
*args
**kwargs
)
Parameters
config (SamConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Segment Anything Model (SAM) for generating segmentation masks, given an input image and optional 2D location and bounding boxes.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a TensorFlow tf.keras.Model
subclass. Use it as a regular TensorFlow Model and refer to the TensorFlow documentation for all matter related to
general usage and behavior.
call
<
source
>
(
pixel_values: TFModelInputType | None = None
input_points: tf.Tensor | None = None
input_labels: tf.Tensor | None = None
input_boxes: tf.Tensor | None = None
input_masks: tf.Tensor | None = None
image_embeddings: tf.Tensor | None = None
multimask_output: bool = True
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict = None
training = False
**kwargs
)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using SamProcessor. See SamProcessor.__call__() for
details.
input_points (tf.Tensor of shape (batch_size, num_points, 2)) —
Input 2D spatial points, this is used by the prompt encoder to encode the prompt. Generally yields much
better results. The points can be obtained by passing a list of lists of lists to the processor, which will
create corresponding tf tensors of dimension 4. The first dimension is the image batch size, the second
dimension is the point batch size (i.e. how many segmentation masks we want the model to predict per
input point), the third dimension is the number of points per segmentation mask (it is possible to pass
multiple points for a single mask), and the last dimension is the x (vertical) and y (horizontal)
coordinates of the point. If a different number of points is passed either for each image or for each
mask, the processor will create "PAD" points that will correspond to the (0, 0) coordinate, and the
computation of the embedding will be skipped for these points using the labels.
input_labels (tf.Tensor of shape (batch_size, point_batch_size, num_points)) —
Input labels for the points, this is used by the prompt encoder to encode the prompt. According to the
official implementation, there are 3 types of labels:
1: the point is a point that contains the object of interest
0: the point is a point that does not contain the object of interest
-1: the point corresponds to the background
We added the label:
-10: the point is a padding point and should be ignored by the prompt encoder
The padding labels are added automatically by the processor.
input_boxes (tf.Tensor of shape (batch_size, num_boxes, 4)) —
Input boxes for the points, this is used by the prompt encoder to encode the prompt. Generally yields
much better generated masks. The boxes can be obtained by passing a list of lists of lists to the processor,
which will generate a tf tensor, with each dimension corresponding respectively to the image batch size,
the number of boxes per image and the coordinates of the top left and bottom right points of the box, in the
order (x1, y1, x2, y2):
x1: the x coordinate of the top left point of the input box
y1: the y coordinate of the top left point of the input box
x2: the x coordinate of the bottom right point of the input box
y2: the y coordinate of the bottom right point of the input box
input_masks (tf.Tensor of shape (batch_size, image_size, image_size)) —
The SAM model also accepts segmentation masks as input. The mask will be embedded by the prompt encoder to
generate a corresponding embedding, which will later be fed to the mask decoder. These masks need to be
provided manually by the user, and they need to be of shape (batch_size, image_size, image_size).
image_embeddings (tf.Tensor of shape (batch_size, output_channels, window_size, window_size)) —
Image embeddings, this is used by the mask decoder to generate masks and IoU scores. For more memory
efficient computation, users can first retrieve the image embeddings using the get_image_embeddings
method, and then feed them to the call method instead of feeding the pixel_values.
multimask_output (bool, optional) —
In the original implementation and paper, the model always outputs 3 masks per image (or per point / per
bounding box if relevant). However, it is possible to just output a single mask, that corresponds to the
“best” mask, by specifying multimask_output=False.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
The TFSamModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
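A hedged TensorFlow sketch mirroring the PyTorch example above. It assumes TensorFlow weights are available for the checkpoint; otherwise from_pt=True can be passed to from_pretrained.
import requests
from PIL import Image
from transformers import SamProcessor, TFSamModel
model = TFSamModel.from_pretrained("facebook/sam-vit-base")
processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
# One 2D point prompt; the processor returns TensorFlow tensors
inputs = processor(raw_image, input_points=[[[450, 600]]], return_tensors="tf")
outputs = model(**inputs)
pred_masks, iou_scores = outputs.pred_masks, outputs.iou_scores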
MobileNet V2
Overview
The MobileNet model was proposed in MobileNetV2: Inverted Residuals and Linear Bottlenecks by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen.
The abstract from the paper is the following:
In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3.
The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters.
Tips:
The checkpoints are named mobilenet_v2_depth_size, for example mobilenet_v2_1.0_224, where 1.0 is the depth multiplier (sometimes also referred to as “alpha” or the width multiplier) and 224 is the resolution of the input images the model was trained on.
Even though the checkpoint is trained on images of a specific size, the model will work on images of any size. The smallest supported image size is 32x32.
One can use MobileNetV2ImageProcessor to prepare images for the model.
The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). However, the model predicts 1001 classes: the 1000 classes from ImageNet plus an extra “background” class (index 0).
The segmentation model uses a DeepLabV3+ head. The available semantic segmentation checkpoints are pre-trained on PASCAL VOC.
The original TensorFlow checkpoints use different padding rules than PyTorch, requiring the model to determine the padding amount at inference time, since this depends on the input image size. To use native PyTorch padding behavior, create a MobileNetV2Config with tf_padding = False.
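For instance, a minimal sketch of disabling the TensorFlow-style padding; the argument name is taken from the MobileNetV2Config signature documented below, and the model built here is randomly initialized.
from transformers import MobileNetV2Config, MobileNetV2Model
# Use native PyTorch padding instead of the TensorFlow padding rules
config = MobileNetV2Config(tf_padding=False)
model = MobileNetV2Model(config)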
Unsupported features:
The MobileNetV2Model outputs a globally pooled version of the last hidden state. In the original model it is possible to use an average pooling layer with a fixed 7x7 window and stride 1 instead of global pooling. For inputs that are larger than the recommended image size, this gives a pooled output that is larger than 1x1. The Hugging Face implementation does not support this.
The original TensorFlow checkpoints include quantized models. We do not support these models as they include additional “FakeQuantization” operations to unquantize the weights.
It’s common to extract the output from the expansion layers at indices 10 and 13, as well as the output from the final 1x1 convolution layer, for downstream purposes. Using output_hidden_states=True returns the output from all intermediate layers. There is currently no way to limit this to specific layers.
The DeepLabV3+ segmentation head does not use the final convolution layer from the backbone, but this layer gets computed anyway. There is currently no way to tell MobileNetV2Model up to which layer it should run.
This model was contributed by matthijs. The original code and weights can be found here for the main model and here for DeepLabV3+.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileNetV2.
Image Classification
MobileNetV2ForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
Semantic segmentation
Semantic segmentation task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
MobileNetV2Config
class transformers.MobileNetV2Config
<
source
>
(
num_channels = 3
image_size = 224
depth_multiplier = 1.0
depth_divisible_by = 8
min_depth = 8
expand_ratio = 6
output_stride = 32
first_layer_is_expansion = True
finegrained_output = True
hidden_act = 'relu6'
tf_padding = True
classifier_dropout_prob = 0.8
initializer_range = 0.02
layer_norm_eps = 0.001
semantic_loss_ignore_index = 255
**kwargs
)
Parameters
num_channels (int, optional, defaults to 3) —
The number of input channels.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
depth_multiplier (float, optional, defaults to 1.0) —
Shrinks or expands the number of channels in each layer. Default is 1.0, which starts the network with 32
channels. This is sometimes also called “alpha” or “width multiplier”.
depth_divisible_by (int, optional, defaults to 8) —
The number of channels in each layer will always be a multiple of this number.
min_depth (int, optional, defaults to 8) —
All layers will have at least this many channels.
expand_ratio (float, optional, defaults to 6.0) —
The number of output channels of the first layer in each block is input channels times expansion ratio.
output_stride (int, optional, defaults to 32) —
The ratio between the spatial resolution of the input and output feature maps. By default the model reduces
the input dimensions by a factor of 32. If output_stride is 8 or 16, the model uses dilated convolutions
on the depthwise layers instead of regular convolutions, so that the feature maps never become more than 8x
or 16x smaller than the input image.
first_layer_is_expansion (bool, optional, defaults to True) —
True if the very first convolution layer is also the expansion layer for the first expansion block.
finegrained_output (bool, optional, defaults to True) —
If true, the number of output channels in the final convolution layer will stay large (1280) even if
depth_multiplier is less than 1.
hidden_act (str or function, optional, defaults to "relu6") —
The non-linear activation function (function or string) in the Transformer encoder and convolution layers.
tf_padding (bool, optional, defaults to True) —
Whether to use TensorFlow padding rules on the convolution layers.
classifier_dropout_prob (float, optional, defaults to 0.8) —
The dropout ratio for attached classifiers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 0.001) —
The epsilon used by the layer normalization layers.
semantic_loss_ignore_index (int, optional, defaults to 255) —
The index that is ignored by the loss function of the semantic segmentation model.
This is the configuration class to store the configuration of a MobileNetV2Model. It is used to instantiate a
MobileNetV2 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the MobileNetV2
google/mobilenet_v2_1.0_224 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import MobileNetV2Config, MobileNetV2Model
# Initializing a "mobilenet_v2_1.0_224" style configuration
configuration = MobileNetV2Config()
# Initializing a model from the "mobilenet_v2_1.0_224" style configuration
model = MobileNetV2Model(configuration)
# Accessing the model configuration
configuration = model.config
MobileNetV2FeatureExtractor
class transformers.MobileNetV2FeatureExtractor
<
source
>
(
*args
**kwargs
)
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: typing.Optional[bool] = None
size: typing.Dict[str, int] = None
resample: Resampling = None
do_center_crop: bool = None
crop_size: typing.Dict[str, int] = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing. Shortest edge of the image is resized to size["shortest_edge"], with
the longest edge resized to keep the input aspect ratio.
resample (PILImageResampling filter, optional, defaults to self.resample) —
PILImageResampling filter to use if resizing the image e.g. PILImageResampling.BILINEAR. Only has
an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the center crop. Only has an effect if do_center_crop is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values between [0 - 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean to use if do_normalize is set to True.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation to use if do_normalize is set to True.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: Use the channel dimension format of the input image.
Preprocess an image or batch of images.
post_process_semantic_segmentation
<
source
>
(
outputs
target_sizes: typing.List[typing.Tuple] = None
)
→
List[torch.Tensor]
Parameters
outputs (MobileNetV2ForSemanticSegmentation) —
Raw outputs of the model.
target_sizes (List[Tuple], optional) —
A list of length batch_size, where each item is a Tuple[int, int] corresponding to the requested
final size (height, width) of each prediction. If left to None, predictions will not be resized.
Returns
List[torch.Tensor]
A list of length batch_size, where each item is a semantic segmentation map of shape (height, width)
corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each
torch.Tensor corresponds to a semantic class id.
Converts the output of MobileNetV2ForSemanticSegmentation into semantic segmentation maps. Only supports
PyTorch.
MobileNetV2ImageProcessor
class transformers.MobileNetV2ImageProcessor
<
source
>
(
do_resize: bool = True
size: typing.Union[typing.Dict[str, int], NoneType] = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_center_crop: bool = True
crop_size: typing.Dict[str, int] = None
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by
do_resize in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 256}) —
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
the longest edge resized to keep the input aspect ratio. Can be overridden by size in the preprocess
method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the
preprocess method.
do_center_crop (bool, optional, defaults to True) —
Whether to center crop the image. If the input size is smaller than crop_size along any edge, the image
is padded with 0’s and then center cropped. Can be overridden by the do_center_crop parameter in the
preprocess method.
crop_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) —
Desired output size when applying center-cropping. Only has an effect if do_center_crop is set to True.
Can be overridden by the crop_size parameter in the preprocess method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the
preprocess method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
Constructs a MobileNetV2 image processor.
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: typing.Optional[bool] = None
size: typing.Dict[str, int] = None
resample: Resampling = None
do_center_crop: bool = None
crop_size: typing.Dict[str, int] = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing. Shortest edge of the image is resized to size["shortest_edge"], with
the longest edge resized to keep the input aspect ratio.
resample (PILImageResampling filter, optional, defaults to self.resample) —
PILImageResampling filter to use if resizing the image e.g. PILImageResampling.BILINEAR. Only has
an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the center crop. Only has an effect if do_center_crop is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values between [0 - 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean to use if do_normalize is set to True.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation to use if do_normalize is set to True.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: Use the channel dimension format of the input image.
Preprocess an image or batch of images.
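A hedged sketch of preprocessing a single image with the default checkpoint settings; the expected output shape is an assumption that follows from the default crop_size of 224.
import requests
from PIL import Image
from transformers import MobileNetV2ImageProcessor
image_processor = MobileNetV2ImageProcessor.from_pretrained("google/mobilenet_v2_1.0_224")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # (1, 3, 224, 224) with the default crop_size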
post_process_semantic_segmentation
<
source
>
(
outputs
target_sizes: typing.List[typing.Tuple] = None
)
→
List[torch.Tensor]
Parameters
outputs (MobileNetV2ForSemanticSegmentation) —
Raw outputs of the model.
target_sizes (List[Tuple], optional) —
A list of length batch_size, where each item is a Tuple[int, int] corresponding to the requested
final size (height, width) of each prediction. If left to None, predictions will not be resized.
Returns
List[torch.Tensor]
A list of length batch_size, where each item is a semantic segmentation map of shape (height, width)
corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each
torch.Tensor corresponds to a semantic class id.
Converts the output of MobileNetV2ForSemanticSegmentation into semantic segmentation maps. Only supports
PyTorch.
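A hedged end-to-end sketch showing how the raw model outputs are converted into a segmentation map at the original resolution; the checkpoint and image URL mirror the segmentation example further below.
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, MobileNetV2ForSemanticSegmentation
image_processor = AutoImageProcessor.from_pretrained("google/deeplabv3_mobilenet_v2_1.0_513")
model = MobileNetV2ForSemanticSegmentation.from_pretrained("google/deeplabv3_mobilenet_v2_1.0_513")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Resize the logits to the original image size and take the per-pixel argmax
segmentation_map = image_processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]  # PIL size is (width, height); the method expects (height, width)
)[0]
print(segmentation_map.shape)  # (height, width) of the input image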
MobileNetV2Model
class transformers.MobileNetV2Model
<
source
>
(
config: MobileNetV2Config
add_pooling_layer: bool = True
)
Parameters
config (MobileNetV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MobileNetV2 model outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
MobileNetV2ImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileNetV2Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The MobileNetV2Model forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, MobileNetV2Model
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("google/mobilenet_v2_1.0_224")
model = MobileNetV2Model.from_pretrained("google/mobilenet_v2_1.0_224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 1280, 7, 7]
MobileNetV2ForImageClassification
class transformers.MobileNetV2ForImageClassification
<
source
>
(
config: MobileNetV2Config
)
Parameters
config (MobileNetV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileNetV2 model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
MobileNetV2ImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileNetV2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
The MobileNetV2ForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, MobileNetV2ForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("google/mobilenet_v2_1.0_224")
model = MobileNetV2ForImageClassification.from_pretrained("google/mobilenet_v2_1.0_224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
MobileNetV2ForSemanticSegmentation
class transformers.MobileNetV2ForSemanticSegmentation
<
source
>
(
config: MobileNetV2Config
)
Parameters
config (MobileNetV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileNetV2 model with a semantic segmentation head on top, e.g. for Pascal VOC.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
MobileNetV2ImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, height, width), optional) —
Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SemanticSegmenterOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileNetV2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel.
The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is
to avoid doing two interpolations and lose some quality when a user needs to resize the logits to the
original image size as post-processing. You should always check your logits shape and resize as needed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, patch_size, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MobileNetV2ForSemanticSegmentation forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, MobileNetV2ForSemanticSegmentation
from PIL import Image
import requests
import torch
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("google/deeplabv3_mobilenet_v2_1.0_513")
model = MobileNetV2ForSemanticSegmentation.from_pretrained("google/deeplabv3_mobilenet_v2_1.0_513")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
# logits are of shape (batch_size, num_labels, height, width)
logits = outputs.logits
XLM-V
Overview
XLM-V is a multilingual language model with a one million token vocabulary trained on 2.5TB of data from Common Crawl (the same data as XLM-R).
It was introduced in the XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models
paper by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer and Madian Khabsa.
From the abstract of the XLM-V paper:
Large multilingual language models typically rely on a single vocabulary shared across 100+ languages.
As these models have increased in parameter count and depth, vocabulary size has remained largely unchanged.
This vocabulary bottleneck limits the representational capabilities of multilingual models like XLM-R.
In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by
de-emphasizing token sharing between languages with little lexical overlap and assigning vocabulary capacity
to achieve sufficient coverage for each individual language. Tokenizations using our vocabulary are typically
more semantically meaningful and shorter compared to XLM-R. Leveraging this improved vocabulary, we train XLM-V,
a multilingual language model with a one million token vocabulary. XLM-V outperforms XLM-R on every task we
tested on ranging from natural language inference (XNLI), question answering (MLQA, XQuAD, TyDiQA), and
named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER).
Tips:
XLM-V is compatible with the XLM-RoBERTa model architecture; only the model weights from the fairseq
library had to be converted.
The XLMTokenizer implementation is used to load the vocab and perform tokenization.
An XLM-V (base size) model is available under the facebook/xlm-v-base identifier.
This model was contributed by stefan-it, including detailed experiments with XLM-V on downstream tasks.
The experiments repository can be found here.
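A hedged quick-start sketch with the base checkpoint mentioned above; the fill-mask task and the example sentence are illustrative only.
from transformers import pipeline
unmasker = pipeline("fill-mask", model="facebook/xlm-v-base")
unmasker("Paris is the <mask> of France.")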
BLIP
Overview
The BLIP model was proposed in BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.
BLIP is a model that is able to perform various multi-modal tasks, including:
Visual Question Answering
Image-Text retrieval (Image-text matching)
Image Captioning
The abstract from the paper is the following:
Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks.
However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.
This model was contributed by ybelkada.
The original code can be found here.
Resources
Jupyter notebook on how to fine-tune BLIP for image captioning on a custom dataset
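For quick experimentation, captioning checkpoints can also be driven through the high-level pipeline API. A minimal sketch, assuming the image-to-text pipeline in your installed version supports BLIP captioning checkpoints:
Copied
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
print(captioner("http://images.cocodataset.org/val2017/000000039769.jpg"))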
BlipConfig
class transformers.BlipConfig
<
source
>
(
text_config = None
vision_config = None
projection_dim = 512
logit_scale_init_value = 2.6592
image_text_hidden_size = 256
**kwargs
)
Parameters
text_config (dict, optional) —
Dictionary of configuration options used to initialize BlipTextConfig.
vision_config (dict, optional) —
Dictionary of configuration options used to initialize BlipVisionConfig.
projection_dim (int, optional, defaults to 512) —
Dimensionality of the text and vision projection layers.
logit_scale_init_value (float, optional, defaults to 2.6592) —
The initial value of the logit_scale parameter. The default is used as per the original BLIP implementation.
image_text_hidden_size (int, optional, defaults to 256) —
Dimensionality of the hidden state of the image-text fusion layer.
kwargs (optional) —
Dictionary of keyword arguments.
BlipConfig is the configuration class to store the configuration of a BlipModel. It is used to instantiate
a BLIP model according to the specified arguments, defining the text model and vision model configs. Instantiating
a configuration with the defaults will yield a similar configuration to that of the BLIP-base
Salesforce/blip-vqa-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
Copied
from transformers import BlipConfig, BlipModel
# Initializing a BlipConfig with Salesforce/blip-vqa-base style configuration
configuration = BlipConfig()
# Initializing a BlipModel (with random weights) from the Salesforce/blip-vqa-base style configuration
model = BlipModel(configuration)
# Accessing the model configuration
configuration = model.config
# We can also initialize a BlipConfig from a BlipTextConfig and a BlipVisionConfig
# Initializing a BLIPText and BLIPVision configuration
config_text = BlipTextConfig()
config_vision = BlipVisionConfig()
config = BlipConfig.from_text_vision_configs(config_text, config_vision)
from_text_vision_configs
<
source
>
(
text_config: BlipTextConfig
vision_config: BlipVisionConfig
**kwargs
)
→
BlipConfig
Returns
BlipConfig
An instance of a configuration object
Instantiate a BlipConfig (or a derived class) from blip text model configuration and blip vision model
configuration.
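This helper is convenient when the two sub-configurations are customized before being combined. A short sketch with hypothetical, non-default values:
Copied
from transformers import BlipConfig, BlipTextConfig, BlipVisionConfig

# hypothetical smaller sub-configs, e.g. for quick experiments
text_config = BlipTextConfig(num_hidden_layers=6)
vision_config = BlipVisionConfig(num_hidden_layers=6, image_size=224)

config = BlipConfig.from_text_vision_configs(text_config, vision_config, projection_dim=256)
print(config.text_config.num_hidden_layers, config.vision_config.image_size)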
BlipTextConfig
class transformers.BlipTextConfig
<
source
>
(
vocab_size = 30524
hidden_size = 768
encoder_hidden_size = 768
intermediate_size = 3072
projection_dim = 768
num_hidden_layers = 12
num_attention_heads = 8
max_position_embeddings = 512
hidden_act = 'gelu'
layer_norm_eps = 1e-12
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
initializer_range = 0.02
bos_token_id = 30522
eos_token_id = 2
pad_token_id = 0
sep_token_id = 102
is_decoder = True
use_cache = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30524) —
Vocabulary size of the Blip text model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling BlipModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
encoder_hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers from the vision model.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer encoder.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" `"gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
bos_token_id (int, optional, defaults to 30522) —
The id of the beginning-of-sequence token.
eos_token_id (int, optional, defaults to 2) —
The id of the end-of-sequence token.
pad_token_id (int, optional, defaults to 0) —
The id of the padding token.
sep_token_id (int, optional, defaults to 102) —
The id of the separator token.
is_decoder (bool, optional, defaults to True) —
Whether the model is used as a decoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a BlipTextModel. It is used to instantiate a BLIP
text model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a configuration similar to that of the text model used by the BLIP base
architectures.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
Copied
from transformers import BlipTextConfig, BlipTextModel
# Initializing a BlipTextConfig with Salesforce/blip-vqa-base style configuration
configuration = BlipTextConfig()
# Initializing a BlipTextModel (with random weights) from the Salesforce/blip-vqa-base style configuration
model = BlipTextModel(configuration)
# Accessing the model configuration
configuration = model.config
BlipVisionConfig
class transformers.BlipVisionConfig
<
source
>
(
hidden_size = 768
intermediate_size = 3072
projection_dim = 512
num_hidden_layers = 12
num_attention_heads = 12
image_size = 384
patch_size = 16
hidden_act = 'gelu'
layer_norm_eps = 1e-05
attention_dropout = 0.0
initializer_range = 1e-10
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
image_size (int, optional, defaults to 384) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" `"gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 1e-10) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
This is the configuration class to store the configuration of a BlipVisionModel. It is used to instantiate a
BLIP vision model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the BLIP-base
Salesforce/blip-vqa-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
Copied
from transformers import BlipVisionConfig, BlipVisionModel
# Initializing a BlipVisionConfig with Salesforce/blip-vqa-base style configuration
configuration = BlipVisionConfig()
# Initializing a BlipVisionModel (with random weights) from the Salesforce/blip-vqa-base style configuration
model = BlipVisionModel(configuration)
# Accessing the model configuration
configuration = model.config
BlipProcessor
class transformers.BlipProcessor
<
source
>
(
image_processor
tokenizer
)
Parameters
image_processor (BlipImageProcessor) —
An instance of BlipImageProcessor. The image processor is a required input.
tokenizer (BertTokenizerFast) —
An instance of BertTokenizerFast. The tokenizer is a required input.
Constructs a BLIP processor which wraps a BERT tokenizer and BLIP image processor into a single processor.
BlipProcessor offers all the functionalities of BlipImageProcessor and BertTokenizerFast. See the
docstring of __call__() and decode() for more information.
batch_decode
<
source
>
(
*args
**kwargs
)
This method forwards all its arguments to BertTokenizerFast’s batch_decode(). Please
refer to the docstring of this method for more information.
decode
<
source
>
(
*args
**kwargs
)
This method forwards all its arguments to BertTokenizerFast’s decode(). Please refer to
the docstring of this method for more information.
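A minimal usage sketch, assuming the processor configuration is available on the Salesforce/blip-image-captioning-base checkpoint used elsewhere on this page:
Copied
from PIL import Image
import requests
from transformers import BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# preprocess image and text together
inputs = processor(images=image, text="A picture of", return_tensors="pt")
print(list(inputs.keys()))  # e.g. ['pixel_values', 'input_ids', 'attention_mask']

# decode token ids back to text (here, simply the encoded prompt)
print(processor.batch_decode(inputs["input_ids"], skip_special_tokens=True))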
BlipImageProcessor
class transformers.BlipImageProcessor
<
source
>
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BICUBIC: 3>
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_convert_rgb: bool = True
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the
do_resize parameter in the preprocess method.
size (dict, optional, defaults to {"height": 384, "width": 384}) —
Size of the output image after resizing. Can be overridden by the size parameter in the preprocess
method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) —
Resampling filter to use if resizing the image. Only has an effect if do_resize is set to True. Can be
overridden by the resample parameter in the preprocess method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the
do_rescale parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Only has an effect if do_rescale is set to True. Can be
overridden by the rescale_factor parameter in the preprocess method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
do_convert_rgb (bool, optional, defaults to True) —
Whether to convert the image to RGB.
Constructs a BLIP image processor.
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: typing.Optional[bool] = None
size: typing.Union[typing.Dict[str, int], NoneType] = None
resample: Resampling = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
do_convert_rgb: bool = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Controls the size of the image after resizing. The image is resized to (size["height"], size["width"]),
matching the size dictionary set at construction time.
resample (PILImageResampling, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. Only has an effect if do_resize is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values to the range [0, 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean to normalize the image by if do_normalize is set to True.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation to normalize the image by if do_normalize is set to True.
do_convert_rgb (bool, optional, defaults to self.do_convert_rgb) —
Whether to convert the image to RGB.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess an image or batch of images.
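A short sketch of the image processor on its own (calling the instance forwards to preprocess):
Copied
from PIL import Image
import requests
from transformers import BlipImageProcessor

image_processor = BlipImageProcessor.from_pretrained("Salesforce/blip-image-captioning-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# resize to 384x384, rescale to [0, 1], normalize, and return PyTorch tensors
inputs = image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # expected: torch.Size([1, 3, 384, 384])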
BlipModel
class transformers.BlipModel
<
source
>
(
config: BlipConfig
)
Parameters
config (BlipConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
return_loss: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.blip.modeling_blip.BlipOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoProcessor. See BlipProcessor.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
BlipImageProcessor. See BlipImageProcessor.__call__() for details.
return_loss (bool, optional) —
Whether or not to return the contrastive loss.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.blip.modeling_blip.BlipOutput or tuple(torch.FloatTensor)
A transformers.models.blip.modeling_blip.BlipOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.blip.configuration_blip.BlipConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text
similarity scores.
logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image
similarity scores.
text_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of BlipTextModel.
image_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of BlipVisionModel.
text_model_output (BaseModelOutputWithPooling) — The output of the BlipTextModel.
vision_model_output (BaseModelOutputWithPooling) — The output of the BlipVisionModel.
The BlipModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from PIL import Image
import requests
from transformers import AutoProcessor, BlipModel
model = BlipModel.from_pretrained("Salesforce/blip-image-captioning-base")
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
... )
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
get_text_features
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
)
→
text_features (torch.FloatTensor of shape (batch_size, output_dim))
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoProcessor. See BlipProcessor.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
text_features (torch.FloatTensor of shape (batch_size, output_dim))
The text embeddings obtained by
applying the projection layer to the pooled output of BlipTextModel.
The BlipModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from transformers import AutoProcessor, BlipModel
model = BlipModel.from_pretrained("Salesforce/blip-image-captioning-base")
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
inputs = processor(text=["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
get_image_features
<
source
>
(
pixel_values: typing.Optional[torch.FloatTensor] = None
return_dict: typing.Optional[bool] = None
)
→
image_features (torch.FloatTensor of shape (batch_size, output_dim))
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
BlipImageProcessor. See BlipImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
image_features (torch.FloatTensor of shape (batch_size, output_dim))
The image embeddings obtained by
applying the projection layer to the pooled output of BlipVisionModel.
The BlipModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from PIL import Image
import requests
from transformers import AutoProcessor, BlipModel
model = BlipModel.from_pretrained("Salesforce/blip-image-captioning-base")
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
BlipTextModel
class transformers.BlipTextModel
<
source
>
(
config
add_pooling_layer = True
)
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder, the model needs to be
initialized with the is_decoder argument of the configuration set to True; an encoder_hidden_states is then
expected as an input to the forward pass.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
is_decoder: typing.Optional[bool] = False
)
encoder_hidden_states (torch.FloatTensor, optional):
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor, optional):
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional):
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional):
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
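No dedicated example is provided for this class. As a minimal sketch, the text model can be instantiated from a BlipTextConfig with random weights and run on arbitrary token ids (with pretrained weights it is typically used as the text tower inside the composite BLIP models):
Copied
import torch
from transformers import BlipTextConfig, BlipTextModel

config = BlipTextConfig()          # defaults match the BLIP base text model
model = BlipTextModel(config)      # randomly initialized, for illustration only

input_ids = torch.tensor([[101, 2023, 2003, 1037, 3231, 102]])  # arbitrary ids within the vocabulary
with torch.no_grad():
    outputs = model(input_ids=input_ids, attention_mask=torch.ones_like(input_ids))

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)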
BlipVisionModel
class transformers.BlipVisionModel
<
source
>
(
config: BlipVisionConfig
)
forward
<
source
>
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
BlipImageProcessor. See BlipImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.blip.configuration_blip.BlipVisionConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BlipVisionModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
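Similarly, a minimal shape-level sketch for the vision model, instantiated from a BlipVisionConfig with random weights:
Copied
import torch
from transformers import BlipVisionConfig, BlipVisionModel

config = BlipVisionConfig()        # 384x384 images with 16x16 patches by default
model = BlipVisionModel(config)    # randomly initialized, for illustration only

pixel_values = torch.randn(1, 3, config.image_size, config.image_size)
with torch.no_grad():
    outputs = model(pixel_values=pixel_values)

print(outputs.last_hidden_state.shape)  # (batch_size, num_patches + 1, hidden_size)
print(outputs.pooler_output.shape)      # (batch_size, hidden_size)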
BlipForConditionalGeneration
class transformers.BlipForConditionalGeneration
<
source
>
(
config: BlipConfig
)
Parameters
config (BlipConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BLIP Model for image captioning. The model consists of a vision encoder and a text decoder. One can optionally pass
input_ids to the model, which serve as a text prompt, to make the text decoder continue the prompt; the caption is
then generated starting from the text input. If no text input is provided, the decoder starts generating from the
[BOS] (beginning-of-sequence) token only.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
pixel_values: FloatTensor
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.blip.modeling_blip.BlipForConditionalGenerationModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
BlipImageProcessor. See BlipImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.blip.modeling_blip.BlipForConditionalGenerationModelOutput or tuple(torch.FloatTensor)
A transformers.models.blip.modeling_blip.BlipForConditionalGenerationModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.blip.configuration_blip.BlipVisionConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss from the text decoder.
decoder_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size), optional) — Prediction scores of the language modeling head of the text decoder model.
image_embeds (torch.FloatTensor of shape (batch_size, output_dim), optional) — The image embeddings obtained after applying the Vision Transformer model to the input image.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BlipForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from PIL import Image
import requests
from transformers import AutoProcessor, BlipForConditionalGeneration
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "A picture of"
inputs = processor(images=image, text=text, return_tensors="pt")
outputs = model(**inputs)
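To produce an actual caption rather than a single forward pass, one would typically call generate() and decode the result, continuing the example above:
Copied
generated_ids = model.generate(**inputs)
print(processor.decode(generated_ids[0], skip_special_tokens=True))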
BlipForImageTextRetrieval
class transformers.BlipForImageTextRetrieval
<
source
>
(
config: BlipConfig
)
Parameters
config (BlipConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BLIP Model with a vision and text projector, and a classification head on top. The model is used in the context of
image-text retrieval. Given an image and a text, the model returns the probability of the text being relevant to
the image.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: LongTensor
pixel_values: FloatTensor
use_itm_head: typing.Optional[bool] = True
attention_mask: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.blip.modeling_blip.BlipTextVisionModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
BlipImageProcessor. See BlipImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.blip.modeling_blip.BlipTextVisionModelOutput or tuple(torch.FloatTensor)
A transformers.models.blip.modeling_blip.BlipTextVisionModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.blip.configuration_blip.BlipVisionConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss from the text decoder.
image_embeds (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when the model is initialized with with_projection=True) — The image embeddings obtained by applying the projection layer to the pooler_output.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BlipForImageTextRetrieval forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from PIL import Image
import requests
from transformers import AutoProcessor, BlipForImageTextRetrieval
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")
processor = AutoProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "an image of a cat"
inputs = processor(images=image, text=text, return_tensors="pt")
outputs = model(**inputs)
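The raw outputs are easiest to read as probabilities. A hedged follow-up to the example above, assuming the first element of the returned output holds the image-text matching logits of shape (batch_size, 2) and that use_itm_head=False returns the raw image-text similarity instead:
Copied
import torch

# assumption: outputs[0] holds the ITM logits (match / no-match)
itm_scores = torch.nn.functional.softmax(outputs[0], dim=1)
print(f"image and text match with probability {itm_scores[0, 1].item():.2%}")

# assumption: with the ITM head disabled, the first output is the image-text similarity score
cosine_score = model(**inputs, use_itm_head=False)[0]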
BlipForQuestionAnswering
class transformers.BlipForQuestionAnswering
<
source
>
(
config: BlipConfig
)
Parameters
config (BlipConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BLIP Model for visual question answering. The model consists of a vision encoder, a text encoder as well as a text
decoder. The vision encoder will encode the input image, the text encoder will encode the input question together
with the encoding of the image, and the text decoder will output the answer to the question.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: LongTensor
pixel_values: FloatTensor
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.blip.modeling_blip.BlipTextVisionModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
BlipImageProcessor. See BlipImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.blip.modeling_blip.BlipTextVisionModelOutput or tuple(torch.FloatTensor)
A transformers.models.blip.modeling_blip.BlipTextVisionModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.blip.configuration_blip.BlipVisionConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss from the text decoder.
image_embeds (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when the model is initialized with with_projection=True) — The image embeddings obtained by applying the projection layer to the pooler_output.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BlipForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from PIL import Image
import requests
from transformers import AutoProcessor, BlipForQuestionAnswering
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")
processor = AutoProcessor.from_pretrained("Salesforce/blip-vqa-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# training
text = "How many cats are in the picture?"
label = "2"
inputs = processor(images=image, text=text, return_tensors="pt")
labels = processor(text=label, return_tensors="pt").input_ids
inputs["labels"] = labels
outputs = model(**inputs)
loss = outputs.loss
loss.backward()
# inference
text = "How many cats are in the picture?"
inputs = processor(images=image, text=text, return_tensors="pt")
outputs = model.generate(**inputs)
print(processor.decode(outputs[0], skip_special_tokens=True))
2
TFBlipModel
class transformers.TFBlipModel
<
source
>
(
*args
**kwargs
)
call
<
source
>
(
input_ids: tf.Tensor | None = None
pixel_values: tf.Tensor | None = None
attention_mask: tf.Tensor | None = None
position_ids: tf.Tensor | None = None
return_loss: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = None
)
→
transformers.models.blip.modeling_tf_blip.TFBlipOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoProcessor. See BlipProcessor.__call__() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
BlipImageProcessor. See BlipImageProcessor.__call__() for details.
return_loss (bool, optional) —
Whether or not to return the contrastive loss.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.blip.modeling_tf_blip.TFBlipOutput or tuple(tf.Tensor)
A transformers.models.blip.modeling_tf_blip.TFBlipOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (<class 'transformers.models.blip.configuration_blip.BlipConfig'>) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
logits_per_image (tf.Tensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text
similarity scores.
logits_per_text (tf.Tensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image
similarity scores.
text_embeds (tf.Tensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of BlipTextModel.
image_embeds (tf.Tensor of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of BlipVisionModel.
text_model_output (BaseModelOutputWithPooling) — The output of the BlipTextModel.
vision_model_output (BaseModelOutputWithPooling) — The output of the BlipVisionModel.
The TFBlipModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from PIL import Image
import requests
import tensorflow as tf
from transformers import AutoProcessor, TFBlipModel
model = TFBlipModel.from_pretrained("Salesforce/blip-image-captioning-base")
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="tf", padding=True
... )
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = tf.nn.softmax(logits_per_image, axis=1) # we can take the softmax to get the label probabilities
get_text_features
<
source
>
(
input_ids: tf.Tensor | None = None
attention_mask: tf.Tensor | None = None
position_ids: tf.Tensor | None = None
return_dict: Optional[bool] = None
)
→
text_features (tf.Tensor of shape (batch_size, output_dim))
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoProcessor. See BlipProcessor.__call__() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
text_features (tf.Tensor of shape (batch_size, output_dim))
The text embeddings obtained by applying
the projection layer to the pooled output of TFBlipTextModel.
The TFBlipModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from transformers import AutoProcessor, TFBlipModel
model = TFBlipModel.from_pretrained("Salesforce/blip-image-captioning-base")
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
inputs = processor(text=["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf")
text_features = model.get_text_features(**inputs)
get_image_features
<
source
>
(
pixel_values: tf.Tensor | None = None
return_dict: Optional[bool] = None
)
→
image_features (tf.Tensor of shape (batch_size, output_dim))
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
BlipImageProcessor. See BlipImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
image_features (tf.Tensor of shape (batch_size, output_dim))
The image embeddings obtained by applying
the projection layer to the pooled output of TFBlipVisionModel.
The TFBlipModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from PIL import Image
import requests
from transformers import AutoProcessor, TFBlipModel
model = TFBlipModel.from_pretrained("Salesforce/blip-image-captioning-base")
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="tf")
image_features = model.get_image_features(**inputs)
TFBlipTextModel
class transformers.TFBlipTextModel
<
source
>
(
*args
**kwargs
)
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder, the model needs to be
initialized with the is_decoder argument of the configuration set to True; an encoder_hidden_states is then
expected as an input to the forward pass.
call
<
source
>
(
input_ids = None
attention_mask = None
position_ids = None
head_mask = None
inputs_embeds = None
encoder_embeds = None
encoder_hidden_states = None
encoder_attention_mask = None
past_key_values = None
use_cache = None
output_attentions = None
output_hidden_states = None
return_dict = None
is_decoder = False
training = None
)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoProcessor. See BlipProcessor.__call__() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (tf.Tensor, optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor, optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(tf.Tensor)), optional) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
The TFBlipTextModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
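Example (not part of the original reference; a minimal sketch of encoding text with the standalone BLIP text encoder — reusing the captioning checkpoint from the other examples on this page is an assumption):
from transformers import AutoProcessor, TFBlipTextModel

model = TFBlipTextModel.from_pretrained("Salesforce/blip-image-captioning-base")  # assumed checkpoint
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")

inputs = processor(text="a photography of a cat", return_tensors="tf")
# pass input_ids / attention_mask explicitly; the processor may also return fields this model does not accept
outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)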
TFBlipVisionModel
class transformers.TFBlipVisionModel
<
source
>
(
*args
**kwargs
)
call
<
source
>
(
pixel_values: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = None
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
BlipImageProcessor. See BlipImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (<class 'transformers.models.blip.configuration_blip.BlipVisionConfig'>) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you're often better off
averaging or pooling the sequence of hidden states for the whole input sequence.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFBlipVisionModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
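Example (not in the original reference; a minimal sketch of extracting vision features, assuming the captioning checkpoint used elsewhere on this page):
from PIL import Image
import requests
from transformers import AutoProcessor, TFBlipVisionModel

model = TFBlipVisionModel.from_pretrained("Salesforce/blip-image-captioning-base")  # assumed checkpoint
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="tf")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)
pooled_output = outputs.pooler_output          # (batch_size, hidden_size), see the return docs above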
TFBlipForConditionalGeneration
class transformers.TFBlipForConditionalGeneration
<
source
>
(
*args
**kwargs
)
Parameters
config (BlipConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BLIP Model for image captioning. The model consists of a vision encoder and a text decoder. One can optionally pass
input_ids to the model, which serve as a text prompt, to make the text decoder continue the prompt. If no text
input is provided, the decoder starts generating the caption from the [BOS] (beginning-of-sequence) token only.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
call
<
source
>
(
pixel_values: tf.Tensor
input_ids: tf.Tensor | None = None
attention_mask: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
labels: tf.Tensor | None = None
return_dict: Optional[bool] = None
training: Optional[bool] = None
)
→
transformers.models.blip.modeling_tf_blip.TFBlipForConditionalGenerationModelOutput or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
BlipImageProcessor. See BlipImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.blip.modeling_tf_blip.TFBlipForConditionalGenerationModelOutput or tuple(tf.Tensor)
A transformers.models.blip.modeling_tf_blip.TFBlipForConditionalGenerationModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (<class 'transformers.models.blip.configuration_blip.BlipConfig'>) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Language modeling loss from the text decoder.
decoder_logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size), optional) — Prediction scores of the language modeling head of the text decoder model.
image_embeds (tf.Tensor of shape (batch_size, output_dim), optional) — The image embeddings obtained after applying the Vision Transformer model to the input image.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for
the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFBlipForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, TFBlipForConditionalGeneration
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = TFBlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "A picture of"
inputs = processor(images=image, text=text, return_tensors="tf")
outputs = model(**inputs)
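To decode an actual caption rather than inspecting the raw forward outputs, the generation API can be used; a minimal continuation of the example above (the exact caption depends on the checkpoint):
generated_ids = model.generate(**inputs)
print(processor.decode(generated_ids[0], skip_special_tokens=True))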
TFBlipForImageTextRetrieval
class transformers.TFBlipForImageTextRetrieval
<
source
>
(
*args
**kwargs
)
Parameters
config (BlipConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BLIP Model with a vision and text projector, and a classification head on top. The model is used in the context of
image-text retrieval. Given an image and a text, the model returns the probability of the text being relevant to
the image.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
call
<
source
>
(
input_ids: tf.Tensor
pixel_values: tf.Tensor | None = None
use_itm_head: Optional[bool] = True
attention_mask: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = None
)
→
transformers.models.blip.modeling_tf_blip.TFBlipImageTextMatchingModelOutput or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
BlipImageProcessor. See BlipImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.blip.modeling_tf_blip.TFBlipImageTextMatchingModelOutput or tuple(tf.Tensor)
A transformers.models.blip.modeling_tf_blip.TFBlipImageTextMatchingModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (<class 'transformers.models.blip.configuration_blip.BlipVisionConfig'>) and inputs.
itm_score (tf.Tensor) — The image-text similarity scores.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Language modeling loss from the text decoder.
image_embeds (tf.Tensor of shape (batch_size, output_dim), optional, returned when model is initialized with with_projection=True) — The image embeddings obtained by applying the projection layer to the pooler_output.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for
the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
vision_pooler_output (tf.Tensor of shape (batch_size, hidden_size), optional) — Last layer hidden-state of the vision-only branch of the model.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
question_embeds (tf.Tensor) — The question embeddings obtained by the text projection layer.
The TFBlipForImageTextRetrieval forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, TFBlipForImageTextRetrieval
model = TFBlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")
processor = AutoProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "an image of a cat"
inputs = processor(images=image, text=text, return_tensors="tf")
outputs = model(**inputs)
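The use_itm_head argument switches between the image-text matching head and the raw similarity between the projected image and text embeddings; a sketch continuing the example above (the interpretation of itm_score follows the return docs above):
import tensorflow as tf

# ITM head: one pair of logits (not matched / matched) per image-text pair
itm_outputs = model(**inputs, use_itm_head=True)
itm_probs = tf.nn.softmax(itm_outputs.itm_score, axis=-1)

# without the ITM head: itm_score holds the similarity of the projected embeddings
cosine_outputs = model(**inputs, use_itm_head=False)
cosine_score = cosine_outputs.itm_score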
TFBlipForQuestionAnswering
class transformers.TFBlipForQuestionAnswering
<
source
>
(
*args
**kwargs
)
Parameters
config (BlipConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BLIP Model for visual question answering. The model consists of a vision encoder, a text encoder as well as a text
decoder. The vision encoder will encode the input image, the text encoder will encode the input question together
with the encoding of the image, and the text decoder will output the answer to the question.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
call
<
source
>
(
input_ids: tf.Tensor
pixel_values: tf.Tensor | None = None
decoder_input_ids: tf.Tensor | None = None
decoder_attention_mask: tf.Tensor | None = None
attention_mask: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
labels: tf.Tensor | None = None
return_dict: Optional[bool] = None
training: Optional[bool] = None
)
→
transformers.models.blip.modeling_tf_blip.TFBlipTextVisionModelOutput or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
BlipImageProcessor. See BlipImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.blip.modeling_tf_blip.TFBlipTextVisionModelOutput or tuple(tf.Tensor)
A transformers.models.blip.modeling_tf_blip.TFBlipTextVisionModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (<class 'transformers.models.blip.configuration_blip.BlipVisionConfig'>) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Language modeling loss from the text decoder.
image_embeds (tf.Tensor of shape (batch_size, output_dim), optional, returned when model is initialized with with_projection=True) — The image embeddings obtained by applying the projection layer to the pooler_output.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for
the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFBlipForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, TFBlipForQuestionAnswering
model = TFBlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")
processor = AutoProcessor.from_pretrained("Salesforce/blip-vqa-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# training
text = "How many cats are in the picture?"
label = "2"
inputs = processor(images=image, text=text, return_tensors="tf")
labels = processor(text=label, return_tensors="tf").input_ids
inputs["labels"] = labels
outputs = model(**inputs)
loss = outputs.loss
# inference
text = "How many cats are in the picture?"
inputs = processor(images=image, text=text, return_tensors="tf")
outputs = model.generate(**inputs)
print(processor.decode(outputs[0], skip_special_tokens=True))
2
MaskFormer
This is a recently introduced model, so the API hasn't been tested extensively. There may be some bugs or slight
breaking changes that will be fixed in the future. If you see something strange, file a GitHub Issue.
Overview
The MaskFormer model was proposed in Per-Pixel Classification is Not All You Need for Semantic Segmentation by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. MaskFormer addresses semantic segmentation with a mask classification paradigm instead of performing classic pixel-level classification.
The abstract from the paper is the following:
Modern approaches typically formulate semantic segmentation as a per-pixel classification task, while instance-level segmentation is handled with an alternative mask classification. Our key insight: mask classification is sufficiently general to solve both semantic- and instance-level segmentation tasks in a unified manner using the exact same model, loss, and training procedure. Following this observation, we propose MaskFormer, a simple mask classification model which predicts a set of binary masks, each associated with a single global class label prediction. Overall, the proposed mask classification-based method simplifies the landscape of effective approaches to semantic and panoptic segmentation tasks and shows excellent empirical results. In particular, we observe that MaskFormer outperforms per-pixel classification baselines when the number of classes is large. Our mask classification-based method outperforms both current state-of-the-art semantic (55.6 mIoU on ADE20K) and panoptic segmentation (52.7 PQ on COCO) models.
Tips:
MaskFormer’s Transformer decoder is identical to the decoder of DETR. During training, the authors of DETR found it helpful to use auxiliary losses in the decoder, especially to help the model output the correct number of objects of each class. If you set the parameter use_auxiliary_loss of MaskFormerConfig to True, then prediction feedforward neural networks and Hungarian losses are added after each decoder layer (with the FFNs sharing parameters).
If you want to train the model in a distributed environment across multiple nodes, then one should update the
get_num_masks function inside the MaskFormerLoss class of modeling_maskformer.py. When training on multiple nodes, this should be
set to the average number of target masks across all nodes, as can be seen in the original implementation here.
One can use MaskFormerImageProcessor to prepare images and optional targets for the model.
To get the final segmentation, depending on the task, you can call post_process_semantic_segmentation() or post_process_panoptic_segmentation(). Both tasks can be solved using the MaskFormerForInstanceSegmentation output; panoptic segmentation accepts an optional label_ids_to_fuse argument to fuse instances of the target object(s) (e.g. sky) together. See the sketch after these tips for an end-to-end example.
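A minimal end-to-end sketch of the workflow described in the tips above (semantic segmentation with the ADE20k checkpoint referenced later on this page; the checkpoint choice is illustrative):
import requests
import torch
from PIL import Image
from transformers import MaskFormerForInstanceSegmentation, MaskFormerImageProcessor

processor = MaskFormerImageProcessor.from_pretrained("facebook/maskformer-swin-base-ade")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# resize predictions back to the original (height, width) and take the per-pixel class map
semantic_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]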
The figure below illustrates the architecture of MaskFormer. Taken from the original paper.
This model was contributed by francesco. The original code can be found here.
Resources
Image Segmentation
All notebooks that illustrate inference as well as fine-tuning on custom data with MaskFormer can be found here.
MaskFormer specific outputs
class transformers.models.maskformer.modeling_maskformer.MaskFormerModelOutput
<
source
>
(
encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
pixel_decoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
transformer_decoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
pixel_decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
transformer_decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Last hidden states (final feature map) of the last stage of the encoder model (backbone).
pixel_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Last hidden states (final feature map) of the last stage of the pixel decoder model (FPN).
transformer_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) —
Last hidden states (final feature map) of the last stage of the transformer decoder model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the encoder
model at the output of each stage.
pixel_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the pixel
decoder model at the output of each stage.
transformer_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the
transformer decoder at the output of each stage.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor containing encoder_hidden_states, pixel_decoder_hidden_states and
decoder_hidden_states.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights from Detr’s decoder after the attention softmax, used to compute the
weighted average in the self-attention heads.
Class for outputs of MaskFormerModel. This class returns all the needed hidden states to compute the logits.
class transformers.models.maskformer.modeling_maskformer.MaskFormerForInstanceSegmentationOutput
<
source
>
(
loss: typing.Optional[torch.FloatTensor] = None
class_queries_logits: FloatTensor = None
masks_queries_logits: FloatTensor = None
auxiliary_logits: FloatTensor = None
encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
pixel_decoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
transformer_decoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
pixel_decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
transformer_decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.Tensor, optional) —
The computed loss, returned when labels are present.
class_queries_logits (torch.FloatTensor) —
A tensor of shape (batch_size, num_queries, num_labels + 1) representing the proposed classes for each
query. Note the + 1 is needed because we incorporate the null class.
masks_queries_logits (torch.FloatTensor) —
A tensor of shape (batch_size, num_queries, height, width) representing the proposed masks for each
query.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Last hidden states (final feature map) of the last stage of the encoder model (backbone).
pixel_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Last hidden states (final feature map) of the last stage of the pixel decoder model (FPN).
transformer_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) —
Last hidden states (final feature map) of the last stage of the transformer decoder model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the encoder
model at the output of each stage.
pixel_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the pixel
decoder model at the output of each stage.
transformer_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the transformer decoder at the output
of each stage.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor containing encoder_hidden_states, pixel_decoder_hidden_states and
decoder_hidden_states.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights from Detr’s decoder after the attention softmax, used to compute the
weighted average in the self-attention heads.
Class for outputs of MaskFormerForInstanceSegmentation.
This output can be directly passed to post_process_semantic_segmentation(),
post_process_instance_segmentation() or
post_process_panoptic_segmentation() depending on the task. Please see
MaskFormerImageProcessor for details regarding usage.
MaskFormerConfig
class transformers.MaskFormerConfig
<
source
>
(
fpn_feature_size: int = 256
mask_feature_size: int = 256
no_object_weight: float = 0.1
use_auxiliary_loss: bool = False
backbone_config: typing.Optional[typing.Dict] = None
decoder_config: typing.Optional[typing.Dict] = None
init_std: float = 0.02
init_xavier_std: float = 1.0
dice_weight: float = 1.0
cross_entropy_weight: float = 1.0
mask_weight: float = 20.0
output_auxiliary_logits: typing.Optional[bool] = None
**kwargs
)
Parameters
mask_feature_size (int, optional, defaults to 256) —
The masks’ feature size; this value will also be used to specify the Feature Pyramid Network features’
size.
no_object_weight (float, optional, defaults to 0.1) —
Weight to apply to the null (no object) class.
use_auxiliary_loss (bool, optional, defaults to False) —
If True MaskFormerForInstanceSegmentationOutput will contain the auxiliary losses computed using the
logits from each decoder’s stage.
backbone_config (Dict, optional) —
The configuration passed to the backbone. If unset, the configuration corresponding to
swin-base-patch4-window12-384 will be used.
decoder_config (Dict, optional) —
The configuration passed to the transformer decoder model. If unset, the base config for detr-resnet-50
will be used.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
init_xavier_std (float, optional, defaults to 1.0) —
The scaling factor used for the Xavier initialization gain in the HM Attention map module.
dice_weight (float, optional, defaults to 1.0) —
The weight for the dice loss.
cross_entropy_weight (float, optional, defaults to 1.0) —
The weight for the cross entropy loss.
mask_weight (float, optional, defaults to 20.0) —
The weight for the mask loss.
output_auxiliary_logits (bool, optional) —
Should the model output its auxiliary_logits or not.
Raises
ValueError
ValueError —
Raised if the backbone model type selected is not in ["swin"] or the decoder model type selected is not
in ["detr"]
This is the configuration class to store the configuration of a MaskFormerModel. It is used to instantiate a
MaskFormer model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the MaskFormer
facebook/maskformer-swin-base-ade architecture trained
on ADE20k-150.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Currently, MaskFormer only supports the Swin Transformer as backbone.
Examples:
from transformers import MaskFormerConfig, MaskFormerModel
# Initializing a MaskFormer facebook/maskformer-swin-base-ade configuration
configuration = MaskFormerConfig()
# Initializing a model (with random weights) from the facebook/maskformer-swin-base-ade style configuration
model = MaskFormerModel(configuration)
# Accessing the model configuration
configuration = model.config
from_backbone_and_decoder_configs
<
source
>
(
backbone_config: PretrainedConfig
decoder_config: PretrainedConfig
**kwargs
)
→
MaskFormerConfig
Parameters
backbone_config (PretrainedConfig) —
The backbone configuration.
decoder_config (PretrainedConfig) —
The transformer decoder configuration to use.
Returns
MaskFormerConfig
An instance of a configuration object
Instantiate a MaskFormerConfig (or a derived class) from a pre-trained backbone model configuration and DETR model
configuration.
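Example (a hedged sketch; the sub-configuration values below are placeholders):
from transformers import DetrConfig, MaskFormerConfig, SwinConfig

# build a MaskFormer configuration from custom backbone/decoder configurations
backbone_config = SwinConfig(image_size=384)
decoder_config = DetrConfig()
config = MaskFormerConfig.from_backbone_and_decoder_configs(
    backbone_config=backbone_config, decoder_config=decoder_config
)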
to_dict
<
source
>
(
)
→
Dict[str, any]
Returns
Dict[str, any]
Dictionary of all the attributes that make up this configuration instance.
Serializes this instance to a Python dictionary. Overrides the default to_dict().
MaskFormerImageProcessor
class transformers.MaskFormerImageProcessor
<
source
>
(
do_resize: bool = True
size: typing.Dict[str, int] = None
size_divisor: int = 32
resample: Resampling = <Resampling.BILINEAR: 2>
do_rescale: bool = True
rescale_factor: float = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float]] = None
image_std: typing.Union[float, typing.List[float]] = None
ignore_index: typing.Optional[int] = None
do_reduce_labels: bool = False
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the input to a certain size.
size (int, optional, defaults to 800) —
Resize the input to the given size. Only has an effect if do_resize is set to True. If size is a
sequence like (width, height), the output size will be matched to this. If size is an int, the smaller edge of
the image will be matched to this number, i.e., if height > width, the image will be rescaled to (size * height / width, size).
max_size (int, optional, defaults to 1333) —
The largest size an image dimension can have (otherwise it’s capped). Only has an effect if do_resize is
set to True.
resample (int, optional, defaults to PIL.Image.Resampling.BILINEAR) —
An optional resampling filter. This can be one of PIL.Image.Resampling.NEAREST,
PIL.Image.Resampling.BOX, PIL.Image.Resampling.BILINEAR, PIL.Image.Resampling.HAMMING,
PIL.Image.Resampling.BICUBIC or PIL.Image.Resampling.LANCZOS. Only has an effect if do_resize is set
to True.
size_divisor (int, optional, defaults to 32) —
Some backbones need images divisible by a certain number. If not passed, it defaults to the value used in
Swin Transformer.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the input to a certain scale.
rescale_factor (float, optional, defaults to 1/255) —
Rescale the input by the given factor. Only has an effect if do_rescale is set to True.
do_normalize (bool, optional, defaults to True) —
Whether or not to normalize the input with mean and standard deviation.
image_mean (int, optional, defaults to [0.485, 0.456, 0.406]) —
The sequence of means for each channel, to be used when normalizing images. Defaults to the ImageNet mean.
image_std (int, optional, defaults to [0.229, 0.224, 0.225]) —
The sequence of standard deviations for each channel, to be used when normalizing images. Defaults to the
ImageNet std.
ignore_index (int, optional) —
Label to be assigned to background pixels in segmentation maps. If provided, segmentation map pixels
denoted with 0 (background) will be replaced with ignore_index.
do_reduce_labels (bool, optional, defaults to False) —
Whether or not to decrement all label values of segmentation maps by 1. Usually used for datasets where 0
is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k).
The background label will be replaced by ignore_index.
Constructs a MaskFormer image processor. The image processor can be used to prepare image(s) and optional targets
for the model.
This image processor inherits from BaseImageProcessor which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
segmentation_maps: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')], NoneType] = None
instance_id_to_semantic_id: typing.Union[typing.Dict[int, int], NoneType] = None
do_resize: typing.Optional[bool] = None
size: typing.Union[typing.Dict[str, int], NoneType] = None
size_divisor: typing.Optional[int] = None
resample: Resampling = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
ignore_index: typing.Optional[int] = None
do_reduce_labels: typing.Optional[bool] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
encode_inputs
<
source
>
(
pixel_values_list: typing.List[typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]]
segmentation_maps: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] = None
instance_id_to_semantic_id: typing.Union[typing.List[typing.Dict[int, int]], typing.Dict[int, int], NoneType] = None
ignore_index: typing.Optional[int] = None
reduce_labels: bool = False
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
)
→
BatchFeature
Parameters
pixel_values_list (List[ImageInput]) —
List of images (pixel values) to be padded. Each image should be a tensor of shape (channels, height, width).
segmentation_maps (ImageInput, optional) —
The corresponding semantic segmentation maps with the pixel-wise annotations.
(bool, optional, defaults to True):
Whether or not to pad images up to the largest image in a batch and create a pixel mask.
If left to the default, will return a pixel mask that is:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
instance_id_to_semantic_id (List[Dict[int, int]] or Dict[int, int], optional) —
A mapping between object instance ids and class ids. If passed, segmentation_maps is treated as an
instance segmentation map where each pixel represents an instance id. Can be provided as a single
dictionary with a global/dataset-level mapping or as a list of dictionaries (one per image), to map
instance ids in each image separately.
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of NumPy arrays. If set to 'pt', return PyTorch torch.Tensor
objects.
Returns
BatchFeature
A BatchFeature with the following fields:
pixel_values — Pixel values to be fed to a model.
pixel_mask — Pixel mask to be fed to a model (when =True or if pixel_mask is in
self.model_input_names).
mask_labels — Optional list of mask labels of shape (labels, height, width) to be fed to a model
(when annotations are provided).
class_labels — Optional list of class labels of shape (labels) to be fed to a model (when
annotations are provided). They identify the labels of mask_labels, e.g. the label of
mask_labels[i][j] is class_labels[i][j].
Pad images up to the largest image in a batch and create a corresponding pixel_mask.
MaskFormer addresses semantic segmentation with a mask classification paradigm, thus input segmentation maps
will be converted to lists of binary masks and their respective labels. Let’s see an example: assuming
segmentation_maps = [[2,6,7,9]], the output will contain mask_labels = [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]] (four binary masks) and class_labels = [2,6,7,9], the labels for
each mask.
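The conversion can be checked with a small synthetic example (a sketch, not from the original reference; output shapes reflect the processor's default resizing):
import numpy as np
from transformers import MaskFormerImageProcessor

processor = MaskFormerImageProcessor()

# dummy image plus a semantic map containing two classes (ids 2 and 6)
image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
segmentation_map = np.full((64, 64), 2, dtype=np.int64)
segmentation_map[:, 32:] = 6

inputs = processor(images=image, segmentation_maps=segmentation_map, return_tensors="pt")
print(inputs["mask_labels"][0].shape)  # (num_classes_in_map, height, width) binary masks
print(inputs["class_labels"][0])       # the class ids present in the map, here 2 and 6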
post_process_semantic_segmentation
<
source
>
(
outputs
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
)
→
List[torch.Tensor]
Parameters
outputs (MaskFormerForInstanceSegmentation) —
Raw outputs of the model.
target_sizes (List[Tuple[int, int]], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction. If left to None, predictions will not be resized.
Returns
List[torch.Tensor]
A list of length batch_size, where each item is a semantic segmentation map of shape (height, width)
corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each
torch.Tensor corresponds to a semantic class id.
Converts the output of MaskFormerForInstanceSegmentation into semantic segmentation maps. Only supports
PyTorch.
post_process_instance_segmentation
<
source
>
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
return_coco_annotation: typing.Optional[bool] = False
return_binary_maps: typing.Optional[bool] = False
)
→
List[Dict]
Parameters
outputs (MaskFormerForInstanceSegmentation) —
Raw outputs of the model.
threshold (float, optional, defaults to 0.5) —
The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) —
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) —
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
target_sizes (List[Tuple], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction. If left to None, predictions will not be resized.
return_coco_annotation (bool, optional, defaults to False) —
If set to True, segmentation maps are returned in COCO run-length encoding (RLE) format.
return_binary_maps (bool, optional, defaults to False) —
If set to True, segmentation maps are returned as a concatenated tensor of binary segmentation maps
(one per detected instance).
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — A tensor of shape (height, width) where each pixel represents a segment_id or
List[List] run-length encoding (RLE) of the segmentation map if return_coco_annotation is set to
True. Set to None if no mask is found above threshold.
segments_info — A dictionary that contains additional information on each segment.
id — An integer representing the segment_id.
label_id — An integer representing the label / semantic class id corresponding to segment_id.
score — Prediction score of segment with segment_id.
Converts the output of MaskFormerForInstanceSegmentationOutput into instance segmentation predictions. Only
supports PyTorch.
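A short usage sketch, continuing from a forward pass of MaskFormerForInstanceSegmentation as in the overview example (the threshold value is illustrative):
results = processor.post_process_instance_segmentation(
    outputs, threshold=0.5, target_sizes=[image.size[::-1]]
)
prediction = results[0]
if prediction["segmentation"] is not None:   # None if nothing passes the threshold
    print(prediction["segmentation"].shape)  # (height, width) map of segment ids
for segment in prediction["segments_info"]:
    print(segment["id"], segment["label_id"], round(segment["score"], 3))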
post_process_panoptic_segmentation
<
source
>
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
label_ids_to_fuse: typing.Optional[typing.Set[int]] = None
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
)
→
List[Dict]
Parameters
outputs (MaskFormerForInstanceSegmentationOutput) —
The outputs from MaskFormerForInstanceSegmentation.
threshold (float, optional, defaults to 0.5) —
The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) —
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) —
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
label_ids_to_fuse (Set[int], optional) —
The labels in this set will have all their instances fused together. For instance, we could say
there can only be one sky in an image, but several persons, so the label ID for sky would be in that
set, but not the one for person.
target_sizes (List[Tuple], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction in batch. If left to None, predictions will not be
resized.
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — a tensor of shape (height, width) where each pixel represents a segment_id, set
to None if no mask is found above threshold. If target_sizes is specified, segmentation is resized
to the corresponding target_sizes entry.
segments_info — A dictionary that contains additional information on each segment.
id — an integer representing the segment_id.
label_id — An integer representing the label / semantic class id corresponding to segment_id.
was_fused — a boolean, True if label_id was in label_ids_to_fuse, False otherwise.
Multiple instances of the same class / label were fused and assigned a single segment_id.
score — Prediction score of segment with segment_id.
Converts the output of MaskFormerForInstanceSegmentationOutput into image panoptic segmentation
predictions. Only supports PyTorch.
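A short usage sketch, again continuing from a forward pass as in the overview example; for panoptic results one would typically use a checkpoint trained on a panoptic dataset, and the label_ids_to_fuse value below is an illustrative assumption:
results = processor.post_process_panoptic_segmentation(
    outputs, label_ids_to_fuse={0}, target_sizes=[image.size[::-1]]
)
panoptic_map = results[0]["segmentation"]    # (height, width) tensor of segment ids
segments_info = results[0]["segments_info"]  # per-segment id, label_id, was_fused and score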
MaskFormerFeatureExtractor
class transformers.MaskFormerFeatureExtractor
<
source
>
(
*args
**kwargs
)
__call__
<
source
>
(
images
segmentation_maps = None
**kwargs
)
encode_inputs
<
source
>
(
pixel_values_list: typing.List[typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]]
segmentation_maps: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] = None
instance_id_to_semantic_id: typing.Union[typing.List[typing.Dict[int, int]], typing.Dict[int, int], NoneType] = None
ignore_index: typing.Optional[int] = None
reduce_labels: bool = False
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
)
→
BatchFeature
Parameters
pixel_values_list (List[ImageInput]) —
List of images (pixel values) to be padded. Each image should be a tensor of shape (channels, height, width).
segmentation_maps (ImageInput, optional) —
The corresponding semantic segmentation maps with the pixel-wise annotations.
(bool, optional, defaults to True):
Whether or not to pad images up to the largest image in a batch and create a pixel mask.
If left to the default, will return a pixel mask that is:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
instance_id_to_semantic_id (List[Dict[int, int]] or Dict[int, int], optional) —
A mapping between object instance ids and class ids. If passed, segmentation_maps is treated as an
instance segmentation map where each pixel represents an instance id. Can be provided as a single
dictionary with a global/dataset-level mapping or as a list of dictionaries (one per image), to map
instance ids in each image separately.
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of NumPy arrays. If set to 'pt', return PyTorch torch.Tensor
objects.
Returns
BatchFeature
A BatchFeature with the following fields:
pixel_values — Pixel values to be fed to a model.
pixel_mask — Pixel mask to be fed to a model (when =True or if pixel_mask is in
self.model_input_names).
mask_labels — Optional list of mask labels of shape (labels, height, width) to be fed to a model
(when annotations are provided).
class_labels — Optional list of class labels of shape (labels) to be fed to a model (when
annotations are provided). They identify the labels of mask_labels, e.g. the label of
mask_labels[i][j] is class_labels[i][j].
Pad images up to the largest image in a batch and create a corresponding pixel_mask.
MaskFormer addresses semantic segmentation with a mask classification paradigm, thus input segmentation maps
will be converted to lists of binary masks and their respective labels. Let’s see an example: assuming
segmentation_maps = [[2,6,7,9]], the output will contain mask_labels = [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]] (four binary masks) and class_labels = [2,6,7,9], the labels for
each mask.
post_process_semantic_segmentation
<
source
>
(
outputs
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
)
→
List[torch.Tensor]
Parameters
outputs (MaskFormerForInstanceSegmentation) —
Raw outputs of the model.
target_sizes (List[Tuple[int, int]], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction. If left to None, predictions will not be resized.
Returns
List[torch.Tensor]
A list of length batch_size, where each item is a semantic segmentation map of shape (height, width)
corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each
torch.Tensor corresponds to a semantic class id.
Converts the output of MaskFormerForInstanceSegmentation into semantic segmentation maps. Only supports
PyTorch.
post_process_instance_segmentation
<
source
>
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
return_coco_annotation: typing.Optional[bool] = False
return_binary_maps: typing.Optional[bool] = False
)
→
List[Dict]
Parameters
outputs (MaskFormerForInstanceSegmentation) —
Raw outputs of the model.
threshold (float, optional, defaults to 0.5) —
The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) —
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) —
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
target_sizes (List[Tuple], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction. If left to None, predictions will not be resized.
return_coco_annotation (bool, optional, defaults to False) —
If set to True, segmentation maps are returned in COCO run-length encoding (RLE) format.
return_binary_maps (bool, optional, defaults to False) —
If set to True, segmentation maps are returned as a concatenated tensor of binary segmentation maps
(one per detected instance).
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — A tensor of shape (height, width) where each pixel represents a segment_id or
List[List] run-length encoding (RLE) of the segmentation map if return_coco_annotation is set to
True. Set to None if no mask is found above threshold.
segments_info — A dictionary that contains additional information on each segment.
id — An integer representing the segment_id.
label_id — An integer representing the label / semantic class id corresponding to segment_id.
score — Prediction score of segment with segment_id.
Converts the output of MaskFormerForInstanceSegmentationOutput into instance segmentation predictions. Only
supports PyTorch.
post_process_panoptic_segmentation
<
source
>
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
label_ids_to_fuse: typing.Optional[typing.Set[int]] = None
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
)
→
List[Dict]
Parameters
outputs (MaskFormerForInstanceSegmentationOutput) —
The outputs from MaskFormerForInstanceSegmentation.
threshold (float, optional, defaults to 0.5) —
The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) —
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) —
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
label_ids_to_fuse (Set[int], optional) —
The labels in this set will have all their instances fused together. For instance, we could say
there can only be one sky in an image, but several persons, so the label ID for sky would be in that
set, but not the one for person.
target_sizes (List[Tuple], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction in batch. If left to None, predictions will not be
resized.
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — a tensor of shape (height, width) where each pixel represents a segment_id, set
to None if no mask is found above threshold. If target_sizes is specified, segmentation is resized
to the corresponding target_sizes entry.
segments_info — A dictionary that contains additional information on each segment.
id — an integer representing the segment_id.
label_id — An integer representing the label / semantic class id corresponding to segment_id.
was_fused — a boolean, True if label_id was in label_ids_to_fuse, False otherwise.
Multiple instances of the same class / label were fused and assigned a single segment_id.
score — Prediction score of segment with segment_id.
Converts the output of MaskFormerForInstanceSegmentationOutput into image panoptic segmentation
predictions. Only supports PyTorch.
MaskFormerModel
class transformers.MaskFormerModel
<
source
>
(
config: MaskFormerConfig
)
Parameters
config (MaskFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MaskFormer Model outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
pixel_values: Tensor
pixel_mask: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.maskformer.modeling_maskformer.MaskFormerModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
MaskFormerImageProcessor.__call__() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of Detr’s decoder attention layers.
return_dict (bool, optional) —
Whether or not to return a MaskFormerModelOutput instead of a plain tuple.
Returns
transformers.models.maskformer.modeling_maskformer.MaskFormerModelOutput or tuple(torch.FloatTensor)
A transformers.models.maskformer.modeling_maskformer.MaskFormerModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MaskFormerConfig) and inputs.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Last hidden states (final feature map) of the last stage of the encoder model (backbone).
pixel_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Last hidden states (final feature map) of the last stage of the pixel decoder model (FPN).
transformer_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Last hidden states (final feature map) of the last stage of the transformer decoder model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the encoder
model at the output of each stage.
pixel_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the pixel
decoder model at the output of each stage.
transformer_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the
transformer decoder at the output of each stage.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor containing encoder_hidden_states, pixel_decoder_hidden_states and
decoder_hidden_states.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights from Detr’s decoder after the attention softmax, used to compute the
weighted average in the self-attention heads.
The MaskFormerModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, MaskFormerModel
from PIL import Image
import requests
# load MaskFormer fine-tuned on ADE20k semantic segmentation
image_processor = AutoImageProcessor.from_pretrained("facebook/maskformer-swin-base-ade")
model = MaskFormerModel.from_pretrained("facebook/maskformer-swin-base-ade")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt")
# forward pass
outputs = model(**inputs)
# the decoder of MaskFormer outputs hidden states of shape (batch_size, num_queries, hidden_size)
transformer_decoder_last_hidden_state = outputs.transformer_decoder_last_hidden_state
list(transformer_decoder_last_hidden_state.shape)
[1, 100, 256]
MaskFormerForInstanceSegmentation
class transformers.MaskFormerForInstanceSegmentation
(
config: MaskFormerConfig
)
forward
(
pixel_values: Tensor
mask_labels: typing.Optional[typing.List[torch.Tensor]] = None
class_labels: typing.Optional[typing.List[torch.Tensor]] = None
pixel_mask: typing.Optional[torch.Tensor] = None
output_auxiliary_logits: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.maskformer.modeling_maskformer.MaskFormerForInstanceSegmentationOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
MaskFormerImageProcessor.__call__() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of Detr’s decoder attention layers.
return_dict (bool, optional) —
Whether or not to return a MaskFormerForInstanceSegmentationOutput instead of a plain tuple.
mask_labels (List[torch.Tensor], optional) —
List of mask labels of shape (num_labels, height, width) to be fed to the model.
class_labels (List[torch.LongTensor], optional) —
List of target class labels of shape (num_labels,) to be fed to the model. They identify the labels of
mask_labels, e.g. the label of mask_labels[i][j] is class_labels[i][j] (a minimal training sketch is shown
after the examples below).
Returns
transformers.models.maskformer.modeling_maskformer.MaskFormerForInstanceSegmentationOutput or tuple(torch.FloatTensor)
A transformers.models.maskformer.modeling_maskformer.MaskFormerForInstanceSegmentationOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MaskFormerConfig) and inputs.
loss (torch.Tensor, optional) — The computed loss, returned when labels are present.
class_queries_logits (torch.FloatTensor) — A tensor of shape (batch_size, num_queries, num_labels + 1) representing the proposed classes for each
query. Note the + 1 is needed because we incorporate the null class.
masks_queries_logits (torch.FloatTensor) — A tensor of shape (batch_size, num_queries, height, width) representing the proposed masks for each
query.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Last hidden states (final feature map) of the last stage of the encoder model (backbone).
pixel_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Last hidden states (final feature map) of the last stage of the pixel decoder model (FPN).
transformer_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Last hidden states (final feature map) of the last stage of the transformer decoder model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the encoder
model at the output of each stage.
pixel_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the pixel
decoder model at the output of each stage.
transformer_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the transformer decoder at the output
of each stage.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor containing encoder_hidden_states, pixel_decoder_hidden_states and
decoder_hidden_states.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights from Detr’s decoder after the attention softmax, used to compute the
weighted average in the self-attention heads.
The MaskFormerForInstanceSegmentation forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Semantic segmentation example:
from transformers import AutoImageProcessor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
# load MaskFormer fine-tuned on ADE20k semantic segmentation
image_processor = AutoImageProcessor.from_pretrained("facebook/maskformer-swin-base-ade")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")
url = (
... "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
... )
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to image_processor for postprocessing
predicted_semantic_map = image_processor.post_process_semantic_segmentation(
... outputs, target_sizes=[image.size[::-1]]
... )[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
list(predicted_semantic_map.shape)
[512, 683]
Panoptic segmentation example:
from transformers import AutoImageProcessor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
# load MaskFormer fine-tuned on COCO panoptic segmentation
image_processor = AutoImageProcessor.from_pretrained("facebook/maskformer-swin-base-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to image_processor for postprocessing
result = image_processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
list(predicted_panoptic_map.shape)
[480, 640]
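Besides inference, MaskFormerForInstanceSegmentation.forward() also accepts mask_labels and class_labels (described above) and then returns a loss. The following is a minimal, hedged sketch with a randomly initialized model and random labels, meant only to illustrate the expected shapes:
import torch
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation

# randomly initialized model so no checkpoint download is needed; num_labels is arbitrary here
model = MaskFormerForInstanceSegmentation(MaskFormerConfig(num_labels=10))

pixel_values = torch.randn(1, 3, 384, 384)
mask_labels = [(torch.rand(2, 384, 384) > 0.5).float()]  # two binary instance masks for the single image
class_labels = [torch.tensor([1, 3])]  # one class id per mask

outputs = model(pixel_values=pixel_values, mask_labels=mask_labels, class_labels=class_labels)
print(outputs.loss)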
GPT Neo
Overview
The GPT Neo model was released in the EleutherAI/gpt-neo repository by Sid
Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. It is a GPT-2-like causal language model trained on
the Pile dataset.
The architecture is similar to GPT-2, except that GPT Neo uses local attention in every other layer with a window
size of 256 tokens.
This model was contributed by valhalla.
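As a quick way to see this alternating layer pattern, you can inspect the default configuration (a minimal sketch; the attribute names follow the GPTNeoConfig documentation below):
from transformers import GPTNeoConfig

config = GPTNeoConfig()  # defaults: 24 layers alternating global / local attention
print(config.attention_types)  # [[['global', 'local'], 12]]
print(config.window_size)  # 256, the local attention window size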
Generation
The generate() method can be used to generate text using the GPT Neo model.
from transformers import GPTNeoForCausalLM, GPT2Tokenizer
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
prompt = (
... "In a shocking finding, scientists discovered a herd of unicorns living in a remote, "
... "previously unexplored valley, in the Andes Mountains. Even more surprising to the "
... "researchers was the fact that the unicorns spoke perfect English."
... )
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
... input_ids,
... do_sample=True,
... temperature=0.9,
... max_length=100,
... )
gen_text = tokenizer.batch_decode(gen_tokens)[0]
Documentation resources
Text classification task guide
Causal language modeling task guide
GPTNeoConfig
class transformers.GPTNeoConfig
(
vocab_size = 50257
max_position_embeddings = 2048
hidden_size = 2048
num_layers = 24
attention_types = [[['global', 'local'], 12]]
num_heads = 16
intermediate_size = None
window_size = 256
activation_function = 'gelu_new'
resid_dropout = 0.0
embed_dropout = 0.0
attention_dropout = 0.0
classifier_dropout = 0.1
layer_norm_epsilon = 1e-05
initializer_range = 0.02
use_cache = True
bos_token_id = 50256
eos_token_id = 50256
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50257) —
Vocabulary size of the GPT Neo model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling GPTNeoModel.
max_position_embeddings (int, optional, defaults to 2048) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
hidden_size (int, optional, defaults to 2048) —
Dimensionality of the encoder layers and the pooler layer.
num_layers (int, optional, defaults to 24) —
Number of hidden layers in the Transformer encoder.
attention_types (List, optional, defaults to [[["global", "local"], 12]]) —
The type of attention for each layer, given as a list of the format [[["attention_type"], num_layers]], e.g. for a
24-layer model [[["global"], 24]] or [[["global", "local"], 12]]. Choose the value of attention_type from
["global", "local"].
num_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. If None (the
default), it is set to 4 * hidden_size, i.e. 8192 for the default configuration.
window_size (int, optional, defaults to 256) —
The size of the sliding window for local attention.
activation_function (str or function, optional, defaults to "gelu_new") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
resid_dropout (float, optional, defaults to 0.0) —
Residual dropout used in the attention pattern.
embed_dropout (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
classifier_dropout (float, optional, defaults to 0.1) —
Argument used when doing token classification, used in the model GPTNeoForTokenClassification. The
dropout ratio for the hidden layer.
layer_norm_epsilon (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
bos_token_id (int, optional, defaults to 50256) —
The id of the beginning of sentence token in the vocabulary.
eos_token_id (int, optional, defaults to 50256) —
The id of the end of sentence token in the vocabulary.
This is the configuration class to store the configuration of a GPTNeoModel. It is used to instantiate a GPT
Neo model according to the specified arguments, defining the model architecture. Instantiating a configuration with
the defaults will yield a similar configuration to that of the GPTNeo
EleutherAI/gpt-neo-1.3B architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import GPTNeoConfig, GPTNeoModel
# Initializing a GPTNeo EleutherAI/gpt-neo-1.3B style configuration
configuration = GPTNeoConfig()
# Initializing a model (with random weights) from the EleutherAI/gpt-neo-1.3B style configuration
model = GPTNeoModel(configuration)
# Accessing the model configuration
configuration = model.config
GPTNeoModel
class transformers.GPTNeoModel
(
config
)
Parameters
config (GPTNeoConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare GPT Neo Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.num_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTNeoConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The GPTNeoModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, GPTNeoModel
import torch
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = GPTNeoModel.from_pretrained("EleutherAI/gpt-neo-1.3B")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
GPTNeoForCausalLM
class transformers.GPTNeoForCausalLM
(
config
)
Parameters
config (GPTNeoConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT Neo Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.num_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTNeoConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The GPTNeoForCausalLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, GPTNeoForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits
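In practice you would call generate() for text generation, but the past_key_values mechanism described above can also be driven by hand. Below is a minimal greedy-decoding sketch (assumption: the smaller EleutherAI/gpt-neo-125m checkpoint, chosen only to keep memory low):
import torch
from transformers import AutoTokenizer, GPTNeoForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")

generated = tokenizer("Hello, my dog is", return_tensors="pt").input_ids
next_input = generated
past_key_values = None
for _ in range(5):
    with torch.no_grad():
        out = model(input_ids=next_input, past_key_values=past_key_values, use_cache=True)
    past_key_values = out.past_key_values  # cached keys/values for every layer
    next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
    generated = torch.cat([generated, next_token], dim=-1)
    next_input = next_token  # only the new token is fed on the next step
print(tokenizer.decode(generated[0]))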
GPTNeoForQuestionAnswering
class transformers.GPTNeoForQuestionAnswering
(
config
)
Parameters
config (GPTNeoConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT-Neo Model transformer with a span classification head on top for extractive question-answering tasks like
SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.num_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTNeoConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GPTNeoForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
The real GPT Neo checkpoints are all fairly large. If you get out-of-memory errors when loading
EleutherAI/gpt-neo-1.3B, you can try adding device_map="auto" in the from_pretrained call.
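For example (a hedged sketch; it assumes the accelerate package is installed):
from transformers import GPTNeoForQuestionAnswering

# lets Accelerate place layers on the available GPUs and offload the rest to CPU RAM
model = GPTNeoForQuestionAnswering.from_pretrained("EleutherAI/gpt-neo-1.3B", device_map="auto")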
Example:
from transformers import AutoTokenizer, GPTNeoForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = GPTNeoForQuestionAnswering.from_pretrained("EleutherAI/gpt-neo-1.3B")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
GPTNeoForSequenceClassification
class transformers.GPTNeoForSequenceClassification
(
config
)
Parameters
config (GPTNeoConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPTNeo Model transformer with a sequence classification head on top (linear layer).
GPTNeoForSequenceClassification uses the last token in order to do the classification, as other causal models
(e.g. GPT-1) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If
no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in
each row of the batch).
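Because the GPT Neo tokenizer has no padding token by default, batched classification typically reuses the EOS token as padding and sets pad_token_id on the model config. A minimal, hedged sketch (the checkpoint has no pretrained classification head, so the head is randomly initialized):
import torch
from transformers import AutoTokenizer, GPTNeoForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
tokenizer.pad_token = tokenizer.eos_token  # reuse EOS as the padding token

model = GPTNeoForSequenceClassification.from_pretrained("EleutherAI/gpt-neo-125m", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id  # so the model can locate the last real token in each row

batch = tokenizer(["I love this movie.", "Terrible."], padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits  # shape: (batch_size, num_labels)
print(logits.argmax(dim=-1))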
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.num_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTNeoConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head))
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GPTNeoForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, GPTNeoForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = GPTNeoForSequenceClassification.from_pretrained("EleutherAI/gpt-neo-1.3B")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = GPTNeoForSequenceClassification.from_pretrained("EleutherAI/gpt-neo-1.3B", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, GPTNeoForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = GPTNeoForSequenceClassification.from_pretrained("EleutherAI/gpt-neo-1.3B", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = GPTNeoForSequenceClassification.from_pretrained(
... "EleutherAI/gpt-neo-1.3B", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
GPTNeoForTokenClassification
class transformers.GPTNeoForTokenClassification
(
config
)
Parameters
config (GPTNeoConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
GPT Neo model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.num_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTNeoConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GPTNeoForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, GPTNeoForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
model = GPTNeoForTokenClassification.from_pretrained("EleutherAI/gpt-neo-125m")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
0.25
FlaxGPTNeoModel
class transformers.FlaxGPTNeoModel
(
config: GPTNeoConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (GPTNeoConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The bare GPTNeo Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
position_ids = None
params: dict = None
past_key_values: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length. Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTNeoConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxGPTNeoPreTrainedModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxGPTNeoModel
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = FlaxGPTNeoModel.from_pretrained("EleutherAI/gpt-neo-1.3B")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
FlaxGPTNeoForCausalLM
class transformers.FlaxGPTNeoForCausalLM
(
config: GPTNeoConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (GPTNeoConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The GPTNeo Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
position_ids = None
params: dict = None
past_key_values: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length. Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTNeoConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxGPTNeoPreTrainedModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxGPTNeoForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = FlaxGPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
# retrieve logits for the next token
next_token_logits = outputs.logits[:, -1]
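As a possible (unofficial) next step, the most likely next token can be selected greedily from these logits and decoded back to text; this continuation is a sketch and is not part of the library example:
import jax.numpy as jnp
# greedy choice of the next token id from the last position's logits
next_token_id = int(jnp.argmax(next_token_logits, axis=-1)[0])
print(tokenizer.decode([next_token_id]))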
Jukebox
Overview
The Jukebox model was proposed in Jukebox: A generative model for music
by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford,
and Ilya Sutskever. It introduces a generative music model which can produce minute-long samples that can be conditioned on
an artist, genres and lyrics.
The abstract from the paper is the following:
We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples, along with model weights and code.
As shown in the following figure, Jukebox is made of 3 priors which are decoder-only models. They follow the architecture described in Generating Long Sequences with Sparse Transformers, modified to support a longer context length.
First, an autoencoder is used to encode the text lyrics. Next, the first prior (also called top_prior) attends to the last hidden states extracted from the lyrics encoder. Each prior is linked to the previous one via an AudioConditioner module. The AudioConditioner upsamples the outputs of the previous prior to raw tokens at a certain audio frames-per-second resolution.
The metadata such as artist, genre and timing are passed to each prior, in the form of a start token and a positional embedding for the timing data. The hidden states are mapped to the closest codebook vector from the VQ-VAE in order to convert them to raw audio.
Tips:
This model only supports inference. This is for a few reasons, mostly because training would require an extremely large amount of memory. Feel free to open a PR and add what’s missing to have a full integration with the Hugging Face Trainer!
This model is very slow: generating a minute of audio with the 5b top prior takes around 8 hours on a V100 GPU. To automatically handle the device on which the model should execute, use accelerate.
Contrary to the paper, the order of the priors goes from 0 to 1 as it felt more intuitive: we sample starting from 0.
Primed sampling (conditioning the sampling on raw audio) requires more memory than ancestral sampling and should be used with fp16 set to True.
This model was contributed by Arthur Zucker.
The original code can be found here.
JukeboxConfig
class transformers.JukeboxConfig
(
vqvae_config = None
prior_config_list = None
nb_priors = 3
sampling_rate = 44100
timing_dims = 64
min_duration = 0
max_duration = 600.0
max_nb_genres = 5
metadata_conditioning = True
**kwargs
)
Parameters
vqvae_config (JukeboxVQVAEConfig, optional) —
Configuration for the JukeboxVQVAE model.
prior_config_list (List[JukeboxPriorConfig], optional) —
List of the configs for each of the JukeboxPrior of the model. The original architecture uses 3 priors.
nb_priors (int, optional, defaults to 3) —
Number of prior models that will sequentially sample tokens. Each prior is a conditional autoregressive
(decoder) model, apart from the top prior, which can also include a lyric encoder. The available models were
trained using a top prior and 2 upsampler priors.
sampling_rate (int, optional, defaults to 44100) —
Sampling rate of the raw audio.
timing_dims (int, optional, defaults to 64) —
Dimensions of the JukeboxRangeEmbedding layer which is equivalent to traditional positional embedding
layer. The timing embedding layer converts the absolute and relative position in the currently sampled
audio to a tensor of length timing_dims that will be added to the music tokens.
min_duration (int, optional, defaults to 0) —
Minimum duration of the audios to generate
max_duration (float, optional, defaults to 600.0) —
Maximum duration of the audios to generate
max_nb_genres (int, optional, defaults to 5) —
Maximum number of genres that can be used to condition a single sample.
metadata_conditioning (bool, optional, defaults to True) —
Whether or not to use metadata conditioning, corresponding to the artist, the genre and the min/maximum
duration.
This is the configuration class to store the configuration of a JukeboxModel.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information. Instantiating a configuration with the defaults will
yield a similar configuration to that of
openai/jukebox-1b-lyrics architecture.
The downsampling and stride are used to determine the downsampling of the input sequence. For example, downsampling =
(5, 3) and strides = (2, 2) will downsample the audio by 2**5 = 32 to get the first level of codes, and by 2**8 = 256
to get the second level of codes. This is mostly true for training the top level prior and the upsamplers.
Example:
from transformers import JukeboxModel, JukeboxConfig
# Initializing a Jukebox configuration
configuration = JukeboxConfig()
# Initializing a model from the configuration
model = JukeboxModel(configuration)
# Accessing the model configuration
configuration = model.config
from_configs
(
prior_configs: typing.List[transformers.models.jukebox.configuration_jukebox.JukeboxPriorConfig]
vqvae_config: JukeboxVQVAEConfig
**kwargs
)
→
JukeboxConfig
Returns
JukeboxConfig
An instance of a configuration object
Instantiate a JukeboxConfig (or a derived class) from a list of Jukebox prior configurations and a Jukebox VQ-VAE
configuration.
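A minimal sketch of how from_configs might be used, assuming default sub-configurations (an illustration, not an official example):
from transformers import JukeboxConfig, JukeboxPriorConfig, JukeboxVQVAEConfig
# build a JukeboxConfig from three default prior configs and a default VQ-VAE config
prior_configs = [JukeboxPriorConfig() for _ in range(3)]
config = JukeboxConfig.from_configs(prior_configs, JukeboxVQVAEConfig())
print(config.nb_priors)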
to_dict
(
)
→
Dict[str, any]
Returns
Dict[str, any]
Dictionary of all the attributes that make up this configuration instance.
Serializes this instance to a Python dictionary. Overrides the default to_dict().
JukeboxPriorConfig
class transformers.JukeboxPriorConfig
(
act_fn = 'quick_gelu'
level = 0
alignment_head = 2
alignment_layer = 68
attention_multiplier = 0.25
attention_pattern = 'enc_dec_with_lyrics'
attn_dropout = 0
attn_res_scale = False
blocks = 64
conv_res_scale = None
num_layers = 72
emb_dropout = 0
encoder_config = None
encoder_loss_fraction = 0.4
hidden_size = 2048
init_scale = 0.2
is_encoder_decoder = True
lyric_vocab_size = 80
mask = False
max_duration = 600
max_nb_genres = 1
merged_decoder = True
metadata_conditioning = True
metadata_dims = [604, 7898]
min_duration = 0
mlp_multiplier = 1.0
music_vocab_size = 2048
n_ctx = 6144
n_heads = 2
nb_relevant_lyric_tokens = 384
res_conv_depth = 3
res_conv_width = 128
res_convolution_multiplier = 1
res_dilation_cycle = None
res_dilation_growth_rate = 1
res_downs_t = [3, 2, 2]
res_strides_t = [2, 2, 2]
resid_dropout = 0
sampling_rate = 44100
spread = None
timing_dims = 64
zero_out = False
**kwargs
)
Parameters
act_fn (str, optional, defaults to "quick_gelu") —
Activation function.
alignment_head (int, optional, defaults to 2) —
Head that is responsible for the alignment between lyrics and music. Only used to compute the lyric-to-audio
alignment.
alignment_layer (int, optional, defaults to 68) —
Index of the layer that is responsible for the alignment between lyrics and music. Only used to compute the
lyric-to-audio alignment.
attention_multiplier (float, optional, defaults to 0.25) —
Multiplier coefficient used to define the hidden dimension of the attention layers. 0.25 means that
0.25*width of the model will be used.
attention_pattern (str, optional, defaults to "enc_dec_with_lyrics") —
Which attention pattern to use for the decoder.
attn_dropout (int, optional, defaults to 0) —
Dropout probability for the post-attention layer dropout in the decoder.
attn_res_scale (bool, optional, defaults to False) —
Whether or not to scale the residuals in the attention conditioner block.
blocks (int, optional, defaults to 64) —
Number of blocks used in the block_attn. A sequence of length seq_len is factored as [blocks, seq_len // blocks] in the JukeboxAttention layer.
conv_res_scale (int, optional) —
Whether or not to scale the residuals in the conditioner block. Since the top level prior does not have a
conditioner, the default value is None and should not be modified.
num_layers (int, optional, defaults to 72) —
Number of layers of the transformer architecture.
emb_dropout (int, optional, defaults to 0) —
Embedding dropout used in the lyric decoder.
encoder_config (JukeboxPriorConfig, optional) —
Configuration of the encoder which models the prior on the lyrics.
encoder_loss_fraction (float, optional, defaults to 0.4) —
Multiplication factor used in front of the lyric encoder loss.
hidden_size (int, optional, defaults to 2048) —
Hidden dimension of the attention layers.
init_scale (float, optional, defaults to 0.2) —
Initialization scales for the prior modules.
is_encoder_decoder (bool, optional, defaults to True) —
Whether or not the prior is an encoder-decoder model. In case it is not, and nb_relevant_lyric_tokens is
greater than 0, the encoder args should be specified for the lyric encoding.
mask (bool, optional, defaults to False) —
Whether or not to mask the previous positions in the attention.
max_duration (int, optional, defaults to 600) —
Maximum supported duration of the generated song in seconds.
max_nb_genres (int, optional, defaults to 1) —
Maximum number of genres that can be used to condition the model.
merged_decoder (bool, optional, defaults to True) —
Whether or not the decoder and the encoder inputs are merged. This is used for the separated
encoder-decoder architecture
metadata_conditioning (bool, optional, defaults to True) —
Whether or not to condition on the artist and genre metadata.
metadata_dims (List[int], optional, defaults to [604, 7898]) —
Number of genres and the number of artists that were used to train the embedding layers of the prior
models.
min_duration (int, optional, defaults to 0) —
Minimum duration of the generated audio on which the model was trained.
mlp_multiplier (float, optional, defaults to 1.0) —
Multiplier coefficient used to define the hidden dimension of the MLP layers. 0.25 means that 0.25*width of
the model will be used.
music_vocab_size (int, optional, defaults to 2048) —
Number of different music tokens. Should be similar to the JukeboxVQVAEConfig.nb_discrete_codes.
n_ctx (int, optional, defaults to 6144) —
Number of context tokens for each prior. The context tokens are the music tokens that are attended to when
generating music tokens.
n_heads (int, optional, defaults to 2) —
Number of attention heads.
nb_relevant_lyric_tokens (int, optional, defaults to 384) —
Number of lyric tokens that are used when sampling a single window of length n_ctx
res_conv_depth (int, optional, defaults to 3) —
Depth of the JukeboxDecoderConvBock used to upsample the previously sampled audio in the
JukeboxMusicTokenConditioner.
res_conv_width (int, optional, defaults to 128) —
Width of the JukeboxDecoderConvBock used to upsample the previously sampled audio in the
JukeboxMusicTokenConditioner.
res_convolution_multiplier (int, optional, defaults to 1) —
Multiplier used to scale the hidden_dim of the JukeboxResConv1DBlock.
res_dilation_cycle (int, optional) —
Dilation cycle used to define the JukeboxMusicTokenConditioner. Usually similar to the ones used in the
corresponding level of the VQVAE. The first prior does not use it as it is not conditioned on upper level
tokens.
res_dilation_growth_rate (int, optional, defaults to 1) —
Dilation growth rate used between each convolutional block of the JukeboxMusicTokenConditioner.
res_downs_t (List[int], optional, defaults to [3, 2, 2]) —
Downsampling rates used in the audio conditioning network
res_strides_t (List[int], optional, defaults to [2, 2, 2]) —
Striding used in the audio conditioning network
resid_dropout (int, optional, defaults to 0) —
Residual dropout used in the attention pattern.
sampling_rate (int, optional, defaults to 44100) —
Sampling rate used for training.
spread (int, optional) —
Spread used in the summary_spread_attention pattern
timing_dims (int, optional, defaults to 64) —
Dimension of the timing embedding.
zero_out (bool, optional, defaults to False) —
Whether or not to zero out convolution weights when initializing.
This is the configuration class to store the configuration of a JukeboxPrior. It is used to instantiate a
JukeboxPrior according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the top level prior from the
openai/jukebox-1b-lyrics architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
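A minimal sketch instantiating this configuration with its defaults (for illustration only):
from transformers import JukeboxPriorConfig
# prior configuration with default values, similar to the top-level prior of openai/jukebox-1b-lyrics
prior_config = JukeboxPriorConfig()
print(prior_config.hidden_size, prior_config.n_ctx)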
to_dict
(
)
→
Dict[str, any]
Returns
Dict[str, any]
Dictionary of all the attributes that make up this configuration instance.
Serializes this instance to a Python dictionary. Overrides the default to_dict().
JukeboxVQVAEConfig
class transformers.JukeboxVQVAEConfig
(
act_fn = 'relu'
nb_discrete_codes = 2048
commit = 0.02
conv_input_shape = 1
conv_res_scale = False
embed_dim = 64
hop_fraction = [0.125, 0.5, 0.5]
levels = 3
lmu = 0.99
multipliers = [2, 1, 1]
res_conv_depth = 4
res_conv_width = 32
res_convolution_multiplier = 1
res_dilation_cycle = None
res_dilation_growth_rate = 3
res_downs_t = [3, 2, 2]
res_strides_t = [2, 2, 2]
sample_length = 1058304
init_scale = 0.2
zero_out = False
**kwargs
)
Parameters
act_fn (str, optional, defaults to "relu") —
Activation function of the model.
nb_discrete_codes (int, optional, defaults to 2048) —
Number of codes of the VQVAE.
commit (float, optional, defaults to 0.02) —
Commit loss multiplier.
conv_input_shape (int, optional, defaults to 1) —
Number of audio channels.
conv_res_scale (bool, optional, defaults to False) —
Whether or not to scale the residuals of the JukeboxResConv1DBlock.
embed_dim (int, optional, defaults to 64) —
Embedding dimension of the codebook vectors.
hop_fraction (List[float], optional, defaults to [0.125, 0.5, 0.5]) —
Fraction of non-intersecting window used when continuing the sampling process.
levels (int, optional, defaults to 3) —
Number of hierarchical levels used in the VQVAE.
lmu (float, optional, defaults to 0.99) —
Exponential moving average coefficient used in the codebook update. For more detail, refer to Appendix A.1
of the original VQ-VAE paper.
multipliers (List[int], optional, defaults to [2, 1, 1]) —
Depth and width multipliers used for each level. Used on the res_conv_width and res_conv_depth
res_conv_depth (int, optional, defaults to 4) —
Depth of the encoder and decoder block. If no multipliers are used, this is the same for each level.
res_conv_width (int, optional, defaults to 32) —
Width of the encoder and decoder block. If no multipliers are used, this is the same for each level.
res_convolution_multiplier (int, optional, defaults to 1) —
Scaling factor of the hidden dimension used in the JukeboxResConv1DBlock.
res_dilation_cycle (int, optional) —
Dilation cycle value used in the JukeboxResnet. If an int is used, each new Conv1 block will have a depth
reduced by a power of res_dilation_cycle.
res_dilation_growth_rate (int, optional, defaults to 3) —
Resnet dilation growth rate used in the VQVAE (dilation_growth_rate ** depth)
res_downs_t (List[int], optional, defaults to [3, 2, 2]) —
Downsampling rate for each level of the hierarchical VQ-VAE.
res_strides_t (List[int], optional, defaults to [2, 2, 2]) —
Stride used for each level of the hierarchical VQ-VAE.
sample_length (int, optional, defaults to 1058304) —
Provides the max input shape of the VQVAE. Is used to compute the input shape of each level.
init_scale (float, optional, defaults to 0.2) —
Initialization scale.
zero_out (bool, optional, defaults to False) —
Whether or not to zero out convolution weights when initializing.
This is the configuration class to store the configuration of a JukeboxVQVAE. It is used to instantiate a
JukeboxVQVAE according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the VQVAE from
openai/jukebox-1b-lyrics architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
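A minimal sketch instantiating this configuration with its defaults (for illustration only):
from transformers import JukeboxVQVAEConfig
# default VQ-VAE configuration, similar to the VQ-VAE of openai/jukebox-1b-lyrics
vqvae_config = JukeboxVQVAEConfig()
print(vqvae_config.levels, vqvae_config.nb_discrete_codes)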
JukeboxTokenizer
class transformers.JukeboxTokenizer
(
artists_file
genres_file
lyrics_file
version = ['v3', 'v2', 'v2']
max_n_lyric_tokens = 512
n_genres = 5
unk_token = '<|endoftext|>'
**kwargs
)
Parameters
artists_file (str) —
Path to the vocabulary file which contains a mapping between artists and ids. The default file supports
both “v2” and “v3”
genres_file (str) —
Path to the vocabulary file which contains a mapping between genres and ids.
lyrics_file (str) —
Path to the vocabulary file which contains the accepted characters for the lyrics tokenization.
version (List[str], optional, defaults to ["v3", "v2", "v2"]) —
List of the tokenizer versions. The 5b-lyrics model’s top-level prior was trained using v3 instead of
v2.
n_genres (int, optional, defaults to 5) —
Maximum number of genres to use for composition.
max_n_lyric_tokens (int, optional, defaults to 512) —
Maximum number of lyric tokens to keep.
unk_token (str, optional, defaults to "<|endoftext|>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
Constructs a Jukebox tokenizer. Jukebox can be conditioned on 3 different inputs:
Artists: unique ids are associated with each artist from the provided dictionary.
Genres: unique ids are associated with each genre from the provided dictionary.
Lyrics: character-based tokenization. Must be initialized with the list of characters that are inside the
vocabulary.
This tokenizer does not require training. It can process a varying number of inputs, since the model can be conditioned on the three different queries. If None is provided, default values will be used, depending on the number of genres on which the model should be conditioned (n_genres).
from transformers import JukeboxTokenizer
tokenizer = JukeboxTokenizer.from_pretrained("openai/jukebox-1b-lyrics")
tokenizer("Alan Jackson", "Country Rock", "old town road")["input_ids"]
[tensor([[ 0, 0, 0, 6785, 546, 41, 38, 30, 76, 46, 41, 49,
40, 76, 44, 41, 27, 30]]), tensor([[ 0, 0, 0, 145, 0]]), tensor([[ 0, 0, 0, 145, 0]])]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
If nothing is provided, the genres and the artist will either be selected randomly or set to None.
This tokenizer inherits from PreTrainedTokenizer, which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
However, the code does not allow that and only supports composing from various genres.
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
Parameters
save_directory (str) —
A path to the directory in which to save the vocabulary. It will be created if it doesn’t exist.
filename_prefix (Optional[str], optional) —
A prefix to add to the names of the files saved by the tokenizer.
Saves the tokenizer’s vocabulary dictionary to the provided save_directory.
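A short usage sketch (the output directory below is arbitrary):
import os
from transformers import JukeboxTokenizer
tokenizer = JukeboxTokenizer.from_pretrained("openai/jukebox-1b-lyrics")
os.makedirs("./jukebox_tokenizer", exist_ok=True)
# writes the artists, genres and lyrics vocabulary files into the directory
tokenizer.save_vocabulary("./jukebox_tokenizer")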
JukeboxModel
class transformers.JukeboxModel
(
config
)
Parameters
config (JukeboxConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare JUKEBOX Model used for music generation. 4 sampling techniques are supported: primed_sample, upsample,
continue_sample and ancestral_sample. It does not have a forward method as the training is not end-to-end. If
you want to fine-tune the model, it is recommended to use the JukeboxPrior class and train each prior
individually.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
ancestral_sample
(
labels
n_samples = 1
**sampling_kwargs
)
Parameters
labels (List[torch.LongTensor]) —
List of length n_samples, each of shape (self.levels, 4 + self.config.max_nb_genre + lyric_sequence_length), containing metadata such as artist_id, genre_id and the full list of lyric tokens
which are used to condition the generation.
n_samples (int, optional, defaults to 1) —
Number of samples to be generated in parallel.
Generates music tokens based on the provided labels. Will start at the desired prior level and automatically upsample the sequence. If you want to create the audio, you should call model.decode(tokens), which will use
the VQ-VAE decoder to convert the music tokens to raw audio.
Example:
from transformers import AutoTokenizer, JukeboxModel, set_seed
model = JukeboxModel.from_pretrained("openai/jukebox-1b-lyrics", min_duration=0).eval()
tokenizer = AutoTokenizer.from_pretrained("openai/jukebox-1b-lyrics")
lyrics = "Hey, are you awake? Can you talk to me?"
artist = "Zac Brown Band"
genre = "Country"
metas = tokenizer(artist=artist, genres=genre, lyrics=lyrics)
set_seed(0)
music_tokens = model.ancestral_sample(metas.input_ids, sample_length=400)
with torch.no_grad():
... model.decode(music_tokens)[:, :10].squeeze(-1)
tensor([[-0.0219, -0.0679, -0.1050, -0.1203, -0.1271, -0.0936, -0.0396, -0.0405,
-0.0818, -0.0697]])
primed_sample
(
raw_audio
labels
**sampling_kwargs
)
Parameters
raw_audio (List[torch.Tensor] of length n_samples) —
A list of raw audio that will be used as conditioning information for each sample that will be
generated.
labels (List[torch.LongTensor] of length n_sample, and shape (self.levels, self.config.max_nb_genre + lyric_sequence_length) —
List of metadata such as artist_id, genre_id and the full list of lyric tokens which are used to
condition the generation.
sampling_kwargs (Dict[Any]) —
Various additional sampling arguments that are used by the _sample function. A detailed list of the
arguments can be seen in the _sample function documentation.
Generates raw audio conditioned on the provided raw_audio, which is used as conditioning at each of the
generation levels. The audio is encoded to music tokens using the 3 levels of the VQ-VAE. These tokens are
used as conditioning for each level, which means that no ancestral sampling is required.
continue_sample
(
music_tokens
labels
**sampling_kwargs
)
Parameters
music_tokens (List[torch.LongTensor] of length self.levels ) —
A sequence of music tokens which will be used as context to continue the sampling process. Should have
self.levels tensors, each corresponding to the generation at a certain level.
labels (List[torch.LongTensor] of length n_sample, and shape (self.levels, self.config.max_nb_genre + lyric_sequence_length) —
List of metadata such as artist_id, genre_id and the full list of lyric tokens which are used to
condition the generation.
sampling_kwargs (Dict[Any]) —
Various additional sampling arguments that are used by the _sample function. A detailed list of the
arguments can be seen in the _sample function documentation.
Generates a continuation of the previously generated tokens.
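As a rough sketch continuing the tokens produced in the ancestral_sample example above; the keyword argument used here (sample_length) is an assumption based on the _sample arguments documented below, not an official example:
# music_tokens, model and metas come from the ancestral_sample example above
music_tokens = model.continue_sample(
    music_tokens, metas.input_ids, sample_length=80 * model.priors[0].raw_to_tokens
)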
upsample
(
music_tokens
labels
**sampling_kwargs
)
Parameters
music_tokens (List[torch.LongTensor] of length self.levels ) —
A sequence of music tokens which will be used as context to continue the sampling process. Should have
self.levels tensors, each corresponding to the generation at a certain level.
labels (List[torch.LongTensor] of length n_sample, and shape (self.levels, self.config.max_nb_genre + lyric_sequence_length) —
List of metadata such as artist_id, genre_id and the full list of lyric tokens which are used to
condition the generation.
sampling_kwargs (Dict[Any]) —
Various additional sampling arguments that are used by the _sample function. A detailed list of the
arguments can be seen in the _sample function documentation.
Upsamples a sequence of music tokens using the prior at the corresponding level.
_sample
(
music_tokens
labels
sample_levels
metas = None
chunk_size = 32
sampling_temperature = 0.98
lower_batch_size = 16
max_batch_size = 16
sample_length_in_seconds = 24
compute_alignments = False
sample_tokens = None
offset = 0
save_results = True
sample_length = None
)
Parameters
music_tokens (List[torch.LongTensor]) —
A sequence of music tokens of length self.levels which will be used as context to continue the
sampling process. Should have self.levels tensors, each corresponding to the generation at a certain
level.
labels (List[torch.LongTensor]) —
List of length n_sample, and shape (self.levels, 4 + self.config.max_nb_genre + lyric_sequence_length) metadata such as artist_id, genre_id and the full list of lyric tokens
which are used to condition the generation.
sample_levels (List[int]) —
List of the desired levels at which the sampling will be done. A level is equivalent to the index of
the prior in the list of priors
metas (List[Any], optional) —
Metadata used to generate the labels.
chunk_size (int, optional, defaults to 32) —
Size of a chunk of audio, used to fill up the memory in chunks to prevent OOM errors. Bigger chunks
mean faster memory filling but more memory consumption.
sampling_temperature (float, optional, defaults to 0.98) —
Temperature used to adjust the randomness of the sampling.
lower_batch_size (int, optional, defaults to 16) —
Maximum batch size for the lower level priors
max_batch_size (int, optional, defaults to 16) —
Maximum batch size for the top level priors
sample_length_in_seconds (int, optional, defaults to 24) —
Desired length of the generation in seconds
compute_alignments (bool, optional, defaults to False) —
Whether or not to compute the alignment between the lyrics and the audio using the top_prior
sample_tokens (int, optional) —
Precise number of tokens that should be sampled at each level. This is mostly useful for running dummy
experiments
offset (int, optional, defaults to 0) —
Audio offset used as conditioning, corresponds to the starting sample in the music. If the offset is
greater than 0, the lyrics will be shifted to take that into account.
save_results (bool, optional, defaults to True) —
Whether or not to save the intermediate results. If True, will generate a folder named with the start
time.
sample_length (int, optional) —
Desired length of the generation in samples.
Core sampling function used to generate music tokens. Iterates over the provided list of levels, while saving
the generated raw audio at each step.
Returns: torch.Tensor
Example:
from transformers import AutoTokenizer, JukeboxModel, set_seed
import torch
metas = dict(artist="Zac Brown Band", genres="Country", lyrics="I met a traveller from an antique land")
tokenizer = AutoTokenizer.from_pretrained("openai/jukebox-1b-lyrics")
model = JukeboxModel.from_pretrained("openai/jukebox-1b-lyrics", min_duration=0).eval()
labels = tokenizer(**metas)["input_ids"]
set_seed(0)
zs = [torch.zeros(1, 0, dtype=torch.long) for _ in range(3)]
zs = model._sample(zs, labels, [0], sample_length=40 * model.priors[0].raw_to_tokens, save_results=False)
zs[0]
tensor([[1853, 1369, 1150, 1869, 1379, 1789, 519, 710, 1306, 1100, 1229, 519,
353, 1306, 1379, 1053, 519, 653, 1631, 1467, 1229, 1229, 10, 1647,
1254, 1229, 1306, 1528, 1789, 216, 1631, 1434, 653, 475, 1150, 1528,
1804, 541, 1804, 1434]])
JukeboxPrior
class transformers.JukeboxPrior
(
config: JukeboxPriorConfig
level = None
nb_priors = 3
vqvae_encoder = None
vqvae_decoder = None
)
Parameters
config (JukeboxPriorConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
level (int, optional) —
Current level of the Prior. Should be in range [0,nb_priors].
nb_priors (int, optional, defaults to 3) —
Total number of priors.
vqvae_encoder (Callable, optional) —
Encoding method of the VQVAE encoder used in the forward pass of the model. Passing functions instead of
the vqvae module to avoid getting the parameters.
vqvae_decoder (Callable, optional) —
Decoding method of the VQVAE decoder used in the forward pass of the model. Passing functions instead of
the vqvae module to avoid getting the parameters.
The JukeboxPrior class is a wrapper around the various conditioning modules and the transformer. JukeboxPrior can be
seen as a language model trained on music: it models the next music token prediction task. If a (lyric) encoder is defined, it also models the next character prediction task on the lyrics. It can be conditioned on timing, artist,
genre, lyrics and codes from lower-level priors.
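As a small illustration (not an official example), the individual priors of a pretrained JukeboxModel can be accessed through its priors attribute, which is also used in the _sample example above:
from transformers import JukeboxModel
model = JukeboxModel.from_pretrained("openai/jukebox-1b-lyrics", min_duration=0).eval()
# index 0 is the top-level prior, followed by the two upsamplers
top_prior = model.priors[0]
print(type(top_prior).__name__, top_prior.raw_to_tokens)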
sample
(
n_samples
music_tokens = None
music_tokens_conds = None
metadata = None
temp = 1.0
top_k = 0
top_p = 0.0
chunk_size = None
sample_tokens = None
)
Parameters
n_samples (int) —
Number of samples to generate.
music_tokens (List[torch.LongTensor], optional) —
Previously generated tokens at the current level. Used as context for the generation.
music_tokens_conds (List[torch.FloatTensor], optional) —
Upper-level music tokens generated by the previous prior model. Is None if the generation is not
conditioned on the upper-level tokens.
metadata (List[torch.LongTensor], optional) —
List containing the metadata tensor with the artist, genre and the lyric tokens.
temp (float, optional, defaults to 1.0) —
Sampling temperature.
top_k (int, optional, defaults to 0) —
Top k probabilities used for filtering.
top_p (float, optional, defaults to 0.0) —
Top p probabilities used for filtering.
chunk_size (int, optional) —
Size of the chunks used to prepare the cache of the transformer.
sample_tokens (int, optional) —
Number of tokens to sample.
Ancestral/primed sampling of a window of tokens using the provided conditioning and metadata.
forward
(
hidden_states: Tensor
metadata: typing.Optional[typing.List[torch.LongTensor]]
decode: typing.Optional[bool] = False
get_preds: typing.Optional[bool] = False
)
Parameters
hidden_states (torch.Tensor) —
Hidden states which should be raw audio.
metadata (List[torch.LongTensor], optional) —
List containing the metadata conditioning tensor with the lyric and the metadata tokens.
decode (bool, optional, defaults to False) —
Whether or not to decode the encoded music tokens back to raw audio.
get_preds (bool, optional, defaults to False) —
Whether or not to return the actual predictions of the model.
Encodes the hidden states using the VQ-VAE encoder, then predicts the next token in the forward_tokens
function. The loss is the sum of the encoder loss and the decoder loss.
JukeboxVQVAE
class transformers.JukeboxVQVAE
(
config: JukeboxVQVAEConfig
)
Parameters
config (JukeboxVQVAEConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The Hierarchical VQ-VAE model used in Jukebox. This model follows the Hierarchical VQVAE paper from Will Williams, Sam
Ringer, Tom Ash, John Hughes, David MacLeod, Jamie Dougherty.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
raw_audio: FloatTensor
)
Parameters
raw_audio (torch.FloatTensor) —
Audio input which will be encoded and decoded.
Forward pass of the VQ-VAE: encodes the raw_audio to latent states, which are then decoded for each level.
The commit loss, which ensures that the encoder’s computed embeddings are close to the codebook vectors, is
also computed.
Example:
from transformers import JukeboxVQVAE, set_seed
import torch
model = JukeboxVQVAE.from_pretrained("openai/jukebox-1b-lyrics").eval()
set_seed(0)
zs = [torch.randint(100, (4, 1))]
model.decode(zs).shape
torch.Size([4, 8, 1])
encode
(
input_audio
start_level = 0
end_level = None
bs_chunks = 1
)
Parameters
input_audio (torch.Tensor) —
Raw audio which will be encoded to its discrete representation using the codebook. The closest code
from the codebook will be computed for each sequence of samples.
start_level (int, optional, defaults to 0) —
Level at which the encoding process will start. Defaults to 0.
end_level (int, optional) —
Level at which the encoding process will end. Defaults to None.
bs_chunks (int, optional, defaults to 1) —
Number of chunks of raw audio to process at the same time.
Transforms the input_audio to a discrete representation made out of music_tokens.
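A rough sketch of an encode/decode round trip; the dummy waveform below is an assumption (mono audio of shape (batch_size, num_samples, 1), with a length divisible by the hop lengths of all levels), not a value from the official documentation:
import torch
from transformers import JukeboxVQVAE
model = JukeboxVQVAE.from_pretrained("openai/jukebox-1b-lyrics").eval()
# dummy mono waveform: (batch_size, num_samples, channels)
raw_audio = torch.randn(1, 4096, 1)
with torch.no_grad():
    music_tokens = model.encode(raw_audio)  # one token tensor per VQ-VAE level
    reconstruction = model.decode(music_tokens)
print([tokens.shape for tokens in music_tokens], reconstruction.shape)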
decode
(
music_tokens
start_level = 0
end_level = None
bs_chunks = 1
)
Parameters
music_tokens (torch.LongTensor) —
Tensor of music tokens which will be decoded to raw audio by using the codebook. Each music token
should be an index to a corresponding code vector in the codebook.
start_level (int, optional) —
Level at which the decoding process will start. Defaults to 0.
end_level (int, optional) —
Level at which the decoding process will end. Defaults to None.
bs_chunks (int, optional) —
Number of chunks to process at the same time.
Transforms the input music_tokens to their raw_audio representation.
BERT
Overview
The BERT model was proposed in BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. It’s a
bidirectional transformer pretrained using a combination of a masked language modeling objective and next sentence
prediction on a large corpus comprising the Toronto Book Corpus and Wikipedia.
The abstract from the paper is the following:
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations
from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional
representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result,
the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models
for a wide range of tasks, such as question answering and language inference, without substantial task-specific
architecture modifications.
BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural
language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI
accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute
improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
Tips:
BERT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than
the left.
BERT was trained with the masked language modeling (MLM) and next sentence prediction (NSP) objectives. It is
efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation.
BERT corrupts the inputs by using random masking: more precisely, during pretraining, a given percentage of tokens (usually 15%) is masked by (a minimal code sketch of this scheme is shown after these tips):
a special mask token with probability 0.8
a random token different from the one masked with probability 0.1
the same token with probability 0.1
The model must predict the original tokens, but has a second objective: the inputs are two sentences A and B (with a separation token in between). With probability 50%, the sentences are consecutive in the corpus; in the remaining 50% they are not related. The model has to predict whether the sentences are consecutive or not.
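As a minimal sketch of the masking scheme described above (the input sentence is arbitrary), DataCollatorForLanguageModeling applies this 80/10/10 strategy when preparing MLM batches:
from transformers import AutoTokenizer, DataCollatorForLanguageModeling
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# mask 15% of the tokens: 80% become [MASK], 10% a random token, 10% are left unchanged
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
batch = collator([tokenizer("BERT is pretrained with masked language modeling.")])
print(batch["input_ids"])
print(batch["labels"])  # -100 everywhere except at the positions selected for masking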
This model was contributed by thomwolf. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BERT. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Text Classification
A blog post on BERT Text Classification in a different language.
A notebook for Finetuning BERT (and friends) for multi-label text classification.
A notebook on how to Finetune BERT for multi-label classification using PyTorch. 🌎
A notebook on how to warm-start an EncoderDecoder model with BERT for summarization.
BertForSequenceClassification is supported by this example script and notebook.
TFBertForSequenceClassification is supported by this example script and notebook.
FlaxBertForSequenceClassification is supported by this example script and notebook.
Text classification task guide
Token Classification
A blog post on how to use Hugging Face Transformers with Keras: Fine-tune a non-English BERT for Named Entity Recognition.
A notebook for Finetuning BERT for named-entity recognition using only the first wordpiece of each word in the word label during tokenization. To propagate the label of the word to all wordpieces, see this version of the notebook instead.
BertForTokenClassification is supported by this example script and notebook.
TFBertForTokenClassification is supported by this example script and notebook.
FlaxBertForTokenClassification is supported by this example script.
Token classification chapter of the 🤗 Hugging Face Course.
Token classification task guide
Fill-Mask
BertForMaskedLM is supported by this example script and notebook.
TFBertForMaskedLM is supported by this example script and notebook.
FlaxBertForMaskedLM is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling task guide
Question Answering
BertForQuestionAnswering is supported by this example script and notebook.
TFBertForQuestionAnswering is supported by this example script and notebook.
FlaxBertForQuestionAnswering is supported by this example script.
Question answering chapter of the 🤗 Hugging Face Course.
Question answering task guide
Multiple choice
BertForMultipleChoice is supported by this example script and notebook.
TFBertForMultipleChoice is supported by this example script and notebook.
Multiple choice task guide
⚡️ Inference
A blog post on how to Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia.
A blog post on how to Accelerate BERT inference with DeepSpeed-Inference on GPUs.
⚙️ Pretraining
A blog post on Pre-Training BERT with Hugging Face Transformers and Habana Gaudi.
🚀 Deploy
A blog post on how to Convert Transformers to ONNX with Hugging Face Optimum.
A blog post on how to Setup Deep Learning environment for Hugging Face Transformers with Habana Gaudi on AWS.
A blog post on Autoscaling BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module.
A blog post on Serverless BERT with HuggingFace, AWS Lambda, and Docker.
A blog post on Hugging Face Transformers BERT fine-tuning using Amazon SageMaker and Training Compiler.
A blog post on Task-specific knowledge distillation for BERT using Transformers & Amazon SageMaker.
BertConfig
class transformers.BertConfig
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
position_embedding_type = 'absolute'
use_cache = True
classifier_dropout = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling BertModel or TFBertModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling BertModel or TFBertModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
This is the configuration class to store the configuration of a BertModel or a TFBertModel. It is used to
instantiate a BERT model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the BERT
bert-base-uncased architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import BertConfig, BertModel
# Initializing a BERT bert-base-uncased style configuration
configuration = BertConfig()
# Initializing a model (with random weights) from the bert-base-uncased style configuration
model = BertModel(configuration)
# Accessing the model configuration
configuration = model.config
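The same pattern works with non-default hyperparameters. The sketch below builds a deliberately small BERT from a customized configuration; the chosen values are arbitrary and for illustration only:
from transformers import BertConfig, BertModel

# Hypothetical, deliberately small hyperparameters chosen only for illustration
small_config = BertConfig(
    hidden_size=256,
    num_hidden_layers=4,
    num_attention_heads=4,
    intermediate_size=1024,
)
small_model = BertModel(small_config)  # randomly initialized weights
print(small_model.config.num_hidden_layers)  # 4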
BertTokenizer
class transformers.BertTokenizer
(
vocab_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (bool, optional, defaults to True) —
Whether or not to do basic tokenization before WordPiece.
never_split (Iterable, optional) —
Collection of tokens which will never be split during tokenization. Only has an effect when
do_basic_tokenize=True
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original BERT).
Construct a BERT tokenizer. Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
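As a rough sketch of typical usage (checkpoint name assumed), the tokenizer is loaded from a pretrained checkpoint and called directly on one or two pieces of text:
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoding = tokenizer("Hello, my dog is cute", "Is it really that cute?")
print(encoding["input_ids"])       # WordPiece ids, with [CLS]/[SEP] already inserted
print(encoding["token_type_ids"])  # 0 for the first sentence, 1 for the second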
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A BERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
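For example (a minimal sketch, assuming the bert-base-uncased vocabulary), calling the method on already-converted token ids shows the two formats:
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("how are you"))
print(tokenizer.build_inputs_with_special_tokens(ids_a))         # [CLS] A [SEP]
print(tokenizer.build_inputs_with_special_tokens(ids_a, ids_b))  # [CLS] A [SEP] B [SEP]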
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
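As an illustrative sketch (same assumed checkpoint as above), the returned mask marks where [CLS] and [SEP] would sit once the pair is formatted:
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("how are you"))
# 1 marks positions of special tokens, 0 marks ordinary sequence tokens
print(tokenizer.get_special_tokens_mask(ids_a, ids_b))  # expected: [1, 0, 0, 1, 0, 0, 0, 1]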
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence
pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
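A short sketch of the same idea in code (checkpoint assumed as above):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("how are you"))
# 0 covers [CLS] A [SEP]; 1 covers B [SEP]
print(tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b))  # expected: [0, 0, 0, 0, 1, 1, 1, 1]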
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
BertTokenizerFast
class transformers.BertTokenizerFast
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
clean_text (bool, optional, defaults to True) —
Whether or not to clean the text before tokenization by removing any control characters and replacing all
whitespace with a single standard space.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original BERT).
wordpieces_prefix (str, optional, defaults to "##") —
The prefix for subwords.
Construct a “fast” BERT tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0
token_ids_1 = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A BERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence
pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
TFBertTokenizer
class transformers.TFBertTokenizer
(
*args
**kwargs
)
Parameters
vocab_list (list) —
List containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
cls_token_id (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
sep_token_id (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token_id (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
padding (str, defaults to "longest") —
The type of padding to use. Can be either "longest", to pad only up to the longest sample in the batch,
or "max_length", to pad all inputs to the maximum length supported by the tokenizer.
truncation (bool, optional, defaults to True) —
Whether to truncate the sequence to the maximum length.
max_length (int, optional, defaults to 512) —
The maximum length of the sequence, used for padding (if padding is “max_length”) and/or truncation (if
truncation is True).
pad_to_multiple_of (int, optional, defaults to None) —
If set, the sequence will be padded to a multiple of this value.
return_token_type_ids (bool, optional, defaults to True) —
Whether to return token_type_ids.
return_attention_mask (bool, optional, defaults to True) —
Whether to return the attention_mask.
use_fast_bert_tokenizer (bool, optional, defaults to True) —
If True, will use the FastBertTokenizer class from TensorFlow Text. If False, will use the BertTokenizer
class instead. BertTokenizer supports some additional options, but is slower and cannot be exported to
TFLite.
This is an in-graph tokenizer for BERT. It should be initialized similarly to other tokenizers, using the
from_pretrained() method. It can also be initialized with the from_tokenizer() method, which imports settings
from an existing standard tokenizer object.
In-graph tokenizers, unlike other Hugging Face tokenizers, are actually Keras layers and are designed to be run
when the model is called, rather than during preprocessing. As a result, they have somewhat more limited options
than standard tokenizer classes. They are most useful when you want to create an end-to-end model that goes
straight from tf.string inputs to outputs.
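A minimal sketch of such an end-to-end model follows; the wiring and checkpoint name are assumptions for illustration, not a prescribed recipe:
import tensorflow as tf
from transformers import TFBertTokenizer, TFBertModel

tokenizer = TFBertTokenizer.from_pretrained("bert-base-uncased")
bert = TFBertModel.from_pretrained("bert-base-uncased")

text_input = tf.keras.Input(shape=(), dtype=tf.string)
tokenized = tokenizer(text_input)   # runs inside the graph, returns a dict of int tensors
bert_outputs = bert(tokenized)      # regular TFBertModel forward pass
model = tf.keras.Model(inputs=text_input, outputs=bert_outputs.pooler_output)

model(tf.constant(["Hello, my dog is cute"]))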
from_pretrained
(
pretrained_model_name_or_path: typing.Union[str, os.PathLike]
*init_inputs
**kwargs
)
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
The name or path to the pre-trained tokenizer.
Instantiate a TFBertTokenizer from a pre-trained tokenizer.
Examples:
from transformers import TFBertTokenizer
tf_tokenizer = TFBertTokenizer.from_pretrained("bert-base-uncased")
from_tokenizer
(
tokenizer: PreTrainedTokenizerBase
**kwargs
)
Parameters
tokenizer (PreTrainedTokenizerBase) —
The tokenizer to use to initialize the TFBertTokenizer.
Initialize a TFBertTokenizer from an existing Tokenizer.
Examples:
from transformers import AutoTokenizer, TFBertTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tf_tokenizer = TFBertTokenizer.from_tokenizer(tokenizer)
Bert specific outputs
class transformers.models.bert.modeling_bert.BertForPreTrainingOutput
(
loss: typing.Optional[torch.FloatTensor] = None
prediction_logits: FloatTensor = None
seq_relationship_logits: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) —
Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) —
Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of BertForPreTraining.
class transformers.models.bert.modeling_tf_bert.TFBertForPreTrainingOutput
(
loss: tf.Tensor | None = None
prediction_logits: tf.Tensor = None
seq_relationship_logits: tf.Tensor = None
hidden_states: Optional[Union[Tuple[tf.Tensor], tf.Tensor]] = None
attentions: Optional[Union[Tuple[tf.Tensor], tf.Tensor]] = None
)
Parameters
prediction_logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (tf.Tensor of shape (batch_size, 2)) —
Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of TFBertForPreTraining.
class transformers.models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput
(
prediction_logits: Array = None
seq_relationship_logits: Array = None
hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None
attentions: typing.Optional[typing.Tuple[jax.Array]] = None
)
Parameters
prediction_logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (jnp.ndarray of shape (batch_size, 2)) —
Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of FlaxBertForPreTraining.
replace
(
**updates
)
Returns a new object replacing the specified fields with new values.
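For illustration only, a hypothetical sketch of how replace can be used on the Flax output class documented above (the tensor shapes are placeholders); it behaves like dataclasses.replace:
import jax.numpy as jnp
from transformers.models.bert.modeling_flax_bert import FlaxBertForPreTrainingOutput

out = FlaxBertForPreTrainingOutput(
    prediction_logits=jnp.zeros((1, 4, 30522)),
    seq_relationship_logits=jnp.zeros((1, 2)),
)
# Returns a copy with the given field swapped out, leaving the other fields untouched
out2 = out.replace(seq_relationship_logits=jnp.ones((1, 2)))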
BertModel
class transformers.BertModel
(
config
add_pooling_layer = True
)
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Bert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder, the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and
add_cross_attention set to True; encoder_hidden_states is then expected as an input to the forward pass.
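A hedged sketch of that initialization (the argument names come from the configuration documented above; the checkpoint is just an example):
from transformers import BertConfig, BertModel

config = BertConfig.from_pretrained("bert-base-uncased", is_decoder=True, add_cross_attention=True)
decoder = BertModel.from_pretrained("bert-base-uncased", config=config)
# encoder_hidden_states from a separate encoder can now be passed to decoder(...)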
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The BertModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BertModel
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
BertForPreTraining
class transformers.BertForPreTraining
(
config
)
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with two heads on top as done during the pretraining: a masked language modeling head and a next sentence prediction (classification) head.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
next_sentence_label: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.bert.modeling_bert.BertForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional):
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked);
the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
next_sentence_label (torch.LongTensor of shape (batch_size,), optional):
Labels for computing the next sequence prediction (classification) loss. Input should be a sequence
pair (see input_ids docstring). Indices should be in [0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
kwargs (Dict[str, any], optional, defaults to {}):
Used to hide legacy arguments that have been deprecated.
Returns
transformers.models.bert.modeling_bert.BertForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.bert.modeling_bert.BertForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BertForPreTraining forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BertForPreTraining
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.prediction_logits
seq_relationship_logits = outputs.seq_relationship_logits
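To also get a loss out of the example above, both pretraining objectives can be supplied; the label values below are placeholders for illustration only:
import torch
from transformers import AutoTokenizer, BertForPreTraining

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
# MLM labels: here simply the input ids themselves; NSP label 0 = "sentence B follows sentence A"
outputs = model(**inputs, labels=inputs["input_ids"], next_sentence_label=torch.LongTensor([0]))
loss = outputs.loss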
BertLMHeadModel
class transformers.BertLMHeadModel
(
config
)
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a language modeling head on top for CLM fine-tuning.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.Tensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are
ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The BertLMHeadModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, BertLMHeadModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertLMHeadModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits
BertForMaskedLM
class transformers.BertForMaskedLM
(
config
)
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BertForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BertForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
tokenizer.decode(predicted_token_id)
'paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(outputs.loss.item(), 2)
0.88
BertForNextSentencePrediction
class transformers.BertForNextSentencePrediction
(
config
)
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a next sentence prediction (classification) head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair
(see input_ids docstring). Indices should be in [0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
Returns
transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.NextSentencePredictorOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when next_sentence_label is provided) — Next sequence prediction (classification) loss.
logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BertForNextSentencePrediction forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BertForNextSentencePrediction
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
next_sentence = "The sky is blue due to the shorter wavelength of blue light."
encoding = tokenizer(prompt, next_sentence, return_tensors="pt")
outputs = model(**encoding, labels=torch.LongTensor([1]))
logits = outputs.logits
assert logits[0, 0] < logits[0, 1] # next sentence was random
BertForSequenceClassification
class transformers.BertForSequenceClassification
(
config
)
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, BertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("textattack/bert-base-uncased-yelp-polarity")
model = BertForSequenceClassification.from_pretrained("textattack/bert-base-uncased-yelp-polarity")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
'LABEL_1'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = BertForSequenceClassification.from_pretrained("textattack/bert-base-uncased-yelp-polarity", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
0.01
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, BertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("textattack/bert-base-uncased-yelp-polarity")
model = BertForSequenceClassification.from_pretrained("textattack/bert-base-uncased-yelp-polarity", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = BertForSequenceClassification.from_pretrained(
... "textattack/bert-base-uncased-yelp-polarity", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
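The labels argument also supports regression: with config.num_labels == 1 the head is trained with a mean-squared-error loss. A minimal sketch of that setup (the problem_type="regression" configuration and the target value below are illustrative, not taken from the examples above):
import torch
from transformers import AutoTokenizer, BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1, problem_type="regression")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([0.7])  # one float target per example
outputs = model(**inputs, labels=labels)
loss, score = outputs.loss, outputs.logits  # MSE loss and a single score per example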
BertForMultipleChoice
class transformers.BertForMultipleChoice( config )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BertForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForMultipleChoice.from_pretrained("bert-base-uncased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
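To pick the model's preferred choice from the logits (shape (batch_size, num_choices)), take the argmax over the last dimension, as in this small follow-up sketch:
predicted_choice = logits.argmax(dim=-1).item()  # 0 -> choice0, 1 -> choice1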
BertForTokenClassification
class transformers.BertForTokenClassification( config )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BertForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
model = BertForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
predicted_tokens_classes
['O', 'I-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O', 'I-LOC', 'O', 'I-LOC', 'I-LOC']
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
0.01
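Since predictions are per token rather than per word, you may want to map them back to words. A minimal sketch using the fast tokenizer's word_ids() alignment (this assumes a fast tokenizer and is not part of the example above):
word_ids = inputs.word_ids(batch_index=0)  # word index for each token, None for special tokens
word_labels = {}
for token_index, word_id in enumerate(word_ids):
    if word_id is not None and word_id not in word_labels:
        # keep the prediction of the first sub-word piece of each word
        word_labels[word_id] = model.config.id2label[predicted_token_class_ids[0, token_index].item()]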
BertForQuestionAnswering
class transformers.BertForQuestionAnswering( config )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BertForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("deepset/bert-base-cased-squad2")
model = BertForQuestionAnswering.from_pretrained("deepset/bert-base-cased-squad2")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
'a nice puppet'
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
round(loss.item(), 2)
7.41
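Because the argmax above can in principle land on a question token, you may want to restrict the predicted span to the context first. A hedged sketch building on the example above (for a question/context pair, token_type_ids is 1 on context tokens):
context_mask = inputs.token_type_ids[0].bool()
masked_start_logits = outputs.start_logits[0].masked_fill(~context_mask, float("-inf"))
masked_end_logits = outputs.end_logits[0].masked_fill(~context_mask, float("-inf"))
answer_ids = inputs.input_ids[0, masked_start_logits.argmax() : masked_end_logits.argmax() + 1]
tokenizer.decode(answer_ids, skip_special_tokens=True)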
TFBertModel
class transformers.TFBertModel( *args, **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Bert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
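For instance, the three positional formats described above could look like this (a small illustrative sketch, not part of the original docs):
from transformers import AutoTokenizer, TFBertModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFBertModel.from_pretrained("bert-base-uncased")
encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")

outputs = model(encoding["input_ids"])  # a single tensor with input_ids only
outputs = model([encoding["input_ids"], encoding["attention_mask"]])  # a list, in docstring order
outputs = model({"input_ids": encoding["input_ids"], "token_type_ids": encoding["token_type_ids"]})  # a dict keyed by input names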
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (BertConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you're often better off
averaging or pooling the sequence of hidden-states over the whole input sequence.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The TFBertModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFBertModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFBertModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
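As noted in the returns section, pooler_output is often not the best sentence summary; a common alternative is to average the token embeddings using the attention mask. A small sketch continuing the example above:
mask = tf.cast(inputs["attention_mask"], last_hidden_states.dtype)[:, :, tf.newaxis]
sentence_embedding = tf.reduce_sum(last_hidden_states * mask, axis=1) / tf.reduce_sum(mask, axis=1)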
TFBertForPreTraining
class transformers.TFBertForPreTraining( *args, **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with two heads on top as done during the pretraining:
a masked language modeling head and a next sentence prediction (classification) head.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
next_sentence_label: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.models.bert.modeling_tf_bert.TFBertForPreTrainingOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
next_sentence_label (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair
(see input_ids docstring). Indices should be in [0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
kwargs (Dict[str, any], optional, defaults to {}) —
Used to hide legacy arguments that have been deprecated.
Returns
transformers.models.bert.modeling_tf_bert.TFBertForPreTrainingOutput or tuple(tf.Tensor)
A transformers.models.bert.modeling_tf_bert.TFBertForPreTrainingOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (BertConfig) and inputs.
prediction_logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (tf.Tensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFBertForPreTraining forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
import tensorflow as tf
from transformers import AutoTokenizer, TFBertForPreTraining
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForPreTraining.from_pretrained("bert-base-uncased")
input_ids = tokenizer("Hello, my dog is cute", add_special_tokens=True, return_tensors="tf")
# Batch size 1
outputs = model(input_ids)
prediction_logits, seq_relationship_logits = outputs[:2]
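As a follow-up sketch (not part of the original example), the next-sentence head can be turned into an "is next" probability with a softmax; index 0 corresponds to "sentence B follows sentence A":
is_next_probability = tf.nn.softmax(seq_relationship_logits, axis=-1)[0, 0]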
TFBertLMHeadModel
class transformers.TFBertLMHeadModel( *args, **kwargs )
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
**kwargs
)
→
transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
Returns
transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (BertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional):
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional):
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers)
contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True):
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional):
Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1].
Example:
from transformers import AutoTokenizer, TFBertLMHeadModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFBertLMHeadModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
logits = outputs.logits
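Passing labels returns a causal language-modeling (next-token) loss. A minimal sketch reusing the inputs above, with the input ids themselves as targets (an illustrative pattern, not from the original example):
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss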
TFBertForMaskedLM
class transformers.TFBertForMaskedLM( *args, **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (BertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFBertForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of calling this directly, since the former takes care of running the pre- and
post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFBertForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForMaskedLM.from_pretrained("bert-base-uncased")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
tokenizer.decode(predicted_token_id)
'paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-[MASK] tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(float(outputs.loss), 2)
0.88
TFBertForNextSentencePrediction
class transformers.TFBertForNextSentencePrediction( *args, **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a next sentence prediction (classification) head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
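For a concrete sketch of the three formats listed above (an illustrative addition, assuming the bert-base-uncased checkpoint; any other TF head class behaves the same way):
from transformers import AutoTokenizer, TFBertForNextSentencePrediction
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForNextSentencePrediction.from_pretrained("bert-base-uncased")
enc = tokenizer("First sentence.", "Second sentence.", return_tensors="tf")
# 1. a single tensor with input_ids only
out1 = model(enc["input_ids"])
# 2. a list of tensors, in the order given in the docstring
out2 = model([enc["input_ids"], enc["attention_mask"], enc["token_type_ids"]])
# 3. a dictionary mapping input names to tensors
out3 = model({"input_ids": enc["input_ids"], "token_type_ids": enc["token_type_ids"]})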
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
next_sentence_label: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFNextSentencePredictorOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFNextSentencePredictorOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFNextSentencePredictorOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (BertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when next_sentence_label is provided) — Next sentence prediction loss.
logits (tf.Tensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFBertForNextSentencePrediction forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of calling this directly, since the former takes care of running the pre- and
post-processing steps while the latter silently ignores them.
Examples:
import tensorflow as tf
from transformers import AutoTokenizer, TFBertForNextSentencePrediction
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForNextSentencePrediction.from_pretrained("bert-base-uncased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
next_sentence = "The sky is blue due to the shorter wavelength of blue light."
encoding = tokenizer(prompt, next_sentence, return_tensors="tf")
logits = model(encoding["input_ids"], token_type_ids=encoding["token_type_ids"])[0]
assert logits[0][0] < logits[0][1] # the next sentence was random
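As a small follow-up to this example (not part of the reference itself), you could turn the logits into probabilities with a softmax; following the convention checked by the assert above, index 0 scores the true continuation and index 1 a random one:
probs = tf.nn.softmax(logits, axis=-1)  # shape (1, 2)
float(probs[0][1])  # probability that the second sentence does not follow the prompt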
TFBertForSequenceClassification
class transformers.TFBertForSequenceClassification( *args, **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (BertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFBertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of calling this directly, since the former takes care of running the pre- and
post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFBertForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("ydshieh/bert-base-uncased-yelp-polarity")
model = TFBertForSequenceClassification.from_pretrained("ydshieh/bert-base-uncased-yelp-polarity")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
model.config.id2label[predicted_class_id]
'LABEL_1'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFBertForSequenceClassification.from_pretrained("ydshieh/bert-base-uncased-yelp-polarity", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
round(float(loss), 2)
0.01
TFBertForMultipleChoice
class transformers.TFBertForMultipleChoice( *args, **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices]
where num_choices is the size of the second dimension of the input tensors. (See input_ids above)
Returns
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (BertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFBertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of calling this directly, since the former takes care of running the pre- and
post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFBertForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForMultipleChoice.from_pretrained("bert-base-uncased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
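To turn these logits into an actual choice (an illustrative extra step, keeping in mind that the classification head above is untrained), you could take the argmax over the num_choices dimension:
predicted_choice = int(tf.math.argmax(logits, axis=-1)[0])  # 0 -> choice0, 1 -> choice1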
TFBertForTokenClassification
class transformers.TFBertForTokenClassification( *args, **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (BertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFBertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of calling this directly, since the former takes care of running the pre- and
post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFBertForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
model = TFBertForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
predicted_tokens_classes
['O', 'I-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O', 'I-LOC', 'O', 'I-LOC', 'I-LOC']
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
round(float(loss), 2)
0.01
TFBertForQuestionAnswering
class transformers.TFBertForQuestionAnswering( *args, **kwargs )
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
start_positions: np.ndarray | tf.Tensor | None = None
end_positions: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (BertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of calling this directly, since the former takes care of running the pre- and
post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFBertForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("ydshieh/bert-base-cased-squad2")
model = TFBertForQuestionAnswering.from_pretrained("ydshieh/bert-base-cased-squad2")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
'a nice puppet'
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
round(float(loss), 2)
7.41
FlaxBertModel
class transformers.FlaxBertModel
(
config: BertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The bare Bert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
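As a brief sketch of how the dtype argument documented above is typically used (the checkpoint name is just an example), you can run the computation in bfloat16 and, if desired, also cast the parameters with to_bf16():
import jax.numpy as jnp
from transformers import FlaxBertModel
# run the forward pass in bfloat16 while the parameters stay in float32
model = FlaxBertModel.from_pretrained("bert-base-uncased", dtype=jnp.bfloat16)
# optionally cast the parameters themselves to bfloat16 as well
model.params = model.to_bf16(model.params)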
__call__(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of calling this directly, since the former takes care of running the pre- and
post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBertModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = FlaxBertModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
FlaxBertForPreTraining
class transformers.FlaxBertForPreTraining
(
config: BertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Bert Model with two heads on top as done during the pretraining: a masked language modeling head and a next sentence prediction (classification) head.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
) → transformers.models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
prediction_logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (jnp.ndarray of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBertForPreTraining
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = FlaxBertForPreTraining.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
prediction_logits = outputs.prediction_logits
seq_relationship_logits = outputs.seq_relationship_logits
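The logits above can be turned into readable predictions; the following sketch (an illustrative addition, not part of the official example) uses jnp.argmax for the masked-language-modeling head and jax.nn.softmax for the next-sentence head, where index 0 corresponds to "sentence B follows sentence A":
import jax
import jax.numpy as jnp
predicted_tokens = jnp.argmax(prediction_logits, axis=-1)  # most likely token id at each position
is_next_probability = jax.nn.softmax(seq_relationship_logits, axis=-1)[:, 0]
print(tokenizer.batch_decode(predicted_tokens.tolist()))
print(float(is_next_probability[0]))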
FlaxBertForCausalLM
class transformers.FlaxBertForCausalLM
(
config: BertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Bert Model with a language modeling head on top (a linear layer on top of the hidden-states output), e.g. for
autoregressive tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attention weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key and value
states of the self-attention and the cross-attention layers if the model is used in an encoder-decoder setting.
Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBertForCausalLM
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = FlaxBertForCausalLM.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
# retrieve logits for the next token
next_token_logits = outputs.logits[:, -1]
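A minimal greedy decoding step on top of these logits could look like the sketch below; note that bert-base-uncased was not trained autoregressively, so the generated token is only illustrative:
import jax.numpy as jnp
next_token_id = int(jnp.argmax(next_token_logits, axis=-1)[0])  # greedy choice
print(tokenizer.decode([next_token_id]))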
FlaxBertForMaskedLM
class transformers.FlaxBertForMaskedLM
(
config: BertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Bert Model with a language modeling head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBertForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = FlaxBertForMaskedLM.from_pretrained("bert-base-uncased")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
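To recover the predicted word at the masked position, a possible (unofficial) continuation locates the [MASK] token and takes the highest-scoring vocabulary entry:
import jax.numpy as jnp
mask_index = int(jnp.argmax(inputs["input_ids"][0] == tokenizer.mask_token_id))  # first [MASK] position
predicted_id = int(jnp.argmax(logits[0, mask_index]))
print(tokenizer.decode([predicted_id]))  # expected to be "paris" for this prompt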
FlaxBertForNextSentencePrediction
class transformers.FlaxBertForNextSentencePrediction
(
config: BertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Bert Model with a next sentence prediction (classification) head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxNextSentencePredictorOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxNextSentencePredictorOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxNextSentencePredictorOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, 2)) — Prediction scores of the next sentence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBertForNextSentencePrediction
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = FlaxBertForNextSentencePrediction.from_pretrained("bert-base-uncased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
next_sentence = "The sky is blue due to the shorter wavelength of blue light."
encoding = tokenizer(prompt, next_sentence, return_tensors="jax")
outputs = model(**encoding)
logits = outputs.logits
assert logits[0, 0] < logits[0, 1] # next sentence was random
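The two logits can also be read as probabilities; in this illustrative sketch, index 0 is the "sentence B follows sentence A" class and index 1 the "random sentence" class, consistent with the assert above:
import jax
probabilities = jax.nn.softmax(logits, axis=-1)
print(f"is next: {float(probabilities[0, 0]):.3f}, random: {float(probabilities[0, 1]):.3f}")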
FlaxBertForSequenceClassification
class transformers.FlaxBertForSequenceClassification
(
config: BertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = FlaxBertForSequenceClassification.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
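As an illustrative follow-up, the predicted class can be mapped to a label name through the config; keep in mind that the sequence classification head of bert-base-uncased is freshly initialized here, so the prediction itself carries no meaning:
import jax.numpy as jnp
predicted_class_id = int(jnp.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted_class_id])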
FlaxBertForMultipleChoice
class transformers.FlaxBertForMultipleChoice
(
config: BertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Bert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBertForMultipleChoice
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = FlaxBertForMultipleChoice.from_pretrained("bert-base-uncased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="jax", padding=True)
outputs = model(**{k: v[None, :] for k, v in encoding.items()})
logits = outputs.logits
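A possible follow-up picks the highest-scoring choice; since the multiple-choice head is freshly initialized for bert-base-uncased, the selection is only illustrative:
import jax.numpy as jnp
best_choice = int(jnp.argmax(logits, axis=-1)[0])
print([choice0, choice1][best_choice])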
FlaxBertForTokenClassification
class transformers.FlaxBertForTokenClassification
(
config: BertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Bert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBertForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = FlaxBertForTokenClassification.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
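To pair each token with its predicted label id (again purely illustrative, because the token classification head is freshly initialized), one can do:
import jax.numpy as jnp
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
predicted_label_ids = jnp.argmax(logits, axis=-1)[0].tolist()
for token, label_id in zip(tokens, predicted_label_ids):
    print(token, model.config.id2label[label_id])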
FlaxBertForQuestionAnswering
class transformers.FlaxBertForQuestionAnswering
(
config: BertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (BertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Bert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertConfig) and inputs.
start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBertForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = FlaxBertForQuestionAnswering.from_pretrained("bert-base-uncased")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="jax")
outputs = model(**inputs)
start_scores = outputs.start_logits
end_scores = outputs.end_logits
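One way to decode the answer span from these scores (purely illustrative here, since the question answering head of bert-base-uncased is freshly initialized) is:
import jax.numpy as jnp
start_index = int(jnp.argmax(start_scores, axis=-1)[0])
end_index = int(jnp.argmax(end_scores, axis=-1)[0])
answer_ids = inputs["input_ids"][0][start_index : end_index + 1]
print(tokenizer.decode(answer_ids.tolist()))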
RoCBert
Overview
The RoCBert model was proposed in RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining by Hui Su, Weiwei Shi, Xiaoyu Shen, Xiao Zhou, Tuo Ji, Jiarui Fang, Jie Zhou.
It’s a pretrained Chinese language model that is robust under various forms of adversarial attacks.
The abstract from the paper is the following:
Large-scale pretrained language models have achieved SOTA results on NLP tasks. However, they have been shown
vulnerable to adversarial attacks especially for logographic languages like Chinese. In this work, we propose
ROCBERT: a pretrained Chinese Bert that is robust to various forms of adversarial attacks like word perturbation,
synonyms, typos, etc. It is pretrained with the contrastive learning objective which maximizes the label consistency
under different synthesized adversarial examples. The model takes as input multimodal information including the
semantic, phonetic and visual features. We show all these features are important to the model robustness since the
attack can be performed in all the three forms. Across 5 Chinese NLU tasks, ROCBERT outperforms strong baselines under
three blackbox adversarial algorithms without sacrificing the performance on clean testset. It also performs the best
in the toxic content detection task under human-made attacks.
This model was contributed by weiweishi.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
RoCBertConfig
class transformers.RoCBertConfig
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
use_cache = True
pad_token_id = 0
position_embedding_type = 'absolute'
classifier_dropout = None
enable_pronunciation = True
enable_shape = True
pronunciation_embed_dim = 768
pronunciation_vocab_size = 910
shape_embed_dim = 512
shape_vocab_size = 24858
concat_input = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the RoCBert model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling RoCBertModel.
hidden_size (int, optional, defaults to 768) —
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling RoCBertModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
enable_pronunciation (bool, optional, defaults to True) —
Whether or not the model uses the pronunciation embedding when training.
enable_shape (bool, optional, defaults to True) —
Whether or not the model uses the shape embedding when training.
pronunciation_embed_dim (int, optional, defaults to 768) —
Dimension of the pronunciation_embed.
pronunciation_vocab_size (int, optional, defaults to 910) —
Pronunciation Vocabulary size of the RoCBert model. Defines the number of different tokens that can be
represented by the input_pronunciation_ids passed when calling RoCBertModel.
shape_embed_dim (int, optional, defaults to 512) —
Dimension of the shape_embed.
shape_vocab_size (int, optional, defaults to 24858) —
Shape Vocabulary size of the RoCBert model. Defines the number of different tokens that can be represented
by the input_shape_ids passed when calling RoCBertModel.
concat_input (bool, optional, defaults to True) —
Defines the way of merging the shape_embed, pronunciation_embed and word_embed. If the value is True,
output_embed = torch.cat((word_embed, shape_embed, pronunciation_embed), -1); otherwise output_embed =
(word_embed + shape_embed + pronunciation_embed) / 3.
This is the configuration class to store the configuration of a RoCBertModel. It is used to instantiate a
RoCBert model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the RoCBert
weiweishi/roc-bert-base-zh architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import RoCBertModel, RoCBertConfig
# Initializing a RoCBert weiweishi/roc-bert-base-zh style configuration
configuration = RoCBertConfig()
# Initializing a model from the weiweishi/roc-bert-base-zh style configuration
model = RoCBertModel(configuration)
# Accessing the model configuration
configuration = model.config
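The multimodal options documented above can be adjusted in the same way; the sketch below (using only parameters listed in this section) disables the shape and pronunciation channels:
from transformers import RoCBertConfig, RoCBertModel
# disable the shape and pronunciation embedding channels described above
configuration = RoCBertConfig(enable_shape=False, enable_pronunciation=False)
model = RoCBertModel(configuration)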
RoCBertTokenizer
class transformers.RoCBertTokenizer
(
vocab_file
word_shape_file
word_pronunciation_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
Construct a RoCBert tokenizer, based on WordPiece. This tokenizer inherits from PreTrainedTokenizer, which
contains most of the main methods. Users should refer to this superclass for more information regarding those
methods.
vocab_file (str):
File containing the vocabulary.
word_shape_file (str):
File containing the word => shape info.
word_pronunciation_file (str):
File containing the word => pronunciation info.
do_lower_case (bool, optional, defaults to True):
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (bool, optional, defaults to True):
Whether or not to do basic tokenization before WordPiece.
never_split (Iterable, optional):
Collection of tokens which will never be split during tokenization. Only has an effect when
do_basic_tokenize=True
unk_token (str, optional, defaults to "[UNK]"):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]"):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]"):
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]"):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]"):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (bool, optional, defaults to True):
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional):
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original BERT).
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
cls_token_id: int = None
sep_token_id: int = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A BERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
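A minimal calling sketch, assuming the weiweishi/roc-bert-base-zh checkpoint (not part of the generated reference):
from transformers import RoCBertTokenizer

tokenizer = RoCBertTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("你好"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("很高兴认识你"))
# Wraps the first sequence in [CLS] ... [SEP] and appends B [SEP] when a second sequence is given.
pair_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)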
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
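For illustration, a short sketch with the same assumed checkpoint (not part of the generated reference):
from transformers import RoCBertTokenizer

tokenizer = RoCBertTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
ids = tokenizer.encode("你好", add_special_tokens=True)
# 1 marks the positions of [CLS] and [SEP]; 0 marks ordinary sequence tokens.
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)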
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence
pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
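A short sketch of the mask above, again assuming the weiweishi/roc-bert-base-zh checkpoint:
from transformers import RoCBertTokenizer

tokenizer = RoCBertTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("你好"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("很高兴认识你"))
# 0s cover [CLS], the first sequence and its [SEP]; 1s cover the second sequence and its [SEP].
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)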
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
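Saves the tokenizer's vocabulary files to a directory. A hedged sketch (the exact filenames written and the returned paths are implementation details of the tokenizer):
import os
from transformers import RoCBertTokenizer

tokenizer = RoCBertTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
os.makedirs("./rocbert-tokenizer", exist_ok=True)
# Expected to write the vocabulary, word shape and word pronunciation files and return their paths.
saved_files = tokenizer.save_vocabulary("./rocbert-tokenizer")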
RoCBertModel
class transformers.RoCBertModel
(
config
add_pooling_layer = True
)
Parameters
config (RoCBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare RoCBert Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both is_decoder and
add_cross_attention set to True; encoder_hidden_states is then expected as an input to the forward pass.
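A configuration sketch for decoder-style use (not part of the generated reference; the random encoder_hidden_states tensor below merely stands in for the output of an upstream encoder):
from transformers import AutoTokenizer, RoCBertConfig, RoCBertModel
import torch

config = RoCBertConfig.from_pretrained("weiweishi/roc-bert-base-zh")
config.is_decoder = True
config.add_cross_attention = True
decoder = RoCBertModel.from_pretrained("weiweishi/roc-bert-base-zh", config=config)
tokenizer = AutoTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
inputs = tokenizer("你好", return_tensors="pt")
# Stand-in encoder output of shape (batch_size, encoder_sequence_length, hidden_size).
encoder_hidden_states = torch.randn(1, 4, config.hidden_size)
outputs = decoder(**inputs, encoder_hidden_states=encoder_hidden_states)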
forward
(
input_ids: typing.Optional[torch.Tensor] = None
input_shape_ids: typing.Optional[torch.Tensor] = None
input_pronunciation_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
input_shape_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the shape vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
input_pronunciation_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the pronunciation vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoCBertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The RoCBertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing
steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RoCBertModel
import torch
tokenizer = AutoTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
model = RoCBertModel.from_pretrained("weiweishi/roc-bert-base-zh")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
RoCBertForPreTraining
class transformers.RoCBertForPreTraining
(
config
)
Parameters
config (RoCBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoCBert Model with a contrastive loss and a masked language modeling loss on top, as used during pretraining.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
input_shape_ids: typing.Optional[torch.Tensor] = None
input_pronunciation_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
attack_input_ids: typing.Optional[torch.Tensor] = None
attack_input_shape_ids: typing.Optional[torch.Tensor] = None
attack_input_pronunciation_ids: typing.Optional[torch.Tensor] = None
attack_attention_mask: typing.Optional[torch.Tensor] = None
attack_token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels_input_ids: typing.Optional[torch.Tensor] = None
labels_input_shape_ids: typing.Optional[torch.Tensor] = None
labels_input_pronunciation_ids: typing.Optional[torch.Tensor] = None
labels_attention_mask: typing.Optional[torch.Tensor] = None
labels_token_type_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
input_shape_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the shape vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
input_pronunciation_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the pronunciation vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
attack_input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional):
Attack sample IDs used to compute the contrastive loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked);
the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
attack_input_shape_ids (torch.LongTensor of shape (batch_size, sequence_length), optional):
Attack sample shape IDs used to compute the contrastive loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked);
the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
attack_input_pronunciation_ids (torch.LongTensor of shape (batch_size, sequence_length), optional):
Attack sample pronunciation IDs used to compute the contrastive loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored
(masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
labels_input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional):
Target IDs used to compute the contrastive loss and the masked LM loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked);
the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
labels_input_shape_ids (torch.LongTensor of shape (batch_size, sequence_length), optional):
Target shape IDs used to compute the contrastive loss and the masked LM loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored
(masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
labels_input_pronunciation_ids (torch.LongTensor of shape (batch_size, sequence_length), optional):
Target pronunciation IDs used to compute the contrastive loss and the masked LM loss. Indices should be in
[-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are
ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
kwargs (Dict[str, any], optional, defaults to {}):
Used to hide legacy arguments that have been deprecated.
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoCBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RoCBertForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing
steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RoCBertForPreTraining
import torch
tokenizer = AutoTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
model = RoCBertForPreTraining.from_pretrained("weiweishi/roc-bert-base-zh")
inputs = tokenizer("你好,很高兴认识你", return_tensors="pt")
attack_inputs = {}
for key in list(inputs.keys()):
... attack_inputs[f"attack_{key}"] = inputs[key]
label_inputs = {}
for key in list(inputs.keys()):
... label_inputs[f"labels_{key}"] = inputs[key]
inputs.update(label_inputs)
inputs.update(attack_inputs)
outputs = model(**inputs)
logits = outputs.logits
logits.shape
torch.Size([1, 11, 21128])
RoCBertForCausalLM
class transformers.RoCBertForCausalLM
(
config
)
Parameters
config (RoCBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoCBert Model with a language modeling head on top for CLM fine-tuning.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
input_shape_ids: typing.Optional[torch.Tensor] = None
input_pronunciation_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.Tensor]] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
input_shape_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the shape vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
input_pronunciation_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the pronunciation vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional tensors are
only required when the model is used as a decoder in a Sequence to Sequence model.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are
ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoCBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The RoCBertForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing
steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RoCBertForCausalLM, RoCBertConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
config = RoCBertConfig.from_pretrained("weiweishi/roc-bert-base-zh")
config.is_decoder = True
model = RoCBertForCausalLM.from_pretrained("weiweishi/roc-bert-base-zh", config=config)
inputs = tokenizer("你好,很高兴认识你", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
RoCBertForMaskedLM
class transformers.RoCBertForMaskedLM
(
config
)
Parameters
config (RoCBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoCBert Model with a language modeling head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
input_shape_ids: typing.Optional[torch.Tensor] = None
input_pronunciation_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
input_shape_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the shape vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
input_pronunciation_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the pronunciation vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
The RoCBertForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing
steps while the latter silently ignores them.
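The generated reference stops short of an example here; the following minimal sketch (assuming the weiweishi/roc-bert-base-zh checkpoint) shows a forward pass with labels:
from transformers import AutoTokenizer, RoCBertForMaskedLM
import torch

tokenizer = AutoTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
model = RoCBertForMaskedLM.from_pretrained("weiweishi/roc-bert-base-zh")
inputs = tokenizer("你好,很高兴认识你", return_tensors="pt")
# Score every position against its original token; positions set to -100 would be ignored by the loss.
labels = inputs["input_ids"].clone()
with torch.no_grad():
    outputs = model(**inputs, labels=labels)
print(outputs.loss, outputs.logits.shape)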
RoCBertForSequenceClassification
class transformers.RoCBertForSequenceClassification
(
config
)
Parameters
config (RoCBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoCBert Model transformer with a sequence classification/regression head on top (a linear layer on top of
the pooled output) e.g. for GLUE tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
input_shape_ids: typing.Optional[torch.Tensor] = None
input_pronunciation_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
input_shape_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the shape vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
input_pronunciation_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the pronunciation vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoCBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RoCBertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing
steps while the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, RoCBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("ArthurZ/dummy-rocbert-seq")
model = RoCBertForSequenceClassification.from_pretrained("ArthurZ/dummy-rocbert-seq")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
'financial news'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = RoCBertForSequenceClassification.from_pretrained("ArthurZ/dummy-rocbert-seq", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
2.31
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, RoCBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("ArthurZ/dummy-rocbert-seq")
model = RoCBertForSequenceClassification.from_pretrained("ArthurZ/dummy-rocbert-seq", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = RoCBertForSequenceClassification.from_pretrained(
... "ArthurZ/dummy-rocbert-seq", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
RoCBertForMultipleChoice
class transformers.RoCBertForMultipleChoice
(
config
)
Parameters
config (RoCBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoCBert Model with a multiple choice classification head on top (a linear layer on top of
the pooled output and a softmax) e.g. for RocStories/SWAG tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
input_shape_ids: typing.Optional[torch.Tensor] = None
input_pronunciation_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
input_shape_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the shape vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
input_pronunciation_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the pronunciation vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoCBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RoCBertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing
steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RoCBertForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
model = RoCBertForMultipleChoice.from_pretrained("weiweishi/roc-bert-base-zh")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
RoCBertForTokenClassification
class transformers.RoCBertForTokenClassification
(
config
)
Parameters
config (RoCBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoCBert Model with a token classification head on top (a linear layer on top of
the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
input_shape_ids: typing.Optional[torch.Tensor] = None
input_pronunciation_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
input_shape_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the shape vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
input_pronunciation_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the pronunciation vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoCBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RoCBertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this function, since the former takes care of running the pre- and post-processing
steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RoCBertForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("ArthurZ/dummy-rocbert-ner")
model = RoCBertForTokenClassification.from_pretrained("ArthurZ/dummy-rocbert-ner")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
predicted_tokens_classes
['S-EVENT', 'S-FAC', 'I-ORDINAL', 'I-ORDINAL', 'E-ORG', 'E-LANGUAGE', 'E-ORG', 'E-ORG', 'E-ORG', 'E-ORG', 'I-EVENT', 'S-TIME', 'S-TIME', 'E-LANGUAGE', 'S-TIME', 'E-DATE', 'I-ORDINAL', 'E-QUANTITY', 'E-LANGUAGE', 'S-TIME', 'B-ORDINAL', 'S-PRODUCT', 'E-LANGUAGE', 'E-LANGUAGE', 'E-ORG', 'E-LOC', 'S-TIME', 'I-ORDINAL', 'S-FAC', 'O', 'S-GPE', 'I-EVENT', 'S-GPE', 'E-LANGUAGE', 'E-ORG', 'S-EVENT', 'S-FAC', 'S-FAC', 'S-FAC', 'E-ORG', 'S-FAC', 'E-ORG', 'S-GPE']
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
3.62
RoCBertForQuestionAnswering
class transformers.RoCBertForQuestionAnswering
<
source
>
(
config
)
Parameters
config (RoCBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoCBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
input_shape_ids: typing.Optional[torch.Tensor] = None
input_pronunciation_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
input_shape_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the shape vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
input_pronunciation_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the pronunciation vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoCBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RoCBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, RoCBertForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("ArthurZ/dummy-rocbert-qa")
model = RoCBertForQuestionAnswering.from_pretrained("ArthurZ/dummy-rocbert-qa")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
''
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
round(loss.item(), 2)
3.75
Longformer
Overview
The Longformer model was presented in Longformer: The Long-Document Transformer by Iz Beltagy, Matthew E. Peters, Arman Cohan.
The abstract from the paper is the following:
Transformer-based models are unable to process long sequences due to their self-attention operation, which scales
quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention
mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or
longer. Longformer’s attention mechanism is a drop-in replacement for the standard self-attention and combines a local
windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we
evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In
contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our
pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on
WikiHop and TriviaQA.
Tips:
Since the Longformer is based on RoBERTa, it doesn’t have token_type_ids. You don’t need to indicate which
token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or
</s>).
Longformer is a transformer model that replaces the attention matrices with sparse matrices to go faster. Often, the local context (e.g., what are the two tokens to the left and right?) is enough to take action for a given token. Some preselected input tokens are still given global attention, but the attention matrix has far fewer parameters, resulting in a speed-up. See the local attention section for more information.
This model was contributed by beltagy. The authors’ code can be found here.
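As noted in the tips, Longformer has no token_type_ids; when you pass two texts, the tokenizer inserts the separator for you. A minimal sketch of this, assuming the standard allenai/longformer-base-4096 checkpoint:
from transformers import LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
encoded = tokenizer("first segment", "second segment")
# The two segments are joined with the separator (</s></s>);
# no token_type_ids key is returned, only input_ids and attention_mask.
list(encoded.keys())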
Longformer Self Attention
Longformer self attention employs self attention on both a “local” context and a “global” context. Most tokens only
attend “locally” to each other, meaning that each token attends to its $\frac{1}{2} w$ previous tokens and
$\frac{1}{2} w$ succeeding tokens, with $w$ being the window length as defined in
config.attention_window. Note that config.attention_window can be of type List to define a
different $w$ for each layer. A selected few tokens attend “globally” to all other tokens, as is
conventionally done for all tokens in BertSelfAttention.
Note that “locally” and “globally” attending tokens are projected by different query, key and value matrices. Also note
that every “locally” attending token not only attends to tokens within its window $w$, but also to all “globally”
attending tokens so that global attention is symmetric.
The user can define which tokens attend “locally” and which tokens attend “globally” by setting the tensor
global_attention_mask at run-time appropriately. All Longformer models employ the following logic for
global_attention_mask:
0: the token attends “locally”,
1: the token attends “globally”.
For more information, please also refer to the forward() method.
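A minimal sketch of setting this mask by hand (the checkpoint name is the standard allenai/longformer-base-4096; giving only the first token global attention is just an illustration, and which tokens should attend globally is task-specific):
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("A very long document " * 100, return_tensors="pt")

# 0 -> local attention, 1 -> global attention
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # e.g. give the <s> token global attention

outputs = model(**inputs, global_attention_mask=global_attention_mask)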
Using Longformer self attention, the memory and time complexity of the query-key matmul operation, which usually
represents the memory and time bottleneck, can be reduced from $\mathcal{O}(n_s \times n_s)$ to
$\mathcal{O}(n_s \times w)$, with $n_s$ being the sequence length and $w$ being the average window
size. It is assumed that the number of “globally” attending tokens is insignificant as compared to the number of
“locally” attending tokens.
For more information, please refer to the official paper.
Training
LongformerForMaskedLM is trained in exactly the same way as RobertaForMaskedLM and should be used as follows:
Copied
# assumes `tokenizer` and `model` are a LongformerTokenizer and a LongformerForMaskedLM,
# e.g. loaded from the "allenai/longformer-base-4096" checkpoint
input_ids = tokenizer.encode("This is a sentence from <mask> training data", return_tensors="pt")
mlm_labels = tokenizer.encode("This is a sentence from the training data", return_tensors="pt")
loss = model(input_ids, labels=mlm_labels)[0]
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
LongformerConfig
class transformers.LongformerConfig
<
source
>
(
attention_window: typing.Union[typing.List[int], int] = 512
sep_token_id: int = 2
pad_token_id: int = 1
bos_token_id: int = 0
eos_token_id: int = 2
vocab_size: int = 30522
hidden_size: int = 768
num_hidden_layers: int = 12
num_attention_heads: int = 12
intermediate_size: int = 3072
hidden_act: str = 'gelu'
hidden_dropout_prob: float = 0.1
attention_probs_dropout_prob: float = 0.1
max_position_embeddings: int = 512
type_vocab_size: int = 2
initializer_range: float = 0.02
layer_norm_eps: float = 1e-12
onnx_export: bool = False
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the Longformer model. Defines the number of different tokens that can be represented by
the input_ids passed when calling LongformerModel or TFLongformerModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling LongformerModel or
TFLongformerModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
attention_window (int or List[int], optional, defaults to 512) —
Size of an attention window around each token. If an int, use the same size for all layers. To specify a
different window size for each layer, use a List[int] where len(attention_window) == num_hidden_layers.
This is the configuration class to store the configuration of a LongformerModel or a TFLongformerModel. It
is used to instantiate a Longformer model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Longformer
allenai/longformer-base-4096 architecture with a sequence
length of 4,096.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
Copied
from transformers import LongformerConfig, LongformerModel
# Initializing a Longformer configuration
configuration = LongformerConfig()
# Initializing a model from the configuration
model = LongformerModel(configuration)
# Accessing the model configuration
configuration = model.config
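Since attention_window also accepts a list, the window size can differ per layer. A minimal sketch (the twelve values below are made up for illustration and simply match the default num_hidden_layers=12):
from transformers import LongformerConfig, LongformerModel

# One local attention window size per hidden layer
# (len(attention_window) must equal num_hidden_layers).
configuration = LongformerConfig(attention_window=[64, 64, 64, 64, 128, 128, 128, 128, 256, 256, 256, 256])
model = LongformerModel(configuration)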
LongformerTokenizer
class transformers.LongformerTokenizer
<
source
>
(
vocab_file
merges_file
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows treating the leading word just like any
other word. (The Longformer tokenizer detects the beginning of words by the preceding space.)
Constructs a Longformer tokenizer, derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without space) or not:
Copied
from transformers import LongformerTokenizer
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).
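For instance, a sketch of the add_prefix_space=True option mentioned above (the token ids are expected to match the " Hello world" encoding shown earlier):
from transformers import LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096", add_prefix_space=True)
tokenizer("Hello world")["input_ids"]
# expected to match the " Hello world" encoding above, i.e. [0, 20920, 232, 2]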
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A Longformer sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
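A small sketch of the resulting format, reusing token ids from the encoding example above (with <s> = 0 and </s> = 2 for allenai/longformer-base-4096):
from transformers import LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
tokenizer.build_inputs_with_special_tokens([31414, 232])
# [0, 31414, 232, 2]        -> <s> X </s>
tokenizer.build_inputs_with_special_tokens([31414], [232])
# [0, 31414, 2, 2, 232, 2]  -> <s> A </s></s> B </s>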
convert_tokens_to_string
<
source
>
(
tokens
)
Converts a sequence of tokens (string) into a single string.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. Longformer does
not make use of token type ids, therefore a list of zeros is returned.
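For example (a sketch; the returned list has one zero per position of the sequence once special tokens are added):
from transformers import LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
tokenizer.create_token_type_ids_from_sequences([31414, 232])
# [0, 0, 0, 0]           -> <s> X </s>
tokenizer.create_token_type_ids_from_sequences([31414], [232])
# [0, 0, 0, 0, 0, 0]     -> <s> A </s></s> B </s>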
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
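A sketch of both usage modes, reusing ids from the examples above:
from transformers import LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
# Plain ids: the mask marks where special tokens would be added.
tokenizer.get_special_tokens_mask([31414, 232])
# [1, 0, 0, 1]
# Ids that already contain special tokens (<s> = 0, </s> = 2):
tokenizer.get_special_tokens_mask([0, 31414, 232, 2], already_has_special_tokens=True)
# [1, 0, 0, 1]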
LongformerTokenizerFast
class transformers.LongformerTokenizerFast
<
source
>
(
vocab_file = None
merges_file = None
tokenizer_file = None
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = False
trim_offsets = True
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows treating the leading word just like any
other word. (The Longformer tokenizer detects the beginning of words by the preceding space.)
trim_offsets (bool, optional, defaults to True) —
Whether the post processing step should trim offsets to avoid including whitespaces.
Construct a “fast” Longformer tokenizer (backed by HuggingFace’s tokenizers library), derived from the GPT-2
tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without space) or not:
Copied
from transformers import LongformerTokenizerFast
tokenizer = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. Longformer does
not make use of token type ids, therefore a list of zeros is returned.
Longformer specific outputs
class transformers.models.longformer.modeling_longformer.LongformerBaseModelOutput
<
source
>
(
last_hidden_state: FloatTensor
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for Longformer’s outputs, with potential hidden states, local and global attentions.
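A minimal sketch of how these fields can be accessed (the commented shapes follow the layout described above; here x = 1 because only the first token is given global attention):
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("A long document " * 200, return_tensors="pt")
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # x = 1 globally attending token

outputs = model(**inputs, global_attention_mask=global_attention_mask, output_attentions=True)
outputs.last_hidden_state.shape     # (batch_size, sequence_length, hidden_size)
outputs.attentions[0].shape         # documented as (batch_size, num_heads, sequence_length, x + attention_window + 1)
outputs.global_attentions[0].shape  # documented as (batch_size, num_heads, sequence_length, x)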
class transformers.models.longformer.modeling_longformer.LongformerBaseModelOutputWithPooling
<
source
>
(
last_hidden_state: FloatTensor
pooler_output: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) —
Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for Longformer’s outputs that also contains a pooling of the last hidden states.
class transformers.models.longformer.modeling_longformer.LongformerMaskedLMOutput
<
source
>
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for masked language models outputs.
class transformers.models.longformer.modeling_longformer.LongformerQuestionAnsweringModelOutput
<
source
>
(
loss: typing.Optional[torch.FloatTensor] = None
start_logits: FloatTensor = None
end_logits: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) —
Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) —
Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for outputs of question answering Longformer models.
class transformers.models.longformer.modeling_longformer.LongformerSequenceClassifierOutput
<
source
>
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) —
Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for outputs of sentence classification models.
class transformers.models.longformer.modeling_longformer.LongformerMultipleChoiceModelOutput
<
source
>
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) —
num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for outputs of multiple choice Longformer models.
class transformers.models.longformer.modeling_longformer.LongformerTokenClassifierOutput
<
source
>
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) —
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for outputs of token classification models.
class transformers.models.longformer.modeling_tf_longformer.TFLongformerBaseModelOutput
<
source
>
(
last_hidden_state: tf.Tensor = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
global_attentions: Tuple[tf.Tensor] | None = None
)
Parameters
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for Longformer’s outputs, with potential hidden states, local and global attentions.
class transformers.models.longformer.modeling_tf_longformer.TFLongformerBaseModelOutputWithPooling
<
source
>
(
last_hidden_state: tf.Tensor = None
pooler_output: tf.Tensor = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
global_attentions: Tuple[tf.Tensor] | None = None
)
Parameters
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) —
Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for Longformer’s outputs that also contains a pooling of the last hidden states.
class transformers.models.longformer.modeling_tf_longformer.TFLongformerMaskedLMOutput
<
source
>
(
loss: tf.Tensor | None = None
logits: tf.Tensor = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
global_attentions: Tuple[tf.Tensor] | None = None
)
Parameters
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) —
Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for masked language models outputs.
class transformers.models.longformer.modeling_tf_longformer.TFLongformerQuestionAnsweringModelOutput
<
source
>
(
loss: tf.Tensor | None = None
start_logits: tf.Tensor = None
end_logits: tf.Tensor = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
global_attentions: Tuple[tf.Tensor] | None = None
)
Parameters
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) —
Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) —
Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) —
Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for outputs of question answering Longformer models.
class transformers.models.longformer.modeling_tf_longformer.TFLongformerSequenceClassifierOutput
(
loss: tf.Tensor | None = None
logits: tf.Tensor = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
global_attentions: Tuple[tf.Tensor] | None = None
)
Parameters
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) —
Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) —
Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for outputs of sentence classification models.
class transformers.models.longformer.modeling_tf_longformer.TFLongformerMultipleChoiceModelOutput
(
loss: tf.Tensor | None = None
logits: tf.Tensor = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
global_attentions: Tuple[tf.Tensor] | None = None
)
Parameters
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) —
Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) —
num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for outputs of multiple choice models.
class transformers.models.longformer.modeling_tf_longformer.TFLongformerTokenClassifierOutput
(
loss: tf.Tensor | None = None
logits: tf.Tensor = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
global_attentions: Tuple[tf.Tensor] | None = None
)
Parameters
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) —
Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) —
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for outputs of token classification models.
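The layout of these outputs is easiest to see by printing the tensors. Below is a minimal sketch, assuming the allenai/longformer-base-4096 checkpoint and an arbitrary example sentence, that retrieves the attentions and global_attentions documented above from TFLongformerModel with global attention on the first (<s>) token only:
import tensorflow as tf
from transformers import AutoTokenizer, TFLongformerModel
tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = TFLongformerModel.from_pretrained("allenai/longformer-base-4096")
inputs = tokenizer("Long documents need long attention windows.", return_tensors="tf")
# global attention on the first (<s>) token, local attention everywhere else
global_attention_mask = tf.concat([tf.ones_like(inputs["input_ids"][:, :1]), tf.zeros_like(inputs["input_ids"][:, 1:])], axis=-1)
outputs = model(**inputs, global_attention_mask=global_attention_mask, output_attentions=True)
# per the documentation above: attentions[i] is (batch_size, num_heads, sequence_length, x + attention_window + 1),
# global_attentions[i] is (batch_size, num_heads, sequence_length, x), where x is the number of global tokens (1 here)
print(outputs.attentions[0].shape, outputs.global_attentions[0].shape)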
LongformerModel
class transformers.LongformerModel
(
config
add_pooling_layer = True
)
Parameters
config (LongformerConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Longformer Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
This class copied code from RobertaModel and overwrote standard self-attention with Longformer self-attention
to provide the ability to process long sequences following the self-attention approach described in Longformer:
the Long-Document Transformer by Iz Beltagy, Matthew E. Peters, and Arman Cohan.
Longformer self-attention combines local (sliding window) and global attention to extend to long documents
without the O(n^2) increase in memory and compute.
The self-attention module LongformerSelfAttention implemented here supports the combination of local and global
attention but lacks support for autoregressive attention and dilated attention. Autoregressive and dilated
attention are more relevant for autoregressive language modeling than for finetuning on downstream tasks. A future
release will add support for autoregressive attention, but support for dilated attention requires a custom CUDA
kernel to be memory- and compute-efficient.
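The size of the local window is set by the attention_window value of LongformerConfig, either a single even integer shared by all layers or one value per layer. The sketch below is only illustrative: the window size is an assumption and the model is randomly initialized rather than pretrained.
from transformers import LongformerConfig, LongformerModel
config = LongformerConfig(attention_window=256)  # must be a positive even number; 256 is an illustrative choice
model = LongformerModel(config)  # randomly initialized, for illustration only
print(model.config.attention_window)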
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
global_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.longformer.modeling_longformer.LongformerBaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
global_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to decide the attention given to each token, local attention or global attention. Tokens with global
attention attend to all other tokens, and all other tokens attend to them. This is important for
task-specific finetuning because it makes the model more flexible at representing the task. For example,
for classification, the <s> token should be given global attention. For QA, all question tokens should also
have global attention. Please refer to the Longformer paper for more
details. Mask values selected in [0, 1]:
0 for local attention (a sliding window attention),
1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
head_mask (torch.Tensor of shape (num_layers, num_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (num_layers, num_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.longformer.modeling_longformer.LongformerBaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.models.longformer.modeling_longformer.LongformerBaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LongformerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
The LongformerModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
import torch
from transformers import LongformerModel, AutoTokenizer
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")
tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
SAMPLE_TEXT = " ".join(["Hello world! "] * 1000) # long input document
input_ids = torch.tensor(tokenizer.encode(SAMPLE_TEXT)).unsqueeze(0) # batch of size 1
attention_mask = torch.ones(
... input_ids.shape, dtype=torch.long, device=input_ids.device
... ) # initialize to local attention
global_attention_mask = torch.zeros(
... input_ids.shape, dtype=torch.long, device=input_ids.device
... ) # initialize to global attention to be deactivated for all tokens
global_attention_mask[
... :,
... [
... 1,
... 4,
... 21,
... ],
... ] = 1 # Set global attention to random tokens for the sake of this example
# Usually, set global attention based on the task. For example,
# classification: the <s> token
# QA: question tokens
# LM: potentially on the beginning of sentences and paragraphs
outputs = model(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask)
sequence_output = outputs.last_hidden_state
pooled_output = outputs.pooler_output
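As a hedged follow-up to the example above, the attention tensors can be requested with output_attentions=True and indexed according to the layout documented in the return section; the token index below is an arbitrary illustrative choice.
outputs = model(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask, output_attentions=True)
local_attn = outputs.attentions[0]  # (batch_size, num_heads, sequence_length, x + attention_window + 1)
x = int(global_attention_mask.sum().item())  # number of tokens with global attention (3 in this example)
w = model.config.attention_window
w = w[0] if isinstance(w, (list, tuple)) else w
# per the description above, a token's attention weight to itself sits at index x + attention_window / 2
self_weight = local_attn[0, 0, 100, x + w // 2]
global_attn = outputs.global_attentions[0]  # (batch_size, num_heads, sequence_length, x)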
LongformerForMaskedLM
class transformers.LongformerForMaskedLM
(
config
)
Parameters
config (LongformerConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
global_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.longformer.modeling_longformer.LongformerMaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
global_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to decide the attention given to each token, local attention or global attention. Tokens with global
attention attend to all other tokens, and all other tokens attend to them. This is important for
task-specific finetuning because it makes the model more flexible at representing the task. For example,
for classification, the <s> token should be given global attention. For QA, all question tokens should also
have global attention. Please refer to the Longformer paper for more
details. Mask values selected in [0, 1]:
0 for local attention (a sliding window attention),
1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
head_mask (torch.Tensor of shape (num_layers, num_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (num_layers, num_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
kwargs (Dict[str, any], optional, defaults to {}) —
Used to hide legacy arguments that have been deprecated.
Returns
transformers.models.longformer.modeling_longformer.LongformerMaskedLMOutput or tuple(torch.FloatTensor)
A transformers.models.longformer.modeling_longformer.LongformerMaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LongformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
The LongformerForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Mask filling example:
from transformers import AutoTokenizer, LongformerForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForMaskedLM.from_pretrained("allenai/longformer-base-4096")
Let’s try a very long input.
TXT = (
... "My friends are <mask> but they eat too many carbs."
... + " That's why I decide not to eat with them." * 300
... )
input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"]
logits = model(input_ids).logits
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, masked_index].softmax(dim=0)
values, predictions = probs.topk(5)
tokenizer.decode(predictions).split()
['healthy', 'skinny', 'thin', 'good', 'vegetarian']
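To compute the MLM loss through the labels argument described above, set every position that should not contribute to the loss to -100. The sketch below is a hedged illustration: the target word " healthy" is an assumption chosen to match the top prediction above.
target_id = tokenizer(" healthy", add_special_tokens=False)["input_ids"][0]
labels = input_ids.clone()
labels[:, :] = -100  # positions set to -100 are ignored by the loss
labels[input_ids == tokenizer.mask_token_id] = target_id  # supervise only the masked position
loss = model(input_ids, labels=labels).loss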
LongformerForSequenceClassification
class transformers.LongformerForSequenceClassification
(
config
)
Parameters
config (LongformerConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
global_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.longformer.modeling_longformer.LongformerSequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
global_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to decide the attention given to each token, local attention or global attention. Tokens with global
attention attend to all other tokens, and all other tokens attend to them. This is important for
task-specific finetuning because it makes the model more flexible at representing the task. For example,
for classification, the <s> token should be given global attention. For QA, all question tokens should also
have global attention. Please refer to the Longformer paper for more
details. Mask values selected in [0, 1]:
0 for local attention (a sliding window attention),
1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
head_mask (torch.Tensor of shape (num_layers, num_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (num_layers, num_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.models.longformer.modeling_longformer.LongformerSequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.models.longformer.modeling_longformer.LongformerSequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LongformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
The LongformerForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, LongformerForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("jpwahle/longformer-base-plagiarism-detection")
model = LongformerForSequenceClassification.from_pretrained("jpwahle/longformer-base-plagiarism-detection")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
'ORIGINAL'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = LongformerForSequenceClassification.from_pretrained("jpwahle/longformer-base-plagiarism-detection", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
5.44
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, LongformerForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("jpwahle/longformer-base-plagiarism-detection")
model = LongformerForSequenceClassification.from_pretrained("jpwahle/longformer-base-plagiarism-detection", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = LongformerForSequenceClassification.from_pretrained(
... "jpwahle/longformer-base-plagiarism-detection", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
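The global_attention_mask parameter described above suggests giving the <s> (classification) token global attention; the examples above omit it for brevity, but passing it explicitly looks roughly like this sketch:
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # global attention on the <s> token
with torch.no_grad():
... logits = model(**inputs, global_attention_mask=global_attention_mask).logits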
LongformerForMultipleChoice
class transformers.LongformerForMultipleChoice
(
config
)
Parameters
config (LongformerConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
global_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.longformer.modeling_longformer.LongformerMultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
global_attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to decide the attention given to each token, local attention or global attention. Tokens with global
attention attend to all other tokens, and all other tokens attend to them. This is important for
task-specific finetuning because it makes the model more flexible at representing the task. For example,
for classification, the <s> token should be given global attention. For QA, all question tokens should also
have global attention. Please refer to the Longformer paper for more
details. Mask values selected in [0, 1]:
0 for local attention (a sliding window attention),
1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
head_mask (torch.Tensor of shape (num_layers, num_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (num_layers, num_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.models.longformer.modeling_longformer.LongformerMultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.models.longformer.modeling_longformer.LongformerMultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LongformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
The LongformerForMultipleChoice forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, LongformerForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForMultipleChoice.from_pretrained("allenai/longformer-base-4096")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
LongformerForTokenClassification
class transformers.LongformerForTokenClassification
(
config
)
Parameters
config (LongformerConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
global_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.longformer.modeling_longformer.LongformerTokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
global_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to decide the attention given to each token, local attention or global attention. Tokens with global
attention attend to all other tokens, and all other tokens attend to them. This is important for
task-specific finetuning because it makes the model more flexible at representing the task. For example,
for classification, the <s> token should be given global attention. For QA, all question tokens should also
have global attention. Please refer to the Longformer paper for more
details. Mask values selected in [0, 1]:
0 for local attention (a sliding window attention),
1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
head_mask (torch.Tensor of shape (num_layers, num_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (num_layers, num_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.models.longformer.modeling_longformer.LongformerTokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.models.longformer.modeling_longformer.LongformerTokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LongformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2, and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
The LongformerForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, LongformerForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("brad1141/Longformer-finetuned-norm")
model = LongformerForTokenClassification.from_pretrained("brad1141/Longformer-finetuned-norm")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
predicted_tokens_classes
['Evidence', 'Evidence', 'Evidence', 'Evidence', 'Evidence', 'Evidence', 'Evidence', 'Evidence', 'Evidence', 'Evidence', 'Evidence', 'Evidence']
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
0.63
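The attentions and global_attentions tensors documented in the return section above are only populated when output_attentions=True is passed. Below is a minimal sketch of inspecting them, reusing the checkpoint from the example above; giving the first token global attention is purely an illustrative choice, and the shape comments simply restate the return description:
import torch
from transformers import AutoTokenizer, LongformerForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("brad1141/Longformer-finetuned-norm")
model = LongformerForTokenClassification.from_pretrained("brad1141/Longformer-finetuned-norm")
inputs = tokenizer("HuggingFace is a company based in Paris and New York", return_tensors="pt")
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # mark the first token as globally attending (illustrative choice)
with torch.no_grad():
...     outputs = model(**inputs, global_attention_mask=global_attention_mask, output_attentions=True)
len(outputs.attentions)  # one local-attention tensor per layer
outputs.attentions[0].shape  # (batch_size, num_heads, sequence_length, x + attention_window + 1), per the return description
outputs.global_attentions[0].shape  # (batch_size, num_heads, sequence_length, x), per the return description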
LongformerForQuestionAnswering
class transformers.LongformerForQuestionAnswering
(
config
)
Parameters
config (LongformerConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model with a span classification head on top for extractive question-answering tasks like SQuAD /
TriviaQA (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
global_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.longformer.modeling_longformer.LongformerQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
global_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to decide the attention given on each token, local attention or global attention. Tokens with global
attention attend to all other tokens, and all other tokens attend to them. This is important for
task-specific finetuning because it makes the model more flexible at representing the task. For example,
for classification, the token should be given global attention. For QA, all question tokens should also
have global attention. Please refer to the Longformer paper for more
details. Mask values selected in [0, 1]:
0 for local attention (a sliding window attention),
1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
head_mask (torch.Tensor of shape (num_layers, num_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.models.longformer.modeling_longformer.LongformerQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.models.longformer.modeling_longformer.LongformerQuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LongformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2, and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
The LongformerForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, LongformerForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")
model = LongformerForQuestionAnswering.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]
# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]
outputs = model(input_ids, attention_mask=attention_mask)
start_logits = outputs.start_logits
end_logits = outputs.end_logits
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_logits) : torch.argmax(end_logits) + 1]
answer = tokenizer.decode(
... tokenizer.convert_tokens_to_ids(answer_tokens)
... ) # remove space prepending space token
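When global_attention_mask is left unset, as in the example above, the forward method puts global attention on the question tokens automatically. A hedged sketch of building a roughly equivalent mask by hand follows; it reuses tokenizer, model, input_ids and attention_mask from the example above and assumes the question precedes the first separator token, which is how tokenizer(question, text) lays out the pair:
import torch
global_attention_mask = torch.zeros_like(input_ids)
sep_index = int((input_ids[0] == tokenizer.sep_token_id).nonzero()[0])  # position of the first separator
global_attention_mask[0, :sep_index] = 1  # the <s> token and the question tokens attend globally
outputs = model(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask)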
TFLongformerModel
class transformers.TFLongformerModel
(
*args
**kwargs
)
Parameters
config (LongformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Longformer Model outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
This class copies code from TFRobertaModel and overwrites standard self-attention with longformer
self-attention to provide the ability to process long sequences following the self-attention approach described in
Longformer: the Long-Document Transformer by Iz Beltagy, Matthew E. Peters, and
Arman Cohan. Longformer self-attention combines a local (sliding window) and global attention to extend to long
documents without the O(n^2) increase in memory and compute.
The self-attention module TFLongformerSelfAttention implemented here supports the combination of local and global
attention, but it lacks support for autoregressive attention and dilated attention. Autoregressive and dilated
attention are more relevant for autoregressive language modeling than for finetuning on downstream tasks. A future
release will add support for autoregressive attention, but support for dilated attention requires a custom CUDA
kernel to be memory- and compute-efficient.
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
global_attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
Parameters
input_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (np.ndarray or tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
global_attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to decide the attention given on each token, local attention or global attention. Tokens with global
attention attend to all other tokens, and all other tokens attend to them. This is important for
task-specific finetuning because it makes the model more flexible at representing the task. For example,
for classification, the token should be given global attention. For QA, all question tokens should also
have global attention. Please refer to the Longformer paper for more
details. Mask values selected in [0, 1]:
0 for local attention (a sliding window attention),
1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
The TFLongformerModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
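Example (a minimal usage sketch for the bare model; the allenai/longformer-base-4096 checkpoint is only an illustrative choice):
from transformers import AutoTokenizer, TFLongformerModel
tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = TFLongformerModel.from_pretrained("allenai/longformer-base-4096")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state  # raw hidden-states of the last layer, no task-specific head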
TFLongformerForMaskedLM
class transformers.TFLongformerForMaskedLM
(
*args
**kwargs
)
Parameters
config (LongformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
global_attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.models.longformer.modeling_tf_longformer.TFLongformerMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (np.ndarray or tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
global_attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to decide the attention given on each token, local attention or global attention. Tokens with global
attention attend to all other tokens, and all other tokens attend to them. This is important for
task-specific finetuning because it makes the model more flexible at representing the task. For example,
for classification, the token should be given global attention. For QA, all question tokens should also
have global attention. Please refer to the Longformer paper for more
details. Mask values selected in [0, 1]:
0 for local attention (a sliding window attention),
1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.models.longformer.modeling_tf_longformer.TFLongformerMaskedLMOutput or tuple(tf.Tensor)
A transformers.models.longformer.modeling_tf_longformer.TFLongformerMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LongformerConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2, and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
The TFLongformerForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFLongformerForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = TFLongformerForMaskedLM.from_pretrained("allenai/longformer-base-4096")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
tokenizer.decode(predicted_token_id)
' Paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-<mask> tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(float(outputs.loss), 2)
0.44
TFLongformerForQuestionAnswering
class transformers.TFLongformerForQuestionAnswering
(
*args
**kwargs
)
Parameters
config (LongformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model with a span classification head on top for extractive question-answering tasks like SQuAD /
TriviaQA (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
global_attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
start_positions: np.ndarray | tf.Tensor | None = None
end_positions: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.models.longformer.modeling_tf_longformer.TFLongformerQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (np.ndarray or tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
global_attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to decide the attention given on each token, local attention or global attention. Tokens with global
attention attend to all other tokens, and all other tokens attend to them. This is important for
task-specific finetuning because it makes the model more flexible at representing the task. For example,
for classification, the token should be given global attention. For QA, all question tokens should also
have global attention. Please refer to the Longformer paper for more
details. Mask values selected in [0, 1]:
0 for local attention (a sliding window attention),
1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.models.longformer.modeling_tf_longformer.TFLongformerQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.models.longformer.modeling_tf_longformer.TFLongformerQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LongformerConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2, and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
The TFLongformerForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFLongformerForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")
model = TFLongformerForQuestionAnswering.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
' puppet'
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
round(float(loss), 2)
0.96
TFLongformerForSequenceClassification
class transformers.TFLongformerForSequenceClassification
(
*args
**kwargs
)
Parameters
config (LongformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
global_attention_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.models.longformer.modeling_tf_longformer.TFLongformerSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (np.ndarray or tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
global_attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to decide the attention given on each token, local attention or global attention. Tokens with global
attention attend to all other tokens, and all other tokens attend to them. This is important for
task-specific finetuning because it makes the model more flexible at representing the task. For example,
for classification, the token should be given global attention. For QA, all question tokens should also
have global attention. Please refer to the Longformer paper for more
details. Mask values selected in [0, 1]:
0 for local attention (a sliding window attention),
1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.longformer.modeling_tf_longformer.TFLongformerSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.models.longformer.modeling_tf_longformer.TFLongformerSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LongformerConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2, and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
The TFLongformerForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFLongformerForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = TFLongformerForSequenceClassification.from_pretrained("allenai/longformer-base-4096")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFLongformerForSequenceClassification.from_pretrained("allenai/longformer-base-4096", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
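As discussed above, Keras methods such as model.fit() also accept all inputs in the first positional argument, for example as a dictionary keyed by the input names from the docstring. Below is a hedged training sketch under that assumption; the checkpoint, the two-example dataset and the hyperparameters are purely illustrative, and compile() is called without an explicit loss so that the model's built-in loss is used:
import tensorflow as tf
from transformers import AutoTokenizer, TFLongformerForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = TFLongformerForSequenceClassification.from_pretrained("allenai/longformer-base-4096", num_labels=2)
texts = ["I loved this film", "I did not enjoy this film"]
labels = tf.constant([1, 0])
encodings = tokenizer(texts, padding=True, return_tensors="tf")
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5))  # no explicit loss: the model's internal loss is used
model.fit(dict(encodings), labels, epochs=1, batch_size=2)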
TFLongformerForTokenClassification
class transformers.TFLongformerForTokenClassification
(
*args
**kwargs
)
Parameters
config (LongformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
global_attention_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: Optional[Union[np.array, tf.Tensor]] = None
training: Optional[bool] = False
)
→
transformers.models.longformer.modeling_tf_longformer.TFLongformerTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (np.ndarray or tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
global_attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to decide the attention given on each token, local attention or global attention. Tokens with global
attention attend to all other tokens, and all other tokens attend to them. This is important for
task-specific finetuning because it makes the model more flexible at representing the task. For example,
for classification, the <s> token should be given global attention. For QA, all question tokens should also
have global attention. Please refer to the Longformer paper for more
details. Mask values selected in [0, 1]:
0 for local attention (a sliding window attention),
1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.models.longformer.modeling_tf_longformer.TFLongformerTokenClassifierOutput or tuple(tf.Tensor)
A transformers.models.longformer.modeling_tf_longformer.TFLongformerTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LongformerConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions is set to 0, the values should be accessed from global_attentions.
global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
The TFLongformerForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFLongformerForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = TFLongformerForTokenClassification.from_pretrained("allenai/longformer-base-4096")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
TFLongformerForMultipleChoice
class transformers.TFLongformerForMultipleChoice
(
*args
**kwargs
)
Parameters
config (LongformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Longformer Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
global_attention_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.models.longformer.modeling_tf_longformer.TFLongformerMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (np.ndarray or tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
global_attention_mask (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to decide the attention given on each token, local attention or global attention. Tokens with global
attention attend to all other tokens, and all other tokens attend to them. This is important for
task-specific finetuning because it makes the model more flexible at representing the task. For example,
for classification, the <s> token should be given global attention. For QA, all question tokens should also
have global attention. Please refer to the Longformer paper for more
details. Mask values selected in [0, 1]:
0 for local attention (a sliding window attention),
1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices]
where num_choices is the size of the second dimension of the input tensors. (See input_ids above)
Returns
transformers.models.longformer.modeling_tf_longformer.TFLongformerMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.models.longformer.modeling_tf_longformer.TFLongformerMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LongformerConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1
values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining
attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is
located at index x + attention_window / 2 and the attention_window / 2 preceding (succeeding) values are the
attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a
token with global attention, the attention weight at the corresponding index is set to 0; the value should be
accessed from the first x attention weights. If a token has global attention, the attention weights to all other
tokens in attentions is set to 0, the values should be accessed from global_attentions.
global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
The TFLongformerForMultipleChoice forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFLongformerForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = TFLongformerForMultipleChoice.from_pretrained("allenai/longformer-base-4096")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
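As a minimal training-style sketch (an illustrative addition, not part of the original example), integer labels of shape (batch_size,) selecting the correct choice can be passed to obtain a loss:
labels = tf.constant([0])  # hypothetical label: choice0 is taken as the correct continuation
outputs = model(inputs, labels=labels)
loss = outputs.loss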
AltCLIP
Overview
The AltCLIP model was proposed in AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu. AltCLIP
(Altering the Language Encoder in CLIP) is a neural network trained on a variety of image-text and text-text pairs. By switching CLIP’s
text encoder with a pretrained multilingual text encoder XLM-R, we could obtain performance very close to CLIP’s on almost all tasks, and extend the original CLIP’s capabilities to areas such as multilingual understanding.
The abstract from the paper is the following:
In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model.
Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained
multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of
teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art
performances on a bunch of tasks including ImageNet-CN, Flickr30k-CN, and COCO-CN. Further, we obtain very close performances with
CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.
Usage
The usage of AltCLIP is very similar to that of CLIP; the difference is the text encoder. Note that we use bidirectional attention instead of causal attention,
and we take the [CLS] token in XLM-R to represent the text embedding.
AltCLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image
classification. AltCLIP uses a ViT-like Transformer to get visual features and a bidirectional language model to get the text
features. Both the text and visual features are then projected to a latent space with identical dimension. The dot
product between the projected image and text features is then used as a similarity score.
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches,
which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image. The authors
also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder.
The CLIPImageProcessor can be used to resize (or rescale) and normalize images for the model.
The AltCLIPProcessor wraps a CLIPImageProcessor and a XLMRobertaTokenizer into a single instance to both
encode the text and prepare the images. The following example shows how to get the image-text similarity scores using
AltCLIPProcessor and AltCLIPModel.
from PIL import Image
import requests
from transformers import AltCLIPModel, AltCLIPProcessor
model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
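To turn these probabilities into a zero-shot prediction, one could simply pick the candidate caption with the highest score (a small illustrative continuation of the example above):
captions = ["a photo of a cat", "a photo of a dog"]
predicted_caption = captions[probs.argmax(dim=1).item()]  # index of the highest-scoring caption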
Tips:
This model is built on CLIPModel, so use it like the original CLIP.
This model was contributed by jongjyh.
AltCLIPConfig
class transformers.AltCLIPConfig
(
text_config = None
vision_config = None
projection_dim = 768
logit_scale_init_value = 2.6592
**kwargs
)
Parameters
text_config (dict, optional) —
Dictionary of configuration options used to initialize AltCLIPTextConfig.
vision_config (dict, optional) —
Dictionary of configuration options used to initialize AltCLIPVisionConfig.
projection_dim (int, optional, defaults to 768) —
Dimensionality of the text and vision projection layers.
logit_scale_init_value (float, optional, defaults to 2.6592) —
The initial value of the logit_scale parameter. Default is used as per the original CLIP implementation.
kwargs (optional) —
Dictionary of keyword arguments.
This is the configuration class to store the configuration of an AltCLIPModel. It is used to instantiate an
AltCLIP model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the AltCLIP
BAAI/AltCLIP architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import AltCLIPConfig, AltCLIPModel
# Initializing a AltCLIPConfig with BAAI/AltCLIP style configuration
configuration = AltCLIPConfig()
# Initializing a AltCLIPModel (with random weights) from the BAAI/AltCLIP style configuration
model = AltCLIPModel(configuration)
# Accessing the model configuration
configuration = model.config
# We can also initialize a AltCLIPConfig from a AltCLIPTextConfig and a AltCLIPVisionConfig
# Initializing a AltCLIPText and AltCLIPVision configuration
config_text = AltCLIPTextConfig()
config_vision = AltCLIPVisionConfig()
config = AltCLIPConfig.from_text_vision_configs(config_text, config_vision)
from_text_vision_configs
(
text_config: AltCLIPTextConfig
vision_config: AltCLIPVisionConfig
**kwargs
)
→
AltCLIPConfig
Returns
AltCLIPConfig
An instance of a configuration object
Instantiate an AltCLIPConfig (or a derived class) from an AltCLIP text model configuration and an AltCLIP vision
model configuration.
AltCLIPTextConfig
class transformers.AltCLIPTextConfig
(
vocab_size = 250002
hidden_size = 1024
num_hidden_layers = 24
num_attention_heads = 16
intermediate_size = 4096
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 514
type_vocab_size = 1
initializer_range = 0.02
initializer_factor = 0.02
layer_norm_eps = 1e-05
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
position_embedding_type = 'absolute'
use_cache = True
project_dim = 768
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 250002) —
Vocabulary size of the AltCLIP model. Defines the number of different tokens that can be represented by the
input_ids passed when calling AltCLIPTextModel.
hidden_size (int, optional, defaults to 1024) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 24) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 514) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 1) —
The vocabulary size of the token_type_ids passed when calling AltCLIPTextModel
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
project_dim (int, optional, defaults to 768) —
The dimension of the teacher model before the mapping layer.
This is the configuration class to store the configuration of an AltCLIPTextModel. It is used to instantiate a
AltCLIP text model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the AltCLIP
BAAI/AltCLIP architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import AltCLIPTextModel, AltCLIPTextConfig
# Initializing a AltCLIPTextConfig with BAAI/AltCLIP style configuration
configuration = AltCLIPTextConfig()
# Initializing a AltCLIPTextModel (with random weights) from the BAAI/AltCLIP style configuration
model = AltCLIPTextModel(configuration)
# Accessing the model configuration
configuration = model.config
AltCLIPVisionConfig
class transformers.AltCLIPVisionConfig
(
hidden_size = 768
intermediate_size = 3072
projection_dim = 512
num_hidden_layers = 12
num_attention_heads = 12
num_channels = 3
image_size = 224
patch_size = 32
hidden_act = 'quick_gelu'
layer_norm_eps = 1e-05
attention_dropout = 0.0
initializer_range = 0.02
initializer_factor = 1.0
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 32) —
The size (resolution) of each patch.
hidden_act (str or function, optional, defaults to "quick_gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" `"quick_gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (float, optional, defaults to 1.0) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
This is the configuration class to store the configuration of an AltCLIPModel. It is used to instantiate an
AltCLIP model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the AltCLIP
BAAI/AltCLIP architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import AltCLIPVisionConfig, AltCLIPVisionModel
# Initializing a AltCLIPVisionConfig with BAAI/AltCLIP style configuration
configuration = AltCLIPVisionConfig()
# Initializing a AltCLIPVisionModel (with random weights) from the BAAI/AltCLIP style configuration
model = AltCLIPVisionModel(configuration)
# Accessing the model configuration
configuration = model.config
AltCLIPProcessor
class transformers.AltCLIPProcessor
(
image_processor = None
tokenizer = None
**kwargs
)
Parameters
image_processor (CLIPImageProcessor) —
The image processor is a required input.
tokenizer (XLMRobertaTokenizerFast) —
The tokenizer is a required input.
Constructs an AltCLIP processor which wraps a CLIP image processor and an XLM-Roberta tokenizer into a single
processor.
AltCLIPProcessor offers all the functionalities of CLIPImageProcessor and XLMRobertaTokenizerFast. See
the __call__() and decode() for more information.
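As a rough usage sketch (reusing the BAAI/AltCLIP checkpoint and COCO image URL from the example above), a single processor call prepares both modalities, and batch_decode() maps token ids back to text:
from PIL import Image
import requests
from transformers import AltCLIPProcessor
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat"], images=image, return_tensors="pt", padding=True)
# input_ids / attention_mask come from the tokenizer, pixel_values from the image processor
texts_back = processor.batch_decode(inputs["input_ids"], skip_special_tokens=True)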
batch_decode
(
*args
**kwargs
)
This method forwards all its arguments to XLMRobertaTokenizerFast’s batch_decode().
Please refer to the docstring of this method for more information.
decode
(
*args
**kwargs
)
This method forwards all its arguments to XLMRobertaTokenizerFast’s decode(). Please
refer to the docstring of this method for more information.
AltCLIPModel
class transformers.AltCLIPModel
(
config: AltCLIPConfig
)
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
token_type_ids = None
return_loss: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.altclip.modeling_altclip.AltCLIPOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.call() for details.
return_loss (bool, optional) —
Whether or not to return the contrastive loss.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.altclip.modeling_altclip.AltCLIPOutput or tuple(torch.FloatTensor)
A transformers.models.altclip.modeling_altclip.AltCLIPOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.altclip.configuration_altclip.AltCLIPConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text
similarity scores.
logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image
similarity scores.
text_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of AltCLIPTextModel.
image_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of
AltCLIPVisionModel.
text_model_output (BaseModelOutputWithPooling) — The output of the AltCLIPTextModel.
vision_model_output (BaseModelOutputWithPooling) — The output of the AltCLIPVisionModel.
The AltCLIPModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, AltCLIPModel
model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
processor = AutoProcessor.from_pretrained("BAAI/AltCLIP")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
... )
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
get_text_features
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
token_type_ids = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
text_features (torch.FloatTensor of shape (batch_size, output_dim))
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
text_features (torch.FloatTensor of shape (batch_size, output_dim))
The text embeddings obtained by
applying the projection layer to the pooled output of AltCLIPTextModel.
The AltCLIPModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoProcessor, AltCLIPModel
model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
processor = AutoProcessor.from_pretrained("BAAI/AltCLIP")
inputs = processor(text=["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
get_image_features
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
image_features (torch.FloatTensor of shape (batch_size, output_dim))
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
image_features (torch.FloatTensor of shape (batch_size, output_dim))
The image embeddings obtained by
applying the projection layer to the pooled output of AltCLIPVisionModel.
The AltCLIPModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, AltCLIPModel
model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
processor = AutoProcessor.from_pretrained("BAAI/AltCLIP")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
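The two feature methods can be combined by hand to reproduce the similarity score described in the overview. The following is only a rough sketch, reusing model, processor and image_features from the examples above and ignoring the learned logit scale that AltCLIPModel applies internally:
text_inputs = processor(text=["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
text_features = model.get_text_features(**text_inputs)
# L2-normalize both embeddings and take the dot product
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
similarity = image_features @ text_features.T  # shape (num_images, num_texts)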
AltCLIPTextModel
class transformers.AltCLIPTextModel
(
config
)
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndProjection or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndProjection or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndProjection or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.altclip.configuration_altclip.AltCLIPTextConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
projection_state (tuple(torch.FloatTensor), returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor of shape (batch_size, config.project_dim).
Text embeddings before the projection layer, used to mimic the last hidden state of the teacher encoder.
The AltCLIPTextModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoProcessor, AltCLIPTextModel
model = AltCLIPTextModel.from_pretrained("BAAI/AltCLIP")
processor = AutoProcessor.from_pretrained("BAAI/AltCLIP")
texts = ["it's a cat", "it's a dog"]
inputs = processor(text=texts, padding=True, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled CLS states
AltCLIPVisionModel
class transformers.AltCLIPVisionModel
(
config: AltCLIPVisionConfig
)
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.altclip.configuration_altclip.AltCLIPVisionConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The AltCLIPVisionModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, AltCLIPVisionModel
model = AltCLIPVisionModel.from_pretrained("BAAI/AltCLIP")
processor = AutoProcessor.from_pretrained("BAAI/AltCLIP")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled CLS states
VAN
This model is in maintenance mode only, so we won’t accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0.
Overview
The VAN model was proposed in Visual Attention Network by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers. The latter uses a dilated convolution to capture distant correlations.
The abstract from the paper is the following:
While originally designed for natural language processing tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple, VAN outperforms the state-of-the-art vision transformers and convolutional neural networks with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc. Code is available at this https URL.
Tips:
VAN does not have an embedding layer, thus the hidden_states will have a length equal to the number of stages.
The figure below illustrates the architecture of a Visual Attention Layer. Taken from the original paper.
This model was contributed by Francesco. The original code can be found here.
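The large kernel attention idea described above can be sketched in a few lines of PyTorch. This is only an illustrative decomposition (a depth-wise convolution for local context, a depth-wise dilated convolution for distant context, and a 1x1 convolution for channel mixing, used to reweight the input), with assumed kernel sizes; it is not the VanModel source code:
import torch
from torch import nn

class LargeKernelAttentionSketch(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.dw_conv = nn.Conv2d(dim, dim, kernel_size=5, padding=2, groups=dim)  # local context
        self.dw_dilated = nn.Conv2d(dim, dim, kernel_size=7, padding=9, groups=dim, dilation=3)  # distant context
        self.pointwise = nn.Conv2d(dim, dim, kernel_size=1)  # channel mixing

    def forward(self, x):
        attention = self.pointwise(self.dw_dilated(self.dw_conv(x)))
        return attention * x  # reweight the input features with the learned attention map

features = torch.randn(1, 64, 56, 56)
out = LargeKernelAttentionSketch(64)(features)  # same shape as the input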
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with VAN.
Image Classification
VanForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
VanConfig
class transformers.VanConfig
(
image_size = 224
num_channels = 3
patch_sizes = [7, 3, 3, 3]
strides = [4, 2, 2, 2]
hidden_sizes = [64, 128, 320, 512]
depths = [3, 3, 12, 3]
mlp_ratios = [8, 8, 4, 4]
hidden_act = 'gelu'
initializer_range = 0.02
layer_norm_eps = 1e-06
layer_scale_init_value = 0.01
drop_path_rate = 0.0
dropout_rate = 0.0
**kwargs
)
Parameters
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
num_channels (int, optional, defaults to 3) —
The number of input channels.
patch_sizes (List[int], optional, defaults to [7, 3, 3, 3]) —
Patch size to use in each stage’s embedding layer.
strides (List[int], optional, defaults to [4, 2, 2, 2]) —
Stride size to use in each stage’s embedding layer to downsample the input.
hidden_sizes (List[int], optional, defaults to [64, 128, 320, 512]) —
Dimensionality (hidden size) at each stage.
depths (List[int], optional, defaults to [3, 3, 12, 3]) —
Depth (number of layers) for each stage.
mlp_ratios (List[int], optional, defaults to [8, 8, 4, 4]) —
The expansion ratio for mlp layer at each stage.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in each layer. If string, "gelu", "relu",
"selu" and "gelu_new" are supported.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-6) —
The epsilon used by the layer normalization layers.
layer_scale_init_value (float, optional, defaults to 1e-2) —
The initial value for layer scaling.
drop_path_rate (float, optional, defaults to 0.0) —
The dropout probability for stochastic depth.
dropout_rate (float, optional, defaults to 0.0) —
The dropout probability for dropout.
This is the configuration class to store the configuration of a VanModel. It is used to instantiate a VAN model
according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the VAN
Visual-Attention-Network/van-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import VanModel, VanConfig
# Initializing a VAN van-base style configuration
configuration = VanConfig()
# Initializing a model from the van-base style configuration
model = VanModel(configuration)
# Accessing the model configuration
configuration = model.config
VanModel
class transformers.VanModel
(
config
)
Parameters
config (VanConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare VAN model outputting raw features without any specific head on top. Note, VAN does not have an embedding layer.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor]
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ConvNextImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all stages. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (VanConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The VanModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, VanModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("Visual-Attention-Network/van-base")
model = VanModel.from_pretrained("Visual-Attention-Network/van-base")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 512, 7, 7]
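The per-stage feature maps described under hidden_states can be requested with output_hidden_states=True. A minimal self-contained sketch using the same checkpoint and image as above:
import torch
from datasets import load_dataset
from transformers import AutoImageProcessor, VanModel

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

image_processor = AutoImageProcessor.from_pretrained("Visual-Attention-Network/van-base")
model = VanModel.from_pretrained("Visual-Attention-Network/van-base")

inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# One feature map per stage, each of shape (batch_size, num_channels, height, width)
for stage, feature_map in enumerate(outputs.hidden_states):
    print(stage, tuple(feature_map.shape))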
VanForImageClassification
class transformers.VanForImageClassification
(
config
)
Parameters
config (VanConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
VAN Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ConvNextImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all stages. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (VanConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
The VanForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, VanForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("Visual-Attention-Network/van-base")
model = VanForImageClassification.from_pretrained("Visual-Attention-Network/van-base")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
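As described under labels, supplying a target class also returns a cross-entropy loss. A minimal sketch; the label index 281 ("tabby, tabby cat") is an assumption about the checkpoint's ImageNet-1k label space:
import torch
from datasets import load_dataset
from transformers import AutoImageProcessor, VanForImageClassification

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

image_processor = AutoImageProcessor.from_pretrained("Visual-Attention-Network/van-base")
model = VanForImageClassification.from_pretrained("Visual-Attention-Network/van-base")

inputs = image_processor(image, return_tensors="pt")
labels = torch.tensor([281])  # assumed index of "tabby, tabby cat" in ImageNet-1k
outputs = model(**inputs, labels=labels)
print(outputs.loss, outputs.logits.shape)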
MegatronBERT
Overview
The MegatronBERT model was proposed in Megatron-LM: Training Multi-Billion Parameter Language Models Using Model
Parallelism by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley,
Jared Casper and Bryan Catanzaro.
The abstract from the paper is the following:
Recent work in language modeling demonstrates that training large transformer models advances the state of the art in
Natural Language Processing applications. However, very large models can be quite difficult to train due to memory
constraints. In this work, we present our techniques for training very large transformer models and implement a simple,
efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our
approach does not require a new compiler or library changes, is orthogonal and complimentary to pipeline model
parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We
illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain
15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline
that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance
the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9
billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in
BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we
achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA
accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy
of 89.4%).
Tips:
We have provided pretrained BERT-345M checkpoints
to use for evaluation or for fine-tuning downstream tasks.
To access these checkpoints, first sign up for and set up the NVIDIA GPU Cloud (NGC)
Registry CLI. Further documentation for downloading models can be found in the NGC documentation.
Alternatively, you can directly download the checkpoints using:
BERT-345M-uncased:
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_uncased/zip -O megatron_bert_345m_v0_1_uncased.zip
BERT-345M-cased:
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_cased/zip -O megatron_bert_345m_v0_1_cased.zip
Once you have obtained the checkpoints from NVIDIA GPU Cloud (NGC), you have to convert them to a format that will
easily be loaded by Hugging Face Transformers and our port of the BERT code.
The following commands allow you to do the conversion. We assume that the folder models/megatron_bert contains
megatron_bert_345m_v0_1_{cased, uncased}.zip and that the commands are run from inside that folder:
python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_uncased.zip
python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_cased.zip
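Once converted, the weights can be loaded like any other local checkpoint. A minimal sketch, assuming the conversion script wrote config.json and pytorch_model.bin into the models/megatron_bert folder (adjust the path to the script's actual output location):
from transformers import MegatronBertForMaskedLM

# The path is an assumption; point it at wherever the conversion script saved its output.
model = MegatronBertForMaskedLM.from_pretrained("models/megatron_bert")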
This model was contributed by jdemouth. The original code can be found here. That repository contains a multi-GPU and multi-node implementation of the
Megatron Language models. In particular, it contains a hybrid model parallel approach using “tensor parallel” and
“pipeline parallel” techniques.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
MegatronBertConfig
class transformers.MegatronBertConfig
(
vocab_size = 29056
hidden_size = 1024
num_hidden_layers = 24
num_attention_heads = 16
intermediate_size = 4096
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
position_embedding_type = 'absolute'
use_cache = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 29056) —
Vocabulary size of the MEGATRON_BERT model. Defines the number of different tokens that can be represented
by the inputs_ids passed when calling MegatronBertModel.
hidden_size (int, optional, defaults to 1024) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 24) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling MegatronBertModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
This is the configuration class to store the configuration of a MegatronBertModel. It is used to instantiate a
MEGATRON_BERT model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the MEGATRON_BERT
nvidia/megatron-bert-uncased-345m architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import MegatronBertConfig, MegatronBertModel
# Initializing a MEGATRON_BERT bert-base-uncased style configuration
configuration = MegatronBertConfig()
# Initializing a model (with random weights) from the bert-base-uncased style configuration
model = MegatronBertModel(configuration)
# Accessing the model configuration
configuration = model.config
MegatronBertModel
class transformers.MegatronBertModel
(
config
add_pooling_layer = True
)
Parameters
config (MegatronBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MegatronBert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and
add_cross_attention set to True; encoder_hidden_states is then expected as an input to the forward pass.
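A minimal sketch of the decoder setup described above, using a randomly initialized model with default hyperparameters:
from transformers import MegatronBertConfig, MegatronBertModel

config = MegatronBertConfig(is_decoder=True, add_cross_attention=True)
decoder = MegatronBertModel(config)
# The model now contains cross-attention layers and expects `encoder_hidden_states`
# in its forward pass when used as the decoder of an encoder-decoder model.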
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MegatronBertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The MegatronBertModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MegatronBertModel
import torch
tokenizer = AutoTokenizer.from_pretrained("nvidia/megatron-bert-cased-345m")
model = MegatronBertModel.from_pretrained("nvidia/megatron-bert-cased-345m")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
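The last_hidden_state and pooler_output tensors described under Returns can be inspected directly. A small self-contained sketch reusing the same checkpoint:
import torch
from transformers import AutoTokenizer, MegatronBertModel

tokenizer = AutoTokenizer.from_pretrained("nvidia/megatron-bert-cased-345m")
model = MegatronBertModel.from_pretrained("nvidia/megatron-bert-cased-345m")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
print(outputs.pooler_output.shape)      # (batch_size, hidden_size)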
MegatronBertForMaskedLM
class transformers.MegatronBertForMaskedLM
(
config
)
Parameters
config (MegatronBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MegatronBert Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MegatronBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MegatronBertForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MegatronBertForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("nvidia/megatron-bert-cased-345m")
model = MegatronBertForMaskedLM.from_pretrained("nvidia/megatron-bert-cased-345m")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
MegatronBertForCausalLM
class transformers.MegatronBertForCausalLM
(
config
)
Parameters
config (MegatronBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MegatronBert Model with a language modeling head on top for CLM fine-tuning.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are
ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MegatronBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The MegatronBertForCausalLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MegatronBertForCausalLM, MegatronBertConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("nvidia/megatron-bert-cased-345m")
model = MegatronBertForCausalLM.from_pretrained("nvidia/megatron-bert-cased-345m", is_decoder=True)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
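As documented under labels, passing the input ids as labels yields the next-token prediction loss (the one-position shift between logits and labels is handled inside the model). A minimal sketch:
from transformers import AutoTokenizer, MegatronBertForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nvidia/megatron-bert-cased-345m")
model = MegatronBertForCausalLM.from_pretrained("nvidia/megatron-bert-cased-345m", is_decoder=True)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs.input_ids)
print(outputs.loss)  # left-to-right language modeling loss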
MegatronBertForNextSentencePrediction
class transformers.MegatronBertForNextSentencePrediction
(
config
)
Parameters
config (MegatronBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MegatronBert Model with a next sentence prediction (classification) head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair
(see input_ids docstring). Indices should be in [0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
Returns
transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.NextSentencePredictorOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MegatronBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when next_sentence_label is provided) — Next sequence prediction (classification) loss.
logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MegatronBertForNextSentencePrediction forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MegatronBertForNextSentencePrediction
import torch
tokenizer = AutoTokenizer.from_pretrained("nvidia/megatron-bert-cased-345m")
model = MegatronBertForNextSentencePrediction.from_pretrained("nvidia/megatron-bert-cased-345m")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
next_sentence = "The sky is blue due to the shorter wavelength of blue light."
encoding = tokenizer(prompt, next_sentence, return_tensors="pt")
outputs = model(**encoding, labels=torch.LongTensor([1]))
logits = outputs.logits
assert logits[0, 0] < logits[0, 1] # next sentence was random
MegatronBertForPreTraining
class transformers.MegatronBertForPreTraining
(
config
add_binary_head = True
)
Parameters
config (MegatronBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MegatronBert Model with two heads on top as done during the pretraining: a masked language modeling head and a
next sentence prediction (classification) head.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
next_sentence_label: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.megatron_bert.modeling_megatron_bert.MegatronBertForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
next_sentence_label (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair
(see input_ids docstring) Indices should be in [0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
kwargs (Dict[str, any], optional, defaults to {}) —
Used to hide legacy arguments that have been deprecated.
Returns
transformers.models.megatron_bert.modeling_megatron_bert.MegatronBertForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.megatron_bert.modeling_megatron_bert.MegatronBertForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MegatronBertConfig) and inputs.
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MegatronBertForPreTraining forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MegatronBertForPreTraining
import torch
tokenizer = AutoTokenizer.from_pretrained("nvidia/megatron-bert-cased-345m")
model = MegatronBertForPreTraining.from_pretrained("nvidia/megatron-bert-cased-345m")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.prediction_logits
seq_relationship_logits = outputs.seq_relationship_logits
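Both pretraining losses can be computed in a single pass by supplying labels and next_sentence_label as documented above. A minimal sketch with dummy targets, purely for illustration:
import torch
from transformers import AutoTokenizer, MegatronBertForPreTraining

tokenizer = AutoTokenizer.from_pretrained("nvidia/megatron-bert-cased-345m")
model = MegatronBertForPreTraining.from_pretrained("nvidia/megatron-bert-cased-345m")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
# Dummy targets: MLM labels equal to the inputs and a "sentence B is a continuation" (0) label
outputs = model(
    **inputs,
    labels=inputs.input_ids,
    next_sentence_label=torch.LongTensor([0]),
)
print(outputs.loss)  # sum of the masked LM loss and the next sentence prediction loss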
MegatronBertForSequenceClassification
class transformers.MegatronBertForSequenceClassification
(
config
)
Parameters
config (MegatronBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MegatronBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MegatronBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MegatronBertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, MegatronBertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("nvidia/megatron-bert-cased-345m")
model = MegatronBertForSequenceClassification.from_pretrained("nvidia/megatron-bert-cased-345m")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax().item()

# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = MegatronBertForSequenceClassification.from_pretrained("nvidia/megatron-bert-cased-345m", num_labels=num_labels)

labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, MegatronBertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("nvidia/megatron-bert-cased-345m")
model = MegatronBertForSequenceClassification.from_pretrained(
    "nvidia/megatron-bert-cased-345m", problem_type="multi_label_classification"
)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]

# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = MegatronBertForSequenceClassification.from_pretrained(
    "nvidia/megatron-bert-cased-345m", num_labels=num_labels, problem_type="multi_label_classification"
)

labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
MegatronBertForMultipleChoice
class transformers.MegatronBertForMultipleChoice
( config )
Parameters
config (MegatronBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MegatronBert Model with a multiple choice classification head on top (a linear layer on top of the pooled output
and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→ transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MegatronBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MegatronBertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MegatronBertForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("nvidia/megatron-bert-cased-345m")
model = MegatronBertForMultipleChoice.from_pretrained("nvidia/megatron-bert-cased-345m")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
MegatronBertForTokenClassification
class transformers.MegatronBertForTokenClassification
( config )
Parameters
config (MegatronBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MegatronBert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→ transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MegatronBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MegatronBertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MegatronBertForTokenClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("nvidia/megatron-bert-cased-345m")
model = MegatronBertForTokenClassification.from_pretrained("nvidia/megatron-bert-cased-345m")

inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_token_class_ids = logits.argmax(-1)

# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word.
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]

labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
MegatronBertForQuestionAnswering
class transformers.MegatronBertForQuestionAnswering
( config )
Parameters
config (MegatronBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MegatronBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→ transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MegatronBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MegatronBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MegatronBertForQuestionAnswering
import torch

tokenizer = AutoTokenizer.from_pretrained("nvidia/megatron-bert-cased-345m")
model = MegatronBertForQuestionAnswering.from_pretrained("nvidia/megatron-bert-cased-345m")

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]

# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])

outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
Pegasus
DISCLAIMER: If you see something strange, file a GitHub Issue
and assign @patrickvonplaten.
Overview
The Pegasus model was proposed in PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019.
According to the abstract,
Pegasus’ pretraining task is intentionally similar to summarization: important sentences are removed/masked from an
input document and are generated together as one output sequence from the remaining sentences, similar to an
extractive summary.
Pegasus achieves SOTA summarization performance on all 12 downstream tasks, as measured by ROUGE and human eval.
This model was contributed by sshleifer. The authors' code can be found here.
Tips:
Sequence-to-sequence model with the same encoder-decoder model architecture as BART. Pegasus is pre-trained jointly on two self-supervised objective functions: Masked Language Modeling (MLM) and a novel summarization specific pretraining objective, called Gap Sentence Generation (GSG).
MLM: encoder input tokens are randomly replaced by a mask token and have to be predicted by the encoder (like in BERT)
GSG: whole encoder input sentences are replaced by a second mask token and fed to the decoder, which has a causal mask to hide future words, like a regular auto-regressive transformer decoder (see the sketch below).
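The snippet below is a conceptual sketch of the GSG idea only, not the authors' pretraining code: one "important" sentence is replaced by the sentence-mask token <mask_1> in the input, and the decoder is trained to generate that sentence. The document and target strings are made up for illustration.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-large")
model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-large")

# One sentence of the document has been replaced by the sentence-mask token <mask_1>.
document = "PG&E scheduled blackouts in response to high winds. <mask_1> The shutoffs lasted through midday."
# The masked sentence becomes the decoder target, as in the GSG objective.
target = "Nearly 800 thousand customers were affected."

inputs = tokenizer(document, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss  # seq2seq cross-entropy on the masked sentence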
Checkpoints
All checkpoints are fine-tuned for summarization, except
pegasus-large, from which the other checkpoints are fine-tuned:
Each checkpoint is 2.2 GB on disk and 568M parameters.
FP16 is not supported (help/ideas on this appreciated!).
Summarizing xsum in fp32 takes about 400ms/sample, with default parameters on a v100 GPU.
Full replication results and correctly pre-processed data can be found in this Issue.
Distilled checkpoints are described in this paper.
Examples
Script to fine-tune pegasus
on the XSUM dataset. Data download instructions at examples/pytorch/summarization/.
FP16 is not supported (help/ideas on this appreciated!).
The adafactor optimizer is recommended for pegasus fine-tuning.
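As a sketch of that recommendation (not the fine-tuning script itself), the Adafactor implementation shipped with Transformers can be instantiated directly; the hyperparameters below are the library's suggested defaults for relative-step mode, not values from the PEGASUS paper.
from transformers import PegasusForConditionalGeneration
from transformers.optimization import Adafactor, AdafactorSchedule

model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-large")

# Adafactor with relative step sizes and warmup.
optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
    lr=None,
)
lr_scheduler = AdafactorSchedule(optimizer)
# The pair can then be handed to Trainer via optimizers=(optimizer, lr_scheduler).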
Implementation Notes
All models are transformer encoder-decoders with 16 layers in each component.
The implementation is completely inherited from BartForConditionalGeneration.
Some key configuration differences:
static, sinusoidal position embeddings
the model starts generating with pad_token_id (which has 0 token_embedding) as the prefix.
more beams are used (num_beams=8)
All pretrained pegasus checkpoints are the same besides three attributes: tokenizer.model_max_length (maximum
input size), max_length (the maximum number of tokens to generate) and length_penalty.
The code to convert checkpoints trained in the author’s repo can be
found in convert_pegasus_tf_to_pytorch.py.
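A quick way to see those three attributes (and the num_beams=8 default) for a given checkpoint is to load its config and tokenizer; the printed values depend on the checkpoint you pick, so treat this as an illustration rather than a reference.
from transformers import PegasusConfig, PegasusTokenizer

checkpoint = "google/pegasus-xsum"  # any Pegasus checkpoint works here
tokenizer = PegasusTokenizer.from_pretrained(checkpoint)
config = PegasusConfig.from_pretrained(checkpoint)

print(tokenizer.model_max_length)  # maximum input size
print(config.max_length)           # maximum number of tokens to generate
print(config.length_penalty)       # beam-search length penalty
print(config.num_beams)            # 8 beams by default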
Usage Example
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
import torch

src_text = [
    """ PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""
]

model_name = "google/pegasus-xsum"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device)
batch = tokenizer(src_text, truncation=True, padding="longest", return_tensors="pt").to(device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
assert (
    tgt_text[0]
    == "California's largest electricity provider has turned off power to hundreds of thousands of customers."
)
Documentation resources
Causal language modeling task guide
Translation task guide
Summarization task guide
PegasusConfig
class transformers.PegasusConfig
(
vocab_size = 50265
max_position_embeddings = 1024
encoder_layers = 12
encoder_ffn_dim = 4096
encoder_attention_heads = 16
decoder_layers = 12
decoder_ffn_dim = 4096
decoder_attention_heads = 16
encoder_layerdrop = 0.0
decoder_layerdrop = 0.0
use_cache = True
is_encoder_decoder = True
activation_function = 'gelu'
d_model = 1024
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
decoder_start_token_id = 0
scale_embedding = False
pad_token_id = 0
eos_token_id = 1
forced_eos_token_id = 1
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50265) —
Vocabulary size of the PEGASUS model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling PegasusModel or TFPegasusModel.
d_model (int, optional, defaults to 1024) —
Dimensionality of the layers and the pooler layer.
encoder_layers (int, optional, defaults to 12) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 12) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in decoder.
encoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in encoder.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
max_position_embeddings (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
scale_embedding (bool, optional, defaults to False) —
Scale embeddings by dividing by sqrt(d_model).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models)
forced_eos_token_id (int, optional, defaults to 1) —
The id of the token to force as the last generated token when max_length is reached. Usually set to
eos_token_id.
This is the configuration class to store the configuration of a PegasusModel. It is used to instantiate a
PEGASUS model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the PEGASUS
google/pegasus-large architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import PegasusConfig, PegasusModel
# Initializing a PEGASUS google/pegasus-large style configuration
configuration = PegasusConfig()
# Initializing a model (with random weights) from the google/pegasus-large style configuration
model = PegasusModel(configuration)
# Accessing the model configuration
configuration = model.config
PegasusTokenizer
Warning: add_tokens does not work at the moment.
class transformers.PegasusTokenizer
(
vocab_file
pad_token = '<pad>'
eos_token = '</s>'
unk_token = '<unk>'
mask_token = '<mask_2>'
mask_token_sent = '<mask_1>'
additional_special_tokens = None
offset = 103
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
mask_token (str, optional, defaults to "<mask_2>") —
The token used for masking single token values. This is the token used when training this model with masked
language modeling (MLM). This is the token that the PEGASUS encoder will try to predict during pretraining.
It corresponds to [MASK2] in PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive
Summarization.
mask_token_sent (str, optional, defaults to "<mask_1>") —
The token used for masking whole target sentences. This is the token used when training this model with gap
sentences generation (GSG). This is the sentence that the PEGASUS decoder will try to predict during
pretraining. It corresponds to [MASK1] in PEGASUS: Pre-training with Extracted Gap-sentences for
Abstractive Summarization.
additional_special_tokens (List[str], optional) —
Additional special tokens used by the tokenizer. If no additional_special_tokens are provided, <mask_2> and
<unk_2, ..., unk_102> are used as additional special tokens corresponding to the original PEGASUS
tokenizer, which uses the tokens 2 - 104 only for pretraining.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assumes that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Construct a PEGASUS tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
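For instance, sp_model_kwargs can be used to turn on SentencePiece subword regularization at load time. This is a hedged sketch of the pass-through behavior described above; the sampling values are arbitrary.
from transformers import PegasusTokenizer

tokenizer = PegasusTokenizer.from_pretrained(
    "google/pegasus-xsum",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)
# With sampling enabled, repeated calls may return different segmentations of the same text.
print(tokenizer.tokenize("Subword regularization samples different segmentations."))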
build_inputs_with_special_tokens
( token_ids_0, token_ids_1 = None ) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating
and adding special tokens. A PEGASUS sequence has the following format, where X represents the sequence:
single sequence: X </s>
pair of sequences: A B </s> (not intended use)
BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a
separator.
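A minimal sketch of that behavior, assuming the google/pegasus-xsum checkpoint: only the EOS token is appended, and nothing is prepended.
from transformers import PegasusTokenizer

tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")

ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
with_special = tokenizer.build_inputs_with_special_tokens(ids)

assert with_special == ids + [tokenizer.eos_token_id]  # single sequence: X </s>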
convert_tokens_to_string
( tokens )
Converts a sequence of tokens (strings) into a single string.
get_special_tokens_mask
( token_ids_0: typing.List, token_ids_1: typing.Optional[typing.List] = None, already_has_special_tokens: bool = False )
Get a list where entries are 1 if a token is [eos] or [pad], else 0.
num_special_tokens_to_add
( pair = False )
Just EOS
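A short sketch tying the two helpers above together (again assuming the google/pegasus-xsum checkpoint): a single special token (EOS) is added per sequence, and the special-tokens mask marks only that position.
from transformers import PegasusTokenizer

tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")

print(tokenizer.num_special_tokens_to_add())  # 1 -- only </s> is appended

ids = tokenizer("Hello world").input_ids
print(tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True))
# e.g. [0, 0, 1] -- the 1 marks the trailing </s>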
PegasusTokenizerFast
class transformers.PegasusTokenizerFast
(
vocab_file = None
tokenizer_file = None
pad_token = '<pad>'
eos_token = '</s>'
unk_token = '<unk>'
mask_token = '<mask_2>'
mask_token_sent = '<mask_1>'
additional_special_tokens = None
offset = 103
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
mask_token (str, optional, defaults to "<mask_2>") —
The token used for masking single token values. This is the token used when training this model with masked
language modeling (MLM). This is the token that the PEGASUS encoder will try to predict during pretraining.
It corresponds to [MASK2] in PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive
Summarization.
mask_token_sent (str, optional, defaults to "<mask_1>") —
The token used for masking whole target sentences. This is the token used when training this model with gap
sentences generation (GSG). This is the sentence that the PEGASUS decoder will try to predict during
pretraining. It corresponds to [MASK1] in PEGASUS: Pre-training with Extracted Gap-sentences for
Abstractive Summarization.
additional_special_tokens (List[str], optional) —
Additional special tokens used by the tokenizer. If no additional_special_tokens are provided, <mask_2> and
<unk_2, ..., unk_102> are used as additional special tokens corresponding to the original PEGASUS
tokenizer, which uses the tokens 2 - 104 only for pretraining.
Construct a “fast” PEGASUS tokenizer (backed by HuggingFace’s tokenizers library). Based on
Unigram.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
( token_ids_0, token_ids_1 = None ) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence by adding EOS to the end. No BOS token is added to the front.
single sequence: X </s>
pair of sequences: A B </s> (not intended use)
get_special_tokens_mask
( token_ids_0: typing.List, token_ids_1: typing.Optional[typing.List] = None, already_has_special_tokens: bool = False )
Get a list where entries are 1 if a token is [eos] or [pad], else 0.
PegasusModel
class transformers.PegasusModel
( config: PegasusConfig )
Parameters
config (PegasusConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare PEGASUS Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.Tensor] = None
decoder_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→ transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
Pegasus uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PegasusConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The PegasusModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, PegasusModel
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
model = PegasusModel.from_pretrained("google/pegasus-large")
inputs = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt")
decoder_inputs = tokenizer("Studies show that", return_tensors="pt")
outputs = model(input_ids=inputs.input_ids, decoder_input_ids=decoder_inputs.input_ids)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 4, 1024]
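One detail from the decoder_input_ids description above is worth making concrete: Pegasus starts decoding from pad_token_id. Assuming the checkpoint keeps the PegasusConfig defaults listed earlier (pad_token_id = 0, decoder_start_token_id = 0), this can be checked directly on the model loaded above.
print(model.config.decoder_start_token_id)  # 0 -- decoding starts from the pad token
print(model.config.pad_token_id)            # 0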
PegasusForConditionalGeneration
class transformers.PegasusForConditionalGeneration
( config: PegasusConfig )
Parameters
config (PegasusConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The PEGASUS Model with a language modeling head. Can be used for summarization.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.Tensor] = None
decoder_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→ transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
Pegasus uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PegasusConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The PegasusForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Summarization example:
from transformers import AutoTokenizer, PegasusForConditionalGeneration
model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum")
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-xsum")
ARTICLE_TO_SUMMARIZE = (
    "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
    "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were "
    "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
)
inputs = tokenizer(ARTICLE_TO_SUMMARIZE, max_length=1024, return_tensors="pt")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"])
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
# "California's largest electricity provider has turned off power to hundreds of thousands of customers."
PegasusForCausalLM
class transformers.PegasusForCausalLM
(
config
)
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of
shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of
shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional
tensors are only required when the model is used as a decoder in a Sequence to Sequence model.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those
that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of
all decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding
(see past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PegasusConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
from transformers import AutoTokenizer, PegasusForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
model = PegasusForCausalLM.from_pretrained("google/pegasus-large", add_cross_attention=False)
assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
expected_shape = [1, inputs.input_ids.shape[-1], model.config.vocab_size]
assert list(logits.shape) == expected_shape
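The use_cache and past_key_values parameters documented above enable incremental decoding. A minimal sketch of that pattern, reusing the checkpoint from the example; the greedy next-token choice is purely illustrative:
import torch
from transformers import AutoTokenizer, PegasusForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
model = PegasusForCausalLM.from_pretrained("google/pegasus-large", add_cross_attention=False)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

# First pass over the whole prompt; ask for the key/value cache.
with torch.no_grad():
    outputs = model(**inputs, use_cache=True)
past_key_values = outputs.past_key_values

# On the next step, only the newly chosen token needs to be fed together with the cache.
next_token = outputs.logits[:, -1:].argmax(dim=-1)
with torch.no_grad():
    outputs = model(input_ids=next_token, past_key_values=past_key_values, use_cache=True)
print(outputs.logits.shape)  # (batch_size, 1, vocab_size)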
TFPegasusModel
class transformers.TFPegasusModel
(
*args
**kwargs
)
Parameters
config (PegasusConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare PEGASUS Model outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
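As a concrete sketch of the three formats listed above (using an assumed google/pegasus-large checkpoint; token_type_ids is omitted since PEGASUS does not use it), the following calls are equivalent ways of invoking a TF PEGASUS model:
from transformers import AutoTokenizer, TFPegasusModel

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
model = TFPegasusModel.from_pretrained("google/pegasus-large")
encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. Keyword arguments, as with the PyTorch models.
out1 = model(input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"])

# 2. A list with the tensors in the order given in the docstring, packed into the first argument.
out2 = model([encoding["input_ids"], encoding["attention_mask"]])

# 3. A dictionary mapping input names to tensors, packed into the first argument.
out3 = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})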
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
decoder_input_ids: np.ndarray | tf.Tensor | None = None
decoder_attention_mask: np.ndarray | tf.Tensor | None = None
decoder_position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
decoder_head_mask: np.ndarray | tf.Tensor | None = None
cross_attn_head_mask: np.ndarray | tf.Tensor | None = None
encoder_outputs: Optional[Union[Tuple, TFBaseModelOutput]] = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
decoder_inputs_embeds: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
**kwargs
)
→
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Pegasus uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
If not provided, a default mask that ignores pad tokens will be created. Setting this manually is not recommended for most use cases.
decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) —
Sequence of hidden-states at the output of the last layer of the encoder, of shape (batch_size, sequence_length, hidden_size). Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (PegasusConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFPegasusModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFPegasusModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
model = TFPegasusModel.from_pretrained("google/pegasus-large")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
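Since PEGASUS uses pad_token_id as the decoder start token (see decoder_input_ids above), the decoder prompt can also be built by hand. A minimal sketch, assuming a one-token decoder prompt:
import tensorflow as tf
from transformers import AutoTokenizer, TFPegasusModel

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
model = TFPegasusModel.from_pretrained("google/pegasus-large")

inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")

# A one-token decoder prompt consisting of the pad token, which PEGASUS uses as the decoder start token.
decoder_input_ids = tf.ones_like(inputs["input_ids"][:, :1]) * model.config.pad_token_id

outputs = model(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    decoder_input_ids=decoder_input_ids,
)
print(outputs.last_hidden_state.shape)  # (batch_size, 1, hidden_size)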
TFPegasusForConditionalGeneration
class transformers.TFPegasusForConditionalGeneration
(
*args
**kwargs
)
Parameters
config (PegasusConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The PEGASUS Model with a language modeling head. Can be used for summarization.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
decoder_input_ids: np.ndarray | tf.Tensor | None = None
decoder_attention_mask: np.ndarray | tf.Tensor | None = None
decoder_position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
decoder_head_mask: np.ndarray | tf.Tensor | None = None
cross_attn_head_mask: np.ndarray | tf.Tensor | None = None
encoder_outputs: Optional[TFBaseModelOutput] = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
decoder_inputs_embeds: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Pegasus uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
If not provided, a default mask that ignores pad tokens will be created. Setting this manually is not recommended for most use cases.
decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) —
Sequence of hidden-states at the output of the last layer of the encoder, of shape (batch_size, sequence_length, hidden_size). Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (PegasusConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFPegasusForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Summarization example:
from transformers import AutoTokenizer, TFPegasusForConditionalGeneration
model = TFPegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum")
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-xsum")
ARTICLE_TO_SUMMARIZE = (
    "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
    "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were "
    "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
)
inputs = tokenizer(ARTICLE_TO_SUMMARIZE, max_length=1024, return_tensors="tf")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"])
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False))
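As with the PyTorch model, passing labels returns the language modeling loss. A minimal sketch, assuming an illustrative article/summary pair; per the return description above, the TF loss is a per-label tensor, so it is averaged here:
import tensorflow as tf
from transformers import AutoTokenizer, TFPegasusForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-xsum")
model = TFPegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum")

article = "PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions."
summary = "California's largest electricity provider has turned off power to hundreds of thousands of customers."

inputs = tokenizer(article, return_tensors="tf")
labels = tokenizer(summary, return_tensors="tf").input_ids

# Passing labels makes the call return the language modeling loss alongside the logits.
outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], labels=labels)
print(tf.reduce_mean(outputs.loss), outputs.logits.shape)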
FlaxPegasusModel
class transformers.FlaxPegasusModel
(
config: PegasusConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (PegasusConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified, all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
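A minimal sketch of the dtype behaviour described above: the computation dtype is chosen at load time, and the parameters can additionally be cast with to_bf16() (checkpoint name is illustrative):
import jax.numpy as jnp
from transformers import FlaxPegasusModel

# Run the computation in bfloat16; the parameters are still stored in float32.
model = FlaxPegasusModel.from_pretrained("google/pegasus-large", dtype=jnp.bfloat16)

# Optionally cast the parameters to bfloat16 as well (see to_bf16() above).
model.params = model.to_bf16(model.params)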
The bare Pegasus Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
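As a brief sketch of the JIT support listed above, the model call can be wrapped in jax.jit; the parameters are closed over and baked in as constants at compile time (checkpoint and inputs are illustrative):
import jax
from transformers import AutoTokenizer, FlaxPegasusModel

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
model = FlaxPegasusModel.from_pretrained("google/pegasus-large")

@jax.jit
def run(input_ids, attention_mask):
    # decoder_input_ids are derived from input_ids when not supplied, as in the example further below.
    return model(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state

inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
hidden = run(inputs["input_ids"], inputs["attention_mask"])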
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PegasusConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxPegasusPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxPegasusModel
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
model = FlaxPegasusModel.from_pretrained("google/pegasus-large")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PegasusConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxPegasusForConditionalGeneration
model = FlaxPegasusForConditionalGeneration.from_pretrained("google/pegasus-large")
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="np")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
encoder_outputs (tuple(tuple(jnp.ndarray))) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state, of shape (batch_size, sequence_length, hidden_size) and optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PegasusConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
Example:
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxPegasusForConditionalGeneration
model = FlaxPegasusForConditionalGeneration.from_pretrained("google/pegasus-large")
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="np")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
last_decoder_hidden_states = outputs.last_hidden_state
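The encode/decode round trip above is essentially what generation does internally. A minimal sketch that drives the same (illustrative) checkpoint end to end through generate(), provided by FlaxGenerationMixin:
from transformers import AutoTokenizer, FlaxPegasusForConditionalGeneration

model = FlaxPegasusForConditionalGeneration.from_pretrained("google/pegasus-large")
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")

inputs = tokenizer("My friends are cool but they eat too many carbs.", return_tensors="np")

# generate() encodes once, then decodes step by step, reusing the key/value cache.
generated_ids = model.generate(inputs["input_ids"], attention_mask=inputs["attention_mask"]).sequences
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))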
FlaxPegasusForConditionalGeneration
class transformers.FlaxPegasusForConditionalGeneration
(
config: PegasusConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (PegasusConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified, all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The PEGASUS Model with a language modeling head. Can be used for summarization.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PegasusConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxPegasusPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Summarization example:
from transformers import AutoTokenizer, FlaxPegasusForConditionalGeneration
model = FlaxPegasusForConditionalGeneration.from_pretrained('google/pegasus-large')
tokenizer = AutoTokenizer.from_pretrained('google/pegasus-large')
ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='np')
# Generate Summary
summary_ids = model.generate(inputs['input_ids']).sequences
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False))
Mask filling example:
import jax
from transformers import AutoTokenizer, FlaxPegasusForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
TXT = "My friends are <mask> but they eat too many carbs."
model = FlaxPegasusForConditionalGeneration.from_pretrained("google/pegasus-large")
input_ids = tokenizer([TXT], return_tensors="np")["input_ids"]
logits = model(input_ids).logits
# locate the masked position; for NumPy inputs, nonzero() returns a tuple of index arrays
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()
probs = jax.nn.softmax(logits[0, masked_index], axis=0)
# jax.lax.top_k needs an explicit k; keep the 5 most likely candidate tokens
values, predictions = jax.lax.top_k(probs, k=5)
tokenizer.decode(predictions).split()
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PegasusConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxPegasusForConditionalGeneration
model = FlaxPegasusForConditionalGeneration.from_pretrained("google/pegasus-large")
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="np")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
deterministic: bool = True
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
encoder_outputs (tuple(tuple(jnp.ndarray))) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify to your needs. See diagram 1 in the
paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (PegasusConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value
states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting.
Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxPegasusForConditionalGeneration
model = FlaxPegasusForConditionalGeneration.from_pretrained("google/pegasus-large")
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-large")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="np")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
logits = outputs.logits
MVP
Overview
The MVP model was proposed in MVP: Multi-task Supervised Pre-training for Natural Language Generation by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
According to the abstract,
MVP follows a standard Transformer encoder-decoder architecture.
MVP is supervised pre-trained using labeled datasets.
MVP also has task-specific soft prompts to stimulate the model’s capacity in performing a certain task.
MVP is specially designed for natural language generation and can be adapted to a wide range of generation tasks, including but not limited to summarization, data-to-text generation, open-ended dialogue system, story generation, question answering, question generation, task-oriented dialogue system, commonsense generation, paraphrase generation, text style transfer, and text simplification. Our model can also be adapted to natural language understanding tasks such as sequence classification and (extractive) question answering.
Tips:
We have released a series of models here, including MVP, MVP with task-specific prompts, and multi-task pre-trained variants.
If you want to use a model without prompts (standard Transformer), you can load it through MvpForConditionalGeneration.from_pretrained('RUCAIBox/mvp').
If you want to use a model with task-specific prompts, such as summarization, you can load it through MvpForConditionalGeneration.from_pretrained('RUCAIBox/mvp-summarization').
Our model supports lightweight prompt tuning following Prefix-tuning, using the method set_lightweight_tuning().
This model was contributed by Tianyi Tang. The detailed information and instructions can be found here.
Examples
For summarization, below is an example of using MVP and MVP with summarization-specific prompts.
from transformers import MvpTokenizer, MvpForConditionalGeneration
tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")
model_with_prompt = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-summarization")
inputs = tokenizer(
... "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
... return_tensors="pt",
... )
generated_ids = model.generate(**inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Why You Shouldn't Quit Your Job"]
generated_ids = model_with_prompt.generate(**inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Don't do it if these are your reasons"]
For data-to-text generation, below is an example of using MVP and multi-task pre-trained variants.
from transformers import MvpTokenizerFast, MvpForConditionalGeneration
tokenizer = MvpTokenizerFast.from_pretrained("RUCAIBox/mvp")
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")
model_with_mtl = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-data-to-text")
inputs = tokenizer(
... "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man",
... return_tensors="pt",
... )
generated_ids = model.generate(**inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Stan Lee created the character of Iron Man, a fictional superhero appearing in American comic']
generated_ids = model_with_mtl.generate(**inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Iron Man is a fictional superhero appearing in American comic books published by Marvel Comics.']
For lightweight tuning, i.e., fixing the model and only tuning prompts, you can load MVP with randomly initialized prompts or with task-specific prompts. Our code also supports Prefix-tuning with BART following the original paper.
from transformers import MvpForConditionalGeneration
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp", use_prompt=True)
# the number of trainable parameters (full tuning)
sum(p.numel() for p in model.parameters() if p.requires_grad)
468116832
# lightweight tuning with randomly initialized prompts
model.set_lightweight_tuning()
# the number of trainable parameters (lightweight tuning)
sum(p.numel() for p in model.parameters() if p.requires_grad)
61823328
# lightweight tuning with task-specific prompts
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-data-to-text")
model.set_lightweight_tuning()
# original lightweight Prefix-tuning
model = MvpForConditionalGeneration.from_pretrained("facebook/bart-large", use_prompt=True)
model.set_lightweight_tuning()
Documentation resources
Text classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Translation task guide
Summarization task guide
MvpConfig
class transformers.MvpConfig
(
vocab_size = 50267
max_position_embeddings = 1024
encoder_layers = 12
encoder_ffn_dim = 4096
encoder_attention_heads = 16
decoder_layers = 12
decoder_ffn_dim = 4096
decoder_attention_heads = 16
encoder_layerdrop = 0.0
decoder_layerdrop = 0.0
activation_function = 'gelu'
d_model = 1024
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
classifier_dropout = 0.0
scale_embedding = False
use_cache = True
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
is_encoder_decoder = True
decoder_start_token_id = 2
forced_eos_token_id = 2
use_prompt = False
prompt_length = 100
prompt_mid_dim = 800
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50267) —
Vocabulary size of the MVP model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling MvpModel.
d_model (int, optional, defaults to 1024) —
Dimensionality of the layers and the pooler layer.
encoder_layers (int, optional, defaults to 12) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 12) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in decoder.
encoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in encoder.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
classifier_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the classifier.
max_position_embeddings (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
scale_embedding (bool, optional, defaults to False) —
Scale embeddings by dividing by sqrt(d_model).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
forced_eos_token_id (int, optional, defaults to 2) —
The id of the token to force as the last generated token when max_length is reached. Usually set to
eos_token_id.
use_prompt (bool, optional, defaults to False) —
Whether or not to use prompts.
prompt_length (int, optional, defaults to 100) —
The length of the prompt.
prompt_mid_dim (int, optional, defaults to 800) —
Dimensionality of the “intermediate” layer in the prompt.
This is the configuration class to store the configuration of a MvpModel. It is used to instantiate a MVP model
according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the MVP RUCAIBox/mvp
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import MvpConfig, MvpModel
# Initializing a MVP RUCAIBox/mvp style configuration
configuration = MvpConfig()
# Initializing a model (with random weights) from the RUCAIBox/mvp style configuration
model = MvpModel(configuration)
# Accessing the model configuration
configuration = model.config
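The prompt-related arguments can be set the same way. A small sketch (using the documented defaults for the prompt sizes):
from transformers import MvpConfig, MvpModel
# Configuration with task-specific soft prompts enabled
configuration = MvpConfig(use_prompt=True, prompt_length=100, prompt_mid_dim=800)
model = MvpModel(configuration)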
MvpTokenizer
class transformers.MvpTokenizer
(
vocab_file
merges_file
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows to treat the leading word just as any
other word. (The MVP tokenizer detects the beginning of words by the preceding space.)
Constructs an MVP tokenizer, which is similar to the RoBERTa tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently whether it is at the beginning of the sentence (without space) or not:
from transformers import MvpTokenizer
tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).
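For instance, a brief sketch (not part of the original examples) of enabling the prefix space at instantiation time:
from transformers import MvpTokenizer
# add a leading space to the input so the first word is tokenized like any other word
tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp", add_prefix_space=True)
tokenizer("Hello world")["input_ids"]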
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. An MVP sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
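A short sketch illustrating the two formats above (the example texts are arbitrary):
from transformers import MvpTokenizer
tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
ids_b = tokenizer.encode("How are you?", add_special_tokens=False)
# single sequence: <s> X </s>
tokenizer.build_inputs_with_special_tokens(ids_a)
# pair of sequences: <s> A </s></s> B </s>
tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)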
convert_tokens_to_string
(
tokens
)
Converts a sequence of tokens (string) into a single string.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. MVP does not
make use of token type ids, therefore a list of zeros is returned.
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
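A brief usage sketch (assuming an already-encoded sequence that includes the special tokens):
from transformers import MvpTokenizer
tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
ids = tokenizer("Hello world")["input_ids"]
# 1 marks special tokens (<s> and </s>), 0 marks regular sequence tokens
tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)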
MvpTokenizerFast
class transformers.MvpTokenizerFast
(
vocab_file = None
merges_file = None
tokenizer_file = None
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = False
trim_offsets = True
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows to treat the leading word just as any
other word. (The MVP tokenizer detects the beginning of words by the preceding space.)
trim_offsets (bool, optional, defaults to True) —
Whether the post processing step should trim offsets to avoid including whitespaces.
Construct a “fast” MVP tokenizer (backed by HuggingFace’s tokenizers library), derived from the GPT-2 tokenizer,
using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently whether it is at the beginning of the sentence (without space) or not:
from transformers import MvpTokenizerFast
tokenizer = MvpTokenizerFast.from_pretrained("RUCAIBox/mvp")
tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. MVP does not
make use of token type ids, therefore a list of zeros is returned.
MvpModel
class transformers.MvpModel
(
config: MvpConfig
)
Parameters
config (MvpConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare MVP Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
Mvp uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read modeling_mvp._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more
information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MvpConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The MvpModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MvpModel
import torch
tokenizer = AutoTokenizer.from_pretrained("RUCAIBox/mvp")
model = MvpModel.from_pretrained("RUCAIBox/mvp")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
MvpForConditionalGeneration
class transformers.MvpForConditionalGeneration
(
config: MvpConfig
)
Parameters
config (MvpConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The MVP Model with a language modeling head. Can be used for various text generation tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
Mvp uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read modeling_mvp._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more
information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MvpConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The MvpForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example of summarization:
Fine-tuning a model
import torch
from transformers import AutoTokenizer, MvpForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("RUCAIBox/mvp")
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")
inputs = tokenizer(
... "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
... return_tensors="pt",
... )
labels = tokenizer("Bad Reasons To Quit Your Job", return_tensors="pt")["input_ids"]
loss = model(**inputs, labels=labels).loss
loss.backward()
Inference after fine-tuning the model
with torch.no_grad():
... generated_ids = model.generate(**inputs)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
MvpForSequenceClassification
class transformers.MvpForSequenceClassification
<
source
>
(
config: MvpConfig
**kwargs
)
Parameters
config (MvpConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
Mvp model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for GLUE
tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
Mvp uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read modeling_mvp._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more
information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the model’s
internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
The MvpForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
Fine-tuning a model on num_labels classes
import torch
from transformers import AutoTokenizer, MvpForSequenceClassification
num_labels = 2 # for example, this is a binary classification task
tokenizer = AutoTokenizer.from_pretrained("RUCAIBox/mvp")
model = MvpForSequenceClassification.from_pretrained("RUCAIBox/mvp", num_labels=num_labels)
inputs = tokenizer("Classify: Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor(1) # the real label for inputs
loss = model(**inputs, labels=labels).loss
loss.backward()
Inference after fine-tuning the model
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax()
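Optionally, the predicted class id can be mapped back to a label name through the model config. This is a small sketch, not part of the original example; for the freshly initialized two-label head above, id2label only contains the generic LABEL_0/LABEL_1 placeholders:
predicted_class = int(predicted_class_id)  # convert the 0-dim tensor to a plain int
print(model.config.id2label[predicted_class])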
MvpForQuestionAnswering
class transformers.MvpForQuestionAnswering
<
source
>
(
config
)
Parameters
config (MvpConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
MVP Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer
on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: Tensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
Mvp uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read modeling_mvp._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more
information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the model’s
internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
The MvpForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
Fine-tuning a model for extractive question answering; the model also supports generative question answering
using MvpForConditionalGeneration (see the sketch after this example)
import torch
from transformers import AutoTokenizer, MvpForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("RUCAIBox/mvp")
model = MvpForQuestionAnswering.from_pretrained("RUCAIBox/mvp")
inputs = tokenizer(
... "Answer the following question: Who was Jim Henson? [SEP] Jim Henson was a nice puppet",
... return_tensors="pt",
... )
target_start_index = torch.tensor([18])
target_end_index = torch.tensor([19])
loss = model(**inputs, start_positions=target_start_index, end_positions=target_end_index).loss
loss.backward()
Inference after fine-tuning the model
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
predict_answer = tokenizer.decode(predict_answer_tokens)
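As noted above, generative question answering is also possible with MvpForConditionalGeneration. The following is a small sketch, not part of the original example, that reuses inputs and tokenizer from above; max_new_tokens is an arbitrary illustrative value:
from transformers import MvpForConditionalGeneration
generative_model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")
with torch.no_grad():
...     generated_ids = generative_model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))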
MvpForCausalLM
class transformers.MvpForCausalLM
<
source
>
(
config
)
forward
<
source
>
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of
shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of
shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional
tensors are only required when the model is used as a decoder in a Sequence to Sequence model.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those
that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of
all decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding
(see past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MvpConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attention weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if the model is used in an encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
from transformers import AutoTokenizer, MvpForCausalLM
tokenizer = AutoTokenizer.from_pretrained("RUCAIBox/mvp")
model = MvpForCausalLM.from_pretrained("RUCAIBox/mvp", add_cross_attention=False)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
list(logits.shape)
[1, 8, 50267]
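Because MvpForCausalLM is a standalone decoder, it can also generate text autoregressively with generate(). A minimal sketch, not part of the original example (max_new_tokens is an arbitrary illustrative value):
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))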
MobileViT
Overview
The MobileViT model was proposed in MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer by Sachin Mehta and Mohammad Rastegari. MobileViT introduces a new layer that replaces local processing in convolutions with global processing using transformers.
The abstract from the paper is the following:
Light-weight convolutional neural networks (CNNs) are the de-facto for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision transformers (ViTs) have been adopted. Unlike CNNs, ViTs are heavy-weight. In this paper, we ask the following question: is it possible to combine the strengths of CNNs and ViTs to build a light-weight and low latency network for mobile vision tasks? Towards this end, we introduce MobileViT, a light-weight and general-purpose vision transformer for mobile devices. MobileViT presents a different perspective for the global processing of information with transformers, i.e., transformers as convolutions. Our results show that MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeIT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of parameters.
Tips:
MobileViT is more like a CNN than a Transformer model. It does not work on sequence data but on batches of images. Unlike ViT, there are no embeddings. The backbone model outputs a feature map. You can follow this tutorial for a lightweight introduction.
One can use MobileViTImageProcessor to prepare images for the model (see the short preprocessing sketch after these tips). Note that if you do your own preprocessing, the pretrained checkpoints expect images to be in BGR pixel order (not RGB).
The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes).
The segmentation model uses a DeepLabV3 head. The available semantic segmentation checkpoints are pre-trained on PASCAL VOC.
As the name suggests MobileViT was designed to be performant and efficient on mobile phones. The TensorFlow versions of the MobileViT models are fully compatible with TensorFlow Lite.
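For reference, here is a minimal preprocessing sketch with MobileViTImageProcessor; the example image URL is the same COCO image used further down this page, and the processor handles resizing, rescaling, center-cropping and the RGB-to-BGR channel flip mentioned above:
import requests
from PIL import Image
from transformers import MobileViTImageProcessor
image_processor = MobileViTImageProcessor.from_pretrained("apple/mobilevit-small")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# pixel_values has shape (1, 3, 256, 256) with the default crop size
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values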
You can use the following code to convert a MobileViT checkpoint (be it image classification or semantic
segmentation) to a TensorFlow Lite model:
from transformers import TFMobileViTForImageClassification
import tensorflow as tf
model_ckpt = "apple/mobilevit-xx-small"
model = TFMobileViTForImageClassification.from_pretrained(model_ckpt)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()
tflite_filename = model_ckpt.split("/")[-1] + ".tflite"
with open(tflite_filename, "wb") as f:
    f.write(tflite_model)
The resulting model will be just about one MB in size, making it a good fit for mobile applications where resources and
network bandwidth can be constrained.
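To sanity-check the exported file, you can load it back with TensorFlow Lite's Python interpreter. The following is a rough sketch, not part of the original recipe; it reuses tf and tflite_filename from the snippet above and reads the expected input shape and dtype from the interpreter itself:
import numpy as np
interpreter = tf.lite.Interpreter(model_path=tflite_filename)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# feed an all-zeros tensor just to exercise the converted graph
dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()
tflite_logits = interpreter.get_tensor(output_details[0]["index"])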
This model was contributed by matthijs. The TensorFlow version of the model was contributed by sayakpaul. The original code and weights can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileViT.
Image Classification
MobileViTForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
Semantic segmentation
Semantic segmentation task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
MobileViTConfig
class transformers.MobileViTConfig
<
source
>
(
num_channels = 3
image_size = 256
patch_size = 2
hidden_sizes = [144, 192, 240]
neck_hidden_sizes = [16, 32, 64, 96, 128, 160, 640]
num_attention_heads = 4
mlp_ratio = 2.0
expand_ratio = 4.0
hidden_act = 'silu'
conv_kernel_size = 3
output_stride = 32
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.0
classifier_dropout_prob = 0.1
initializer_range = 0.02
layer_norm_eps = 1e-05
qkv_bias = True
aspp_out_channels = 256
atrous_rates = [6, 12, 18]
aspp_dropout_prob = 0.1
semantic_loss_ignore_index = 255
**kwargs
)
Parameters
num_channels (int, optional, defaults to 3) —
The number of input channels.
image_size (int, optional, defaults to 256) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 2) —
The size (resolution) of each patch.
hidden_sizes (List[int], optional, defaults to [144, 192, 240]) —
Dimensionality (hidden size) of the Transformer encoders at each stage.
neck_hidden_sizes (List[int], optional, defaults to [16, 32, 64, 96, 128, 160, 640]) —
The number of channels for the feature maps of the backbone.
num_attention_heads (int, optional, defaults to 4) —
Number of attention heads for each attention layer in the Transformer encoder.
mlp_ratio (float, optional, defaults to 2.0) —
The ratio of the number of channels in the output of the MLP to the number of channels in the input.
expand_ratio (float, optional, defaults to 4.0) —
Expansion factor for the MobileNetv2 layers.
hidden_act (str or function, optional, defaults to "silu") —
The non-linear activation function (function or string) in the Transformer encoder and convolution layers.
conv_kernel_size (int, optional, defaults to 3) —
The size of the convolutional kernel in the MobileViT layer.
output_stride (int, optional, defaults to 32) —
The ratio of the spatial resolution of the output to the resolution of the input image.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the Transformer encoder.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
classifier_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for attached classifiers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
aspp_out_channels (int, optional, defaults to 256) —
Number of output channels used in the ASPP layer for semantic segmentation.
atrous_rates (List[int], optional, defaults to [6, 12, 18]) —
Dilation (atrous) factors used in the ASPP layer for semantic segmentation.
aspp_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the ASPP layer for semantic segmentation.
semantic_loss_ignore_index (int, optional, defaults to 255) —
The index that is ignored by the loss function of the semantic segmentation model.
This is the configuration class to store the configuration of a MobileViTModel. It is used to instantiate a
MobileViT model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the MobileViT
apple/mobilevit-small architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import MobileViTConfig, MobileViTModel
# Initializing a mobilevit-small style configuration
configuration = MobileViTConfig()
# Initializing a model from the mobilevit-small style configuration
model = MobileViTModel(configuration)
# Accessing the model configuration
configuration = model.config
MobileViTFeatureExtractor
class transformers.MobileViTFeatureExtractor
<
source
>
(
*args
**kwargs
)
__call__
<
source
>
(
images
**kwargs
)
Preprocess an image or a batch of images.
post_process_semantic_segmentation
<
source
>
(
outputs
target_sizes: typing.List[typing.Tuple] = None
)
→
List[torch.Tensor]
Parameters
outputs (MobileViTForSemanticSegmentation) —
Raw outputs of the model.
target_sizes (List[Tuple], optional) —
A list of length batch_size, where each item is a Tuple[int, int] corresponding to the requested
final size (height, width) of each prediction. If left to None, predictions will not be resized.
Returns
List[torch.Tensor]
A list of length batch_size, where each item is a semantic segmentation map of shape (height, width)
corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each
torch.Tensor corresponds to a semantic class id.
Converts the output of MobileViTForSemanticSegmentation into semantic segmentation maps. Only supports
PyTorch.
MobileViTImageProcessor
class transformers.MobileViTImageProcessor
<
source
>
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_center_crop: bool = True
crop_size: typing.Dict[str, int] = None
do_flip_channel_order: bool = True
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the
do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 224}) —
Controls the size of the output image after resizing. Can be overridden by the size parameter in the
preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Defines the resampling filter to use if resizing the image. Can be overridden by the resample parameter
in the preprocess method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the
preprocess method.
do_center_crop (bool, optional, defaults to True) —
Whether to crop the input at the center. If the input size is smaller than crop_size along any edge, the
image is padded with 0’s and then center cropped. Can be overridden by the do_center_crop parameter in
the preprocess method.
crop_size (Dict[str, int], optional, defaults to {"height": 256, "width": 256}) —
Desired output size (size["height"], size["width"]) when applying center-cropping. Can be overridden by
the crop_size parameter in the preprocess method.
do_flip_channel_order (bool, optional, defaults to True) —
Whether to flip the color channels from RGB to BGR. Can be overridden by the do_flip_channel_order
parameter in the preprocess method.
Constructs a MobileViT image processor.
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: bool = None
size: typing.Dict[str, int] = None
resample: Resampling = None
do_rescale: bool = None
rescale_factor: float = None
do_center_crop: bool = None
crop_size: typing.Dict[str, int] = None
do_flip_channel_order: bool = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing.
resample (int, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only
has an effect if do_resize is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image by rescale factor.
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the center crop if do_center_crop is set to True.
do_flip_channel_order (bool, optional, defaults to self.do_flip_channel_order) —
Whether to flip the channel order of the image.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess an image or batch of images.
post_process_semantic_segmentation
<
source
>
(
outputs
target_sizes: typing.List[typing.Tuple] = None
)
→
List[torch.Tensor]
Parameters
outputs (MobileViTForSemanticSegmentation) —
Raw outputs of the model.
target_sizes (List[Tuple], optional) —
A list of length batch_size, where each item is a Tuple[int, int] corresponding to the requested
final size (height, width) of each prediction. If left to None, predictions will not be resized.
Returns
List[torch.Tensor]
A list of length batch_size, where each item is a semantic segmentation map of shape (height, width)
corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each
torch.Tensor corresponds to a semantic class id.
Converts the output of MobileViTForSemanticSegmentation into semantic segmentation maps. Only supports
PyTorch.
MobileViTModel
class transformers.MobileViTModel
<
source
>
(
config: MobileViTConfig
expand_output: bool = True
)
Parameters
config (MobileViTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MobileViT model outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
MobileViTImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileViTConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The MobileViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, MobileViTModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("apple/mobilevit-small")
model = MobileViTModel.from_pretrained("apple/mobilevit-small")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 640, 8, 8]
MobileViTForImageClassification
class transformers.MobileViTForImageClassification
<
source
>
(
config: MobileViTConfig
)
Parameters
config (MobileViTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileViT model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
MobileViTImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileViTConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
The MobileViTForImageClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, MobileViTForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("apple/mobilevit-small")
model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-small")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
MobileViTForSemanticSegmentation
class transformers.MobileViTForSemanticSegmentation
<
source
>
(
config: MobileViTConfig
)
Parameters
config (MobileViTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileViT model with a semantic segmentation head on top, e.g. for Pascal VOC.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
MobileViTImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, height, width), optional) —
Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SemanticSegmenterOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MobileViTConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel.
The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is
to avoid doing two interpolations and losing some quality when a user needs to resize the logits to the
original image size as post-processing. You should always check your logits shape and resize as needed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, patch_size, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MobileViTForSemanticSegmentation forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileViTForSemanticSegmentation
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("apple/deeplabv3-mobilevit-small")
model = MobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-small")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
# logits are of shape (batch_size, num_labels, height, width)
logits = outputs.logits
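To turn these logits into per-pixel class predictions at the original image resolution, you can follow up with the post_process_semantic_segmentation method documented above (image.size is (width, height) for a PIL image, hence the reversal):
segmentation_maps = image_processor.post_process_semantic_segmentation(
...     outputs, target_sizes=[image.size[::-1]]
... )
# a torch.Tensor of shape (height, width) holding a class id per pixel
predicted_segmentation = segmentation_maps[0]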
TFMobileViTModel
class transformers.TFMobileViTModel
<
source
>
(
*args
**kwargs
)
Parameters
config (MobileViTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MobileViT model outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with pixel_values only and nothing else: model(pixel_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
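For illustration only, assuming pixel_values is a tensor prepared by the image processor (as in the example further below), the formats above amount to the following equivalent calls:
outputs = model(pixel_values)                      # a single positional tensor
outputs = model({"pixel_values": pixel_values})    # a dictionary keyed by input name
outputs = model(pixel_values=pixel_values)         # keyword arguments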
call
<
source
>
(
pixel_values: tf.Tensor | None = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
MobileViTImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MobileViTConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input, you’re often better with
averaging or pooling the sequence of hidden-states for the whole input sequence.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFMobileViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, TFMobileViTModel
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("apple/mobilevit-small")
model = TFMobileViTModel.from_pretrained("apple/mobilevit-small")
inputs = image_processor(image, return_tensors="tf")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 640, 8, 8]
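Since the MobileViT encoder returns a feature map of shape (batch_size, channels, height, width) rather than a token sequence, one simple way to follow the pooling advice above is a global average pool over the spatial dimensions. A minimal sketch; the random tensor merely stands in for the outputs.last_hidden_state produced in the example above:
import tensorflow as tf
# Stand-in for outputs.last_hidden_state: (batch_size, channels, height, width)
last_hidden_states = tf.random.uniform((1, 640, 8, 8))
# Average over the spatial dimensions to get one fixed-size vector per image
pooled = tf.reduce_mean(last_hidden_states, axis=[2, 3])
print(pooled.shape)  # (1, 640)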
TFMobileViTForImageClassification
class transformers.TFMobileViTForImageClassification
( *args, **kwargs )
Parameters
config (MobileViTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileViT model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with pixel_values only and nothing else: model(pixel_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
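As an illustration of the two formats described above, the following sketch calls the model once with keyword arguments and once with a single dict in the first positional argument. The random pixel_values tensor and its 256x256 resolution are assumptions made only for the sake of the example:
import tensorflow as tf
from transformers import TFMobileViTForImageClassification

model = TFMobileViTForImageClassification.from_pretrained("apple/mobilevit-small")
# Hypothetical input: one RGB image of size 256x256 in channels-first layout
pixel_values = tf.random.uniform((1, 3, 256, 256))

outputs_kw = model(pixel_values=pixel_values)          # keyword arguments
outputs_dict = model({"pixel_values": pixel_values})   # single dict as the first positional argument
print(outputs_kw.logits.shape, outputs_dict.logits.shape)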
call
( pixel_values: tf.Tensor | None = None, output_hidden_states: Optional[bool] = None, labels: tf.Tensor | None = None, return_dict: Optional[bool] = None, training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFImageClassifierOutputWithNoAttention or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
MobileViTImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFImageClassifierOutputWithNoAttention or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFImageClassifierOutputWithNoAttention or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MobileViTConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for
the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called
feature maps) of the model at the output of each stage.
The TFMobileViTForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, TFMobileViTForImageClassification
import tensorflow as tf
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("apple/mobilevit-small")
model = TFMobileViTForImageClassification.from_pretrained("apple/mobilevit-small")
inputs = image_processor(image, return_tensors="tf")
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = int(tf.math.argmax(logits, axis=-1))
print(model.config.id2label[predicted_label])
tabby, tabby cat
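The labels argument described above can also be passed at call time to get a classification loss, which is the usual starting point for fine-tuning. A short sketch continuing with the same checkpoint; the label index 281 is just an arbitrary ImageNet class used for illustration:
import tensorflow as tf
from transformers import AutoImageProcessor, TFMobileViTForImageClassification
from datasets import load_dataset

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("apple/mobilevit-small")
model = TFMobileViTForImageClassification.from_pretrained("apple/mobilevit-small")
inputs = image_processor(image, return_tensors="tf")
labels = tf.constant([281])  # hypothetical target class
outputs = model(**inputs, labels=labels)
print(float(outputs.loss))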
TFMobileViTForSemanticSegmentation
class transformers.TFMobileViTForSemanticSegmentation
( *args, **kwargs )
Parameters
config (MobileViTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MobileViT model with a semantic segmentation head on top, e.g. for Pascal VOC.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with pixel_values only and nothing else: model(pixel_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
( pixel_values: tf.Tensor | None = None, labels: tf.Tensor | None = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: bool = False ) → transformers.modeling_tf_outputs.TFSemanticSegmenterOutputWithNoAttention or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
MobileViTImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
labels (tf.Tensor of shape (batch_size, height, width), optional) —
Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSemanticSegmenterOutputWithNoAttention or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSemanticSegmenterOutputWithNoAttention or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MobileViTConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel.
The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is
to avoid doing two interpolations and losing some quality when a user needs to resize the logits to the
original image size as post-processing. You should always check your logits shape and resize as needed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for
the output of each layer) of shape (batch_size, patch_size, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The TFMobileViTForSemanticSegmentation forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, TFMobileViTForSemanticSegmentation
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("apple/deeplabv3-mobilevit-small")
model = TFMobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-small")
inputs = image_processor(images=image, return_tensors="tf")
outputs = model(**inputs)
# logits are of shape (batch_size, num_labels, height, width)
logits = outputs.logits
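Because the logits are smaller than the input image (see the note on interpolation above), a common post-processing step is to upsample them to the original resolution before taking the per-pixel argmax. A minimal sketch continuing from the example above, so logits and image come from there; tf.image.resize expects channels-last tensors, hence the transpose:
import tensorflow as tf

# logits: (batch_size, num_labels, height, width) -> channels-last for tf.image.resize
logits_hwc = tf.transpose(logits, perm=[0, 2, 3, 1])
# image.size is (width, height) for PIL images, so reverse it to (height, width)
upsampled = tf.image.resize(logits_hwc, size=image.size[::-1], method="bilinear")
segmentation_map = tf.math.argmax(upsampled, axis=-1)  # (batch_size, height, width)
print(segmentation_map.shape)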
OpenAI GPT
Overview
The OpenAI GPT model was proposed in Improving Language Understanding by Generative Pre-Training
by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. It’s a causal (unidirectional) transformer
pre-trained using language modeling on a large corpus with long range dependencies, the Toronto Book Corpus.
The abstract from the paper is the following:
Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering,
semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant,
labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to
perform adequately. We demonstrate that large gains on these tasks can be realized by generative pretraining of a
language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In
contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve
effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our
approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms
discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon
the state of the art in 9 out of the 12 tasks studied.
Tips:
GPT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than
the left.
GPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next
token in a sequence. Leveraging this feature allows GPT to generate syntactically coherent text, as can be
observed in the run_generation.py example script.
Write With Transformer is a webapp created and hosted by Hugging Face
showcasing the generative capabilities of several models. GPT is one of them.
This model was contributed by thomwolf. The original code can be found here.
Note:
If you want to reproduce the original tokenization process of the OpenAI GPT paper, you will need to install ftfy
and SpaCy:
pip install spacy ftfy==4.4.3
python -m spacy download en
If you don’t install ftfy and SpaCy, the OpenAIGPTTokenizer will default to tokenizing
using BERT’s BasicTokenizer followed by Byte-Pair Encoding (which should be fine for most usage, don’t worry).
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OpenAI GPT. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Text Classification
A blog post on outperforming OpenAI GPT-3 with SetFit for text-classification.
See also: Text classification task guide
Text Generation
A blog on how to Finetune a non-English GPT-2 Model with Hugging Face.
A blog on How to generate text: using different decoding methods for language generation with Transformers with GPT-2.
A blog on Training CodeParrot 🦜 from Scratch, a large GPT-2 model.
A blog on Faster Text Generation with TensorFlow and XLA with GPT-2.
A blog on How to train a Language Model with Megatron-LM with a GPT-2 model.
A notebook on how to finetune GPT2 to generate lyrics in the style of your favorite artist. 🌎
A notebook on how to finetune GPT2 to generate tweets in the style of your favorite Twitter user. 🌎
Causal language modeling chapter of the 🤗 Hugging Face Course.
OpenAIGPTLMHeadModel is supported by this causal language modeling example script, text generation example script and notebook.
TFOpenAIGPTLMHeadModel is supported by this causal language modeling example script and notebook.
See also: Causal language modeling task guide
Token Classification
A course material on Byte-Pair Encoding tokenization.
OpenAIGPTConfig
class transformers.OpenAIGPTConfig
( vocab_size = 40478, n_positions = 512, n_embd = 768, n_layer = 12, n_head = 12, afn = 'gelu', resid_pdrop = 0.1, embd_pdrop = 0.1, attn_pdrop = 0.1, layer_norm_epsilon = 1e-05, initializer_range = 0.02, summary_type = 'cls_index', summary_use_proj = True, summary_activation = None, summary_proj_to_labels = True, summary_first_dropout = 0.1, **kwargs )
Parameters
vocab_size (int, optional, defaults to 40478) —
Vocabulary size of the GPT model. Defines the number of different tokens that can be represented by the
input_ids passed when calling OpenAIGPTModel or TFOpenAIGPTModel.
n_positions (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
n_embd (int, optional, defaults to 768) —
Dimensionality of the embeddings and hidden states.
n_layer (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
n_head (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
afn (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
resid_pdrop (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
embd_pdrop (int, optional, defaults to 0.1) —
The dropout ratio for the embeddings.
attn_pdrop (float, optional, defaults to 0.1) —
The dropout ratio for the attention.
layer_norm_epsilon (float, optional, defaults to 1e-5) —
The epsilon to use in the layer normalization layers
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
summary_type (str, optional, defaults to "cls_index") —
Argument used when doing sequence summary, used in the models OpenAIGPTDoubleHeadsModel and
TFOpenAIGPTDoubleHeadsModel.
Has to be one of the following options:
"last": Take the last token hidden state (like XLNet).
"first": Take the first token hidden state (like BERT).
"mean": Take the mean of all tokens hidden states.
"cls_index": Supply a Tensor of classification token position (like GPT/GPT-2).
"attn": Not implemented now, use multi-head attention.
summary_use_proj (bool, optional, defaults to True) —
Argument used when doing sequence summary, used in the models OpenAIGPTDoubleHeadsModel and
TFOpenAIGPTDoubleHeadsModel.
Whether or not to add a projection after the vector extraction.
summary_activation (str, optional) —
Argument used when doing sequence summary, used in the models OpenAIGPTDoubleHeadsModel and
TFOpenAIGPTDoubleHeadsModel.
Pass "tanh" for a tanh activation to the output, any other value will result in no activation.
summary_proj_to_labels (bool, optional, defaults to True) —
Argument used when doing sequence summary, used in the models OpenAIGPTDoubleHeadsModel and
TFOpenAIGPTDoubleHeadsModel.
Whether the projection outputs should have config.num_labels or config.hidden_size classes.
summary_first_dropout (float, optional, defaults to 0.1) —
Argument used when doing sequence summary, used in the models OpenAIGPTDoubleHeadsModel and
TFOpenAIGPTDoubleHeadsModel.
The dropout ratio to be used after the projection and activation.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of an OpenAIGPTModel or a TFOpenAIGPTModel. It is
used to instantiate a GPT model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the GPT
openai-gpt architecture from OpenAI.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import OpenAIGPTConfig, OpenAIGPTModel
# Initializing a GPT configuration
configuration = OpenAIGPTConfig()
# Initializing a model (with random weights) from the configuration
model = OpenAIGPTModel(configuration)
# Accessing the model configuration
configuration = model.config
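Beyond the defaults, the same configuration class can be used to define a smaller (or otherwise modified) architecture before training from scratch. The values below are purely illustrative, not recommended settings:
from transformers import OpenAIGPTConfig, OpenAIGPTModel

# Hypothetical smaller GPT: 6 layers, 8 heads, 512-dim embeddings (512 is divisible by 8)
small_config = OpenAIGPTConfig(n_layer=6, n_head=8, n_embd=512)
model = OpenAIGPTModel(small_config)
print(model.config.n_layer, model.config.n_head, model.config.n_embd)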
OpenAIGPTTokenizer
class transformers.OpenAIGPTTokenizer
( vocab_file, merges_file, unk_token = '<unk>', **kwargs )
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
Construct a GPT Tokenizer. Based on Byte-Pair-Encoding with the following peculiarities:
lowercases all inputs,
uses the SpaCy tokenizer and ftfy for pre-BPE tokenization if they are installed, falling back to BERT’s
BasicTokenizer if not.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
save_vocabulary
( save_directory: str, filename_prefix: typing.Optional[str] = None )
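A short sketch of typical tokenizer usage, including save_vocabulary, which writes the vocabulary and merges files to a directory of your choice; the directory name below is hypothetical:
import os
from transformers import OpenAIGPTTokenizer

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
encoding = tokenizer("Hello, my dog is cute")
print(encoding["input_ids"])
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))

# Write the vocabulary and merges files into a local directory
os.makedirs("./openai_gpt_vocab", exist_ok=True)
vocab_files = tokenizer.save_vocabulary("./openai_gpt_vocab")
print(vocab_files)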
OpenAIGPTTokenizerFast
class transformers.OpenAIGPTTokenizerFast
( vocab_file = None, merges_file = None, tokenizer_file = None, unk_token = '<unk>', **kwargs )
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
Construct a “fast” GPT Tokenizer (backed by HuggingFace’s tokenizers library). Based on Byte-Pair-Encoding with
the following peculiarities:
lowercases all inputs,
uses BERT’s BasicTokenizer for pre-BPE tokenization
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
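Usage mirrors the slow tokenizer; a brief sketch:
from transformers import OpenAIGPTTokenizerFast

fast_tokenizer = OpenAIGPTTokenizerFast.from_pretrained("openai-gpt")
encoding = fast_tokenizer("Hello, my dog is cute")
print(encoding.input_ids)
print(fast_tokenizer.convert_ids_to_tokens(encoding.input_ids))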
OpenAI specific outputs
class transformers.models.openai.modeling_openai.OpenAIGPTDoubleHeadsModelOutput
( loss: typing.Optional[torch.FloatTensor] = None, mc_loss: typing.Optional[torch.FloatTensor] = None, logits: FloatTensor = None, mc_logits: FloatTensor = None, hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None, attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Language modeling loss.
mc_loss (torch.FloatTensor of shape (1,), optional, returned when mc_labels is provided) —
Multiple choice classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
mc_logits (torch.FloatTensor of shape (batch_size, num_choices)) —
Prediction scores of the multiple choice classification head (scores for each choice before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Base class for outputs of the OpenAI GPT double-heads model (language modeling head and multiple-choice classification head).
class transformers.models.openai.modeling_tf_openai.TFOpenAIGPTDoubleHeadsModelOutput
( logits: tf.Tensor = None, mc_logits: tf.Tensor = None, hidden_states: Tuple[tf.Tensor] | None = None, attentions: Tuple[tf.Tensor] | None = None )
Parameters
logits (tf.Tensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
mc_logits (tf.Tensor of shape (batch_size, num_choices)) —
Prediction scores of the multiple choice classification head (scores for each choice before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Base class for outputs of the TensorFlow OpenAI GPT double-heads model (language modeling head and multiple-choice classification head).
OpenAIGPTModel
class transformers.OpenAIGPTModel
( config )
Parameters
config (OpenAIGPTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare OpenAI GPT transformer model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (OpenAIGPTConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The OpenAIGPTModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, OpenAIGPTModel
import torch
tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTModel.from_pretrained("openai-gpt")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
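To inspect the per-layer activations described under hidden_states above, pass output_hidden_states=True at call time. A minimal sketch; the tuple length of 13 assumes the default 12-layer openai-gpt checkpoint:
import torch
from transformers import AutoTokenizer, OpenAIGPTModel

tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTModel.from_pretrained("openai-gpt")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)
print(len(outputs.hidden_states))       # embeddings + one entry per layer -> 13
print(outputs.hidden_states[-1].shape)  # same shape as outputs.last_hidden_state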
OpenAIGPTLMHeadModel
class transformers.OpenAIGPTLMHeadModel
( config )
Parameters
config (OpenAIGPTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
OpenAI GPT Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (OpenAIGPTConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The OpenAIGPTLMHeadModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, OpenAIGPTLMHeadModel
tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTLMHeadModel.from_pretrained("openai-gpt")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits
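Since the language modeling head makes this a causal LM, the model can also be used with generate() for open-ended text generation. A minimal sketch; the decoding settings are illustrative, not tuned:
from transformers import AutoTokenizer, OpenAIGPTLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTLMHeadModel.from_pretrained("openai-gpt")
inputs = tokenizer("Hello, my dog is", return_tensors="pt")
# Greedy decoding of up to 20 new tokens
generated_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(generated_ids[0]))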
OpenAIGPTDoubleHeadsModel
class transformers.OpenAIGPTDoubleHeadsModel
( config )
Parameters
config (OpenAIGPTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
OpenAI GPT Model transformer with a language modeling and a multiple-choice classification head on top, e.g. for
RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the
input embeddings; the classification head takes as input the hidden state at a specified classification token index
in the input sequence.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, mc_token_ids: typing.Optional[torch.LongTensor] = None, labels: typing.Optional[torch.LongTensor] = None, mc_labels: typing.Optional[torch.LongTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.models.openai.modeling_openai.OpenAIGPTDoubleHeadsModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
mc_token_ids (torch.LongTensor of shape (batch_size, num_choices), optional, defaults to the index of the last token of the input) —
Index of the classification token in each input sequence. Selected in the range [0, input_ids.size(-1) - 1].
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are
ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
mc_labels (torch.LongTensor of shape (batch_size), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1]
where num_choices is the size of the second dimension of the input tensors (see input_ids above).
Returns
transformers.models.openai.modeling_openai.OpenAIGPTDoubleHeadsModelOutput or tuple(torch.FloatTensor)
A transformers.models.openai.modeling_openai.OpenAIGPTDoubleHeadsModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (OpenAIGPTConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
mc_loss (torch.FloatTensor of shape (1,), optional, returned when mc_labels is provided) — Multiple choice classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
mc_logits (torch.FloatTensor of shape (batch_size, num_choices)) — Prediction scores of the multiple choice classification head (scores for each choice before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The OpenAIGPTDoubleHeadsModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, OpenAIGPTDoubleHeadsModel
import torch
tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTDoubleHeadsModel.from_pretrained("openai-gpt")
tokenizer.add_special_tokens(
... {"cls_token": "[CLS]"}
... ) # Add a [CLS] to the vocabulary (we should train it also!)
model.resize_token_embeddings(len(tokenizer))
choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
input_ids = torch.tensor([tokenizer.encode(s) for s in choices]).unsqueeze(0) # Batch size 1, 2 choices
mc_token_ids = torch.tensor([input_ids.size(-1) - 1, input_ids.size(-1) - 1]).unsqueeze(0) # Batch size 1
outputs = model(input_ids, mc_token_ids=mc_token_ids)
lm_logits = outputs.logits
mc_logits = outputs.mc_logits
OpenAIGPTForSequenceClassification
class transformers.OpenAIGPTForSequenceClassification
( config )
Parameters
config (OpenAIGPTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The original OpenAI GPT Model transformer with a sequence classification head on top (linear layer).
OpenAIGPTForSequenceClassification uses the last token in order to do the classification, as other causal
models (e.g. GPT-2) do. Since it does classification on the last token, it needs to know the position of the
last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding
token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since
it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (takes
the last value in each row of the batch).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.FloatTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, head_mask: typing.Optional[torch.FloatTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (OpenAIGPTConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The OpenAIGPTForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, OpenAIGPTForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTForSequenceClassification.from_pretrained("openai-gpt")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = OpenAIGPTForSequenceClassification.from_pretrained("openai-gpt", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, OpenAIGPTForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTForSequenceClassification.from_pretrained("openai-gpt", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = OpenAIGPTForSequenceClassification.from_pretrained(
... "openai-gpt", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
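As explained above, the model needs a pad_token_id to find the last non-padding token when classifying batched, padded inputs. openai-gpt has no padding token by default, so one common workaround (shown here as an assumption, not an official recipe) is to reuse the unk token:
import torch
from transformers import AutoTokenizer, OpenAIGPTForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTForSequenceClassification.from_pretrained("openai-gpt")

# Reuse the unk token as padding so batched inputs of different lengths can be padded
tokenizer.pad_token = tokenizer.unk_token
model.config.pad_token_id = tokenizer.pad_token_id

batch = tokenizer(["Hello, my dog is cute", "A shorter text"], padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
print(logits.shape)  # (2, num_labels)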
TFOpenAIGPTModel
class transformers.TFOpenAIGPTModel
( *args, **kwargs )
Parameters
config (OpenAIGPTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare OpenAI GPT transformer model outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (OpenAIGPTConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFOpenAIGPTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFOpenAIGPTModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = TFOpenAIGPTModel.from_pretrained("openai-gpt")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
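As a quick sanity check, assuming the default openai-gpt configuration (hidden size 768), the output shape can be inspected directly:
print(last_hidden_states.shape)  # (1, number_of_tokens, 768) for this single-sentence batch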
TFOpenAIGPTLMHeadModel
class transformers.TFOpenAIGPTLMHeadModel
(
*args
**kwargs
)
Parameters
config (OpenAIGPTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
OpenAI GPT Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated with the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFCausalLMOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of the position of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_tf_outputs.TFCausalLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFCausalLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (OpenAIGPTConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFOpenAIGPTLMHeadModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFOpenAIGPTLMHeadModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = TFOpenAIGPTLMHeadModel.from_pretrained("openai-gpt")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
logits = outputs.logits
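Since this is a causal language model, it should also be able to generate continuations through the generate() method of the TF generation utilities; a minimal sketch reusing the objects above (the sampling parameters are illustrative, not recommended settings):
generated = model.generate(inputs["input_ids"], max_new_tokens=20, do_sample=True, top_k=50)
print(tokenizer.decode(generated[0], skip_special_tokens=True))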
TFOpenAIGPTDoubleHeadsModel
class transformers.TFOpenAIGPTDoubleHeadsModel
(
*args
**kwargs
)
Parameters
config (OpenAIGPTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
OpenAI GPT Model transformer with a language modeling and a multiple-choice classification head on top, e.g. for
RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the
input embeddings; the classification head takes as input the hidden state at a specified classification token index in the
input sequence.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated with the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
mc_token_ids: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.models.openai.modeling_tf_openai.TFOpenAIGPTDoubleHeadsModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of the position of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
mc_token_ids (tf.Tensor or Numpy array of shape (batch_size, num_choices), optional, defaults to the index of the last token of the input) —
Index of the classification token in each input sequence. Selected in the range [0, input_ids.size(-1) - 1].
Returns
transformers.models.openai.modeling_tf_openai.TFOpenAIGPTDoubleHeadsModelOutput or tuple(tf.Tensor)
A transformers.models.openai.modeling_tf_openai.TFOpenAIGPTDoubleHeadsModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (OpenAIGPTConfig) and inputs.
logits (tf.Tensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
mc_logits (tf.Tensor of shape (batch_size, num_choices)) — Prediction scores of the multiple choice classification head (scores for each choice before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFOpenAIGPTDoubleHeadsModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
import tensorflow as tf
from transformers import AutoTokenizer, TFOpenAIGPTDoubleHeadsModel
tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = TFOpenAIGPTDoubleHeadsModel.from_pretrained("openai-gpt")
# Add a [CLS] to the vocabulary (we should train it also!)
tokenizer.add_special_tokens({"cls_token": "[CLS]"})
model.resize_token_embeddings(len(tokenizer)) # Update the model embeddings with the new vocabulary size
print(tokenizer.cls_token_id, len(tokenizer))  # The newly added token is the last token of the vocabulary
choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
encoding = tokenizer(choices, return_tensors="tf")
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
inputs["mc_token_ids"] = tf.constant(
... [inputs["input_ids"].shape[-1] - 1, inputs["input_ids"].shape[-1] - 1]
... )[
... None, :
... ] # Batch size 1
outputs = model(inputs)
lm_prediction_scores, mc_prediction_scores = outputs[:2]
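To turn the multiple-choice scores into a prediction, one possible follow-up (purely illustrative) is to take the argmax over the choice dimension:
predicted_choice = int(tf.math.argmax(mc_prediction_scores, axis=-1)[0])
print(choices[predicted_choice])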
TFOpenAIGPTForSequenceClassification
class transformers.TFOpenAIGPTForSequenceClassification
(
*args
**kwargs
)
Parameters
config (OpenAIGPTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The OpenAI GPT Model transformer with a sequence classification head on top (linear layer).
TFOpenAIGPTForSequenceClassification uses the last token in order to do the classification, as other causal
models (e.g. GPT-2) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If
no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when inputs_embeds are passed instead of input_ids, it does the same (takes the last value in
each row of the batch).
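Because the pretrained openai-gpt tokenizer does not define a padding token, one hedged way to enable batched classification with padding is sketched below (the [PAD] token, the resizing step and the example sentences are assumptions, not part of the pretrained checkpoint):
from transformers import AutoTokenizer, TFOpenAIGPTForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = TFOpenAIGPTForSequenceClassification.from_pretrained("openai-gpt")

# Add a padding token and make the model aware of it
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
model.resize_token_embeddings(len(tokenizer))
model.config.pad_token_id = tokenizer.pad_token_id

batch = tokenizer(["Hello, my dog is cute", "Hi"], padding=True, return_tensors="tf")
logits = model(**batch).logits  # classification uses the last non-padding token of each row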
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated with the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of the position of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (OpenAIGPTConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFOpenAIGPTForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFOpenAIGPTForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = TFOpenAIGPTForSequenceClassification.from_pretrained("openai-gpt")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFOpenAIGPTForSequenceClassification.from_pretrained("openai-gpt", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
Dilated Neighborhood Attention Transformer
Overview
DiNAT was proposed in Dilated Neighborhood Attention Transformer
by Ali Hassani and Humphrey Shi.
It extends NAT by adding a Dilated Neighborhood Attention pattern to capture global context,
and shows significant performance improvements over it.
The abstract from the paper is the following:
Transformers are quickly becoming one of the most heavily applied deep learning architectures across modalities,
domains, and tasks. In vision, on top of ongoing efforts into plain transformers, hierarchical transformers have
also gained significant attention, thanks to their performance and easy integration into existing frameworks.
These models typically employ localized attention mechanisms, such as the sliding-window Neighborhood Attention (NA)
or Swin Transformer’s Shifted Window Self Attention. While effective at reducing self attention’s quadratic complexity,
local attention weakens two of the most desirable properties of self attention: long range inter-dependency modeling,
and global receptive field. In this paper, we introduce Dilated Neighborhood Attention (DiNA), a natural, flexible and
efficient extension to NA that can capture more global context and expand receptive fields exponentially at no
additional cost. NA’s local attention and DiNA’s sparse global attention complement each other, and therefore we
introduce Dilated Neighborhood Attention Transformer (DiNAT), a new hierarchical vision transformer built upon both.
DiNAT variants enjoy significant improvements over strong baselines such as NAT, Swin, and ConvNeXt.
Our large model is faster and ahead of its Swin counterpart by 1.5% box AP in COCO object detection,
1.3% mask AP in COCO instance segmentation, and 1.1% mIoU in ADE20K semantic segmentation.
Paired with new frameworks, our large variant is the new state of the art panoptic segmentation model on COCO (58.2 PQ)
and ADE20K (48.5 PQ), and instance segmentation model on Cityscapes (44.5 AP) and ADE20K (35.4 AP) (no extra data).
It also matches the state of the art specialized semantic segmentation models on ADE20K (58.2 mIoU),
and ranks second on Cityscapes (84.5 mIoU) (no extra data).
Tips:
One can use the AutoImageProcessor API to prepare images for the model.
DiNAT can be used as a backbone. When output_hidden_states = True,
it will output both hidden_states and reshaped_hidden_states. The reshaped_hidden_states have a shape of (batch_size, num_channels, height, width) rather than (batch_size, height, width, num_channels).
Notes:
DiNAT depends on NATTEN’s implementation of Neighborhood Attention and Dilated Neighborhood Attention.
You can install it with pre-built wheels for Linux by referring to shi-labs.com/natten, or build on your system by running pip install natten.
Note that the latter will likely take time to compile. NATTEN does not support Windows devices yet.
Only a patch size of 4 is supported at the moment.
Figure: Neighborhood Attention with different dilation values (taken from the original paper).
This model was contributed by Ali Hassani.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DiNAT.
Image Classification
DinatForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
DinatConfig
class transformers.DinatConfig
(
patch_size = 4
num_channels = 3
embed_dim = 64
depths = [3, 4, 6, 5]
num_heads = [2, 4, 8, 16]
kernel_size = 7
dilations = [[1, 8, 1], [1, 4, 1, 4], [1, 2, 1, 2, 1, 2], [1, 1, 1, 1, 1]]
mlp_ratio = 3.0
qkv_bias = True
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
drop_path_rate = 0.1
hidden_act = 'gelu'
initializer_range = 0.02
layer_norm_eps = 1e-05
layer_scale_init_value = 0.0
out_features = None
out_indices = None
**kwargs
)
Parameters
patch_size (int, optional, defaults to 4) —
The size (resolution) of each patch. NOTE: Only patch size of 4 is supported at the moment.
num_channels (int, optional, defaults to 3) —
The number of input channels.
embed_dim (int, optional, defaults to 64) —
Dimensionality of patch embedding.
depths (List[int], optional, defaults to [3, 4, 6, 5]) —
Number of layers in each level of the encoder.
num_heads (List[int], optional, defaults to [2, 4, 8, 16]) —
Number of attention heads in each layer of the Transformer encoder.
kernel_size (int, optional, defaults to 7) —
Neighborhood Attention kernel size.
dilations (List[List[int]], optional, defaults to [[1, 8, 1], [1, 4, 1, 4], [1, 2, 1, 2, 1, 2], [1, 1, 1, 1, 1]]) —
Dilation value of each NA layer in the Transformer encoder.
mlp_ratio (float, optional, defaults to 3.0) —
Ratio of MLP hidden dimensionality to embedding dimensionality.
qkv_bias (bool, optional, defaults to True) —
Whether or not a learnable bias should be added to the queries, keys and values.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings and encoder.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
drop_path_rate (float, optional, defaults to 0.1) —
Stochastic depth rate.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder. If string, "gelu", "relu",
"selu" and "gelu_new" are supported.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-05) —
The epsilon used by the layer normalization layers.
layer_scale_init_value (float, optional, defaults to 0.0) —
The initial value for the layer scale. Disabled if <=0.
out_features (List[str], optional) —
If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc.
(depending on how many stages the model has). If unset and out_indices is set, will default to the
corresponding stages. If unset and out_indices is unset, will default to the last stage.
out_indices (List[int], optional) —
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
many stages the model has). If unset and out_features is set, will default to the corresponding stages.
If unset and out_features is unset, will default to the last stage.
This is the configuration class to store the configuration of a DinatModel. It is used to instantiate a Dinat
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Dinat
shi-labs/dinat-mini-in1k-224 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import DinatConfig, DinatModel
# Initializing a Dinat shi-labs/dinat-mini-in1k-224 style configuration
configuration = DinatConfig()
# Initializing a model (with random weights) from the shi-labs/dinat-mini-in1k-224 style configuration
model = DinatModel(configuration)
# Accessing the model configuration
configuration = model.config
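When DiNAT is used as a backbone inside another framework, out_features (or out_indices) selects which stages are returned; a minimal sketch, assuming you want the last three stages:
from transformers import DinatConfig

backbone_config = DinatConfig(out_features=["stage2", "stage3", "stage4"])
print(backbone_config.out_features)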
DinatModel
class transformers.DinatModel
(
config
add_pooling_layer = True
)
Parameters
config (DinatConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Dinat Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.dinat.modeling_dinat.DinatModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.dinat.modeling_dinat.DinatModelOutput or tuple(torch.FloatTensor)
A transformers.models.dinat.modeling_dinat.DinatModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DinatConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size), optional, returned when add_pooling_layer=True is passed) — Average pooling of the last layer hidden-state.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The DinatModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, DinatModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("shi-labs/dinat-mini-in1k-224")
model = DinatModel.from_pretrained("shi-labs/dinat-mini-in1k-224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 7, 7, 512]
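As noted in the tips above, when output_hidden_states=True the model also returns reshaped_hidden_states in channels-first format; a small sketch reusing the objects from the example (the comments describe the expected layout):
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)
print(len(outputs.reshaped_hidden_states))        # one entry for the embeddings plus one per stage
print(outputs.reshaped_hidden_states[-1].shape)   # (batch_size, num_channels, height, width)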
DinatForImageClassification
class transformers.DinatForImageClassification
(
config
)
Parameters
config (DinatConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Dinat Model transformer with an image classification head on top (a linear layer on top of the final hidden state
of the [CLS] token) e.g. for ImageNet.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.dinat.modeling_dinat.DinatImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.models.dinat.modeling_dinat.DinatImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.models.dinat.modeling_dinat.DinatImageClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DinatConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The DinatForImageClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, DinatForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("shi-labs/dinat-mini-in1k-224")
model = DinatForImageClassification.from_pretrained("shi-labs/dinat-mini-in1k-224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
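For fine-tuning, passing integer class labels returns a cross-entropy loss; a minimal sketch reusing the objects from the example above (the label value is purely illustrative):
labels = torch.tensor([predicted_label])  # in practice, use the ground-truth label from your dataset
outputs = model(**inputs, labels=labels)
print(outputs.loss)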
OneFormer
Overview
The OneFormer model was proposed in OneFormer: One Transformer to Rule Universal Image Segmentation by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. OneFormer is a universal image segmentation framework that can be trained on a single panoptic dataset to perform semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference.
The abstract from the paper is the following:
Universal Image Segmentation is not a new concept. Past attempts to unify image segmentation in the last decades include scene parsing, panoptic segmentation, and, more recently, new panoptic architectures. However, such panoptic architectures do not truly unify image segmentation because they need to be trained individually on the semantic, instance, or panoptic segmentation to achieve the best performance. Ideally, a truly universal framework should be trained only once and achieve SOTA performance across all three image segmentation tasks. To that end, we propose OneFormer, a universal image segmentation framework that unifies segmentation with a multi-task train-once design. We first propose a task-conditioned joint training strategy that enables training on ground truths of each domain (semantic, instance, and panoptic segmentation) within a single multi-task training process. Secondly, we introduce a task token to condition our model on the task at hand, making our model task-dynamic to support multi-task training and inference. Thirdly, we propose using a query-text contrastive loss during training to establish better inter-task and inter-class distinctions. Notably, our single OneFormer model outperforms specialized Mask2Former models across all three segmentation tasks on ADE20k, CityScapes, and COCO, despite the latter being trained on each of the three tasks individually with three times the resources. With new ConvNeXt and DiNAT backbones, we observe even more performance improvement. We believe OneFormer is a significant step towards making image segmentation more universal and accessible.
Tips:
OneFormer requires two inputs during inference: image and task token.
During training, OneFormer only uses panoptic annotations.
If you want to train the model in a distributed environment across multiple nodes, you should update the
get_num_masks function inside the OneFormerLoss class of modeling_oneformer.py. When training on multiple nodes, it should be
set to the average number of target masks across all nodes, as can be seen in the original implementation here.
One can use OneFormerProcessor to prepare input images and task inputs for the model, as well as optional targets. OneFormerProcessor wraps OneFormerImageProcessor and CLIPTokenizer into a single instance to both prepare the images and encode the task inputs.
To get the final segmentation, depending on the task, you can call post_process_semantic_segmentation(), post_process_instance_segmentation() or post_process_panoptic_segmentation(). All three tasks can be solved using OneFormerForUniversalSegmentation output; panoptic segmentation accepts an optional label_ids_to_fuse argument to fuse instances of the target object(s) (e.g. sky) together. A sketch of this workflow follows below.
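A hedged end-to-end sketch of semantic-segmentation inference (the shi-labs/oneformer_ade20k_swin_tiny checkpoint and the sample image URL are assumptions; any OneFormer checkpoint should work the same way):
from PIL import Image
import requests
import torch
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")

# The task input conditions the model; here we ask for semantic segmentation
inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Post-process to a (height, width) map of class ids
semantic_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]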
Figure: the OneFormer architecture (taken from the original paper).
This model was contributed by Jitesh Jain. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OneFormer.
Demo notebooks regarding inference + fine-tuning on custom data can be found here.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
OneFormer specific outputs
class transformers.models.oneformer.modeling_oneformer.OneFormerModelOutput
(
encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
pixel_decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
transformer_decoder_hidden_states: typing.Optional[torch.FloatTensor] = None
transformer_decoder_object_queries: FloatTensor = None
transformer_decoder_contrastive_queries: typing.Optional[torch.FloatTensor] = None
transformer_decoder_mask_predictions: FloatTensor = None
transformer_decoder_class_predictions: FloatTensor = None
transformer_decoder_auxiliary_predictions: typing.Union[typing.Tuple[typing.Dict[str, torch.FloatTensor]], NoneType] = None
text_queries: typing.Optional[torch.FloatTensor] = None
task_token: FloatTensor = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the encoder
model at the output of each stage.
pixel_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the pixel
decoder model at the output of each stage.
transformer_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the
transformer decoder at the output of each stage.
transformer_decoder_object_queries (torch.FloatTensor of shape (batch_size, num_queries, hidden_dim)) —
Output object queries from the last layer in the transformer decoder.
transformer_decoder_contrastive_queries (torch.FloatTensor of shape (batch_size, num_queries, hidden_dim)) —
Contrastive queries from the transformer decoder.
transformer_decoder_mask_predictions (torch.FloatTensor of shape (batch_size, num_queries, height, width)) —
Mask Predictions from the last layer in the transformer decoder.
transformer_decoder_class_predictions (torch.FloatTensor of shape (batch_size, num_queries, num_classes+1)) —
Class Predictions from the last layer in the transformer decoder.
transformer_decoder_auxiliary_predictions (Tuple of Dict of str, torch.FloatTensor, optional) —
Tuple of class and mask predictions from each layer of the transformer decoder.
text_queries (torch.FloatTensor, optional of shape (batch_size, num_queries, hidden_dim)) —
Text queries derived from the input text list used for calculating contrastive loss during training.
task_token (torch.FloatTensor of shape (batch_size, hidden_dim)) —
1D task token to condition the queries.
attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tuple(torch.FloatTensor) (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Self and Cross Attentions weights from transformer decoder.
Class for outputs of OneFormerModel. This class returns all the needed hidden states to compute the logits.
class transformers.models.oneformer.modeling_oneformer.OneFormerForUniversalSegmentationOutput
(
loss: typing.Optional[torch.FloatTensor] = None
class_queries_logits: FloatTensor = None
masks_queries_logits: FloatTensor = None
auxiliary_predictions: typing.List[typing.Dict[str, torch.FloatTensor]] = None
encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
pixel_decoder_hidden_states: typing.Optional[typing.List[torch.FloatTensor]] = None
transformer_decoder_hidden_states: typing.Optional[torch.FloatTensor] = None
transformer_decoder_object_queries: FloatTensor = None
transformer_decoder_contrastive_queries: typing.Optional[torch.FloatTensor] = None
transformer_decoder_mask_predictions: FloatTensor = None
transformer_decoder_class_predictions: FloatTensor = None
transformer_decoder_auxiliary_predictions: typing.Union[typing.List[typing.Dict[str, torch.FloatTensor]], NoneType] = None
text_queries: typing.Optional[torch.FloatTensor] = None
task_token: FloatTensor = None
attentions: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
)
Parameters
loss (torch.Tensor, optional) —
The computed loss, returned when labels are present.
class_queries_logits (torch.FloatTensor) —
A tensor of shape (batch_size, num_queries, num_labels + 1) representing the proposed classes for each
query. Note the + 1 is needed because we incorporate the null class.
masks_queries_logits (torch.FloatTensor) —
A tensor of shape (batch_size, num_queries, height, width) representing the proposed masks for each
query.
auxiliary_predictions (List of Dict of str, torch.FloatTensor, optional) —
List of class and mask predictions from each layer of the transformer decoder.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the encoder
model at the output of each stage.
pixel_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the pixel
decoder model at the output of each stage.
transformer_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the
transformer decoder at the output of each stage.
transformer_decoder_object_queries (torch.FloatTensor of shape (batch_size, num_queries, hidden_dim)) —
Output object queries from the last layer in the transformer decoder.
transformer_decoder_contrastive_queries (torch.FloatTensor of shape (batch_size, num_queries, hidden_dim)) —
Contrastive queries from the transformer decoder.
transformer_decoder_mask_predictions (torch.FloatTensor of shape (batch_size, num_queries, height, width)) —
Mask Predictions from the last layer in the transformer decoder.
transformer_decoder_class_predictions (torch.FloatTensor of shape (batch_size, num_queries, num_classes+1)) —
Class Predictions from the last layer in the transformer decoder.
transformer_decoder_auxiliary_predictions (List of Dict of str, torch.FloatTensor, optional) —
List of class and mask predictions from each layer of the transformer decoder.
text_queries (torch.FloatTensor of shape (batch_size, num_queries, hidden_dim), optional) —
Text queries derived from the input text list used for calculating contrastive loss during training.
task_token (torch.FloatTensor of shape (batch_size, hidden_dim)) —
1D task token to condition the queries.
attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tuple(torch.FloatTensor) (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Self and Cross Attentions weights from transformer decoder.
Class for outputs of OneFormerForUniversalSegmentation.
This output can be directly passed to post_process_semantic_segmentation(),
post_process_instance_segmentation() or
post_process_panoptic_segmentation() depending on the task. Please see
OneFormerImageProcessor for details regarding usage.
OneFormerConfig
class transformers.OneFormerConfig
<
source
>
(
backbone_config: typing.Optional[typing.Dict] = None
ignore_value: int = 255
num_queries: int = 150
no_object_weight: float = 0.1
class_weight: float = 2.0
mask_weight: float = 5.0
dice_weight: float = 5.0
contrastive_weight: float = 0.5
contrastive_temperature: float = 0.07
train_num_points: int = 12544
oversample_ratio: float = 3.0
importance_sample_ratio: float = 0.75
init_std: float = 0.02
init_xavier_std: float = 1.0
layer_norm_eps: float = 1e-05
is_training: bool = False
use_auxiliary_loss: bool = True
output_auxiliary_logits: bool = True
strides: typing.Optional[list] = [4, 8, 16, 32]
task_seq_len: int = 77
text_encoder_width: int = 256
text_encoder_context_length: int = 77
text_encoder_num_layers: int = 6
text_encoder_vocab_size: int = 49408
text_encoder_proj_layers: int = 2
text_encoder_n_ctx: int = 16
conv_dim: int = 256
mask_dim: int = 256
hidden_dim: int = 256
encoder_feedforward_dim: int = 1024
norm: str = 'GN'
encoder_layers: int = 6
decoder_layers: int = 10
use_task_norm: bool = True
num_attention_heads: int = 8
dropout: float = 0.1
dim_feedforward: int = 2048
pre_norm: bool = False
enforce_input_proj: bool = False
query_dec_layers: int = 2
common_stride: int = 4
**kwargs
)
Parameters
backbone_config (PretrainedConfig, optional, defaults to SwinConfig) —
The configuration of the backbone model.
ignore_value (int, optional, defaults to 255) —
Values to be ignored in GT label while calculating loss.
num_queries (int, optional, defaults to 150) —
Number of object queries.
no_object_weight (float, optional, defaults to 0.1) —
Weight for no-object class predictions.
class_weight (float, optional, defaults to 2.0) —
Weight for Classification CE loss.
mask_weight (float, optional, defaults to 5.0) —
Weight for binary CE loss.
dice_weight (float, optional, defaults to 5.0) —
Weight for dice loss.
contrastive_weight (float, optional, defaults to 0.5) —
Weight for contrastive loss.
contrastive_temperature (float, optional, defaults to 0.07) —
Initial value for scaling the contrastive logits.
train_num_points (int, optional, defaults to 12544) —
Number of points to sample while calculating losses on mask predictions.
oversample_ratio (float, optional, defaults to 3.0) —
Ratio to decide how many points to oversample.
importance_sample_ratio (float, optional, defaults to 0.75) —
Ratio of points that are sampled via importance sampling.
init_std (float, optional, defaults to 0.02) —
Standard deviation for normal initialization.
init_xavier_std (float, optional, defaults to 1.0) —
Standard deviation for Xavier uniform initialization.
layer_norm_eps (float, optional, defaults to 1e-05) —
Epsilon for layer normalization.
is_training (bool, optional, defaults to False) —
Whether to run in training or inference mode.
use_auxiliary_loss (bool, optional, defaults to True) —
Whether to calculate loss using intermediate predictions from transformer decoder.
output_auxiliary_logits (bool, optional, defaults to True) —
Whether to return intermediate predictions from transformer decoder.
strides (list, optional, defaults to [4, 8, 16, 32]) —
List containing the strides for feature maps in the encoder.
task_seq_len (int, optional, defaults to 77) —
Sequence length for tokenizing text list input.
text_encoder_width (int, optional, defaults to 256) —
Hidden size for text encoder.
text_encoder_context_length (int, optional, defaults to 77) —
Input sequence length for text encoder.
text_encoder_num_layers (int, optional, defaults to 6) —
Number of layers for transformer in text encoder.
text_encoder_vocab_size (int, optional, defaults to 49408) —
Vocabulary size for tokenizer.
text_encoder_proj_layers (int, optional, defaults to 2) —
Number of layers in the MLP used to project text queries.
text_encoder_n_ctx (int, optional, defaults to 16) —
Number of learnable text context queries.
conv_dim (int, optional, defaults to 256) —
Feature map dimension to map outputs from the backbone.
mask_dim (int, optional, defaults to 256) —
Dimension for feature maps in pixel decoder.
hidden_dim (int, optional, defaults to 256) —
Dimension for hidden states in transformer decoder.
encoder_feedforward_dim (int, optional, defaults to 1024) —
Dimension for FFN layer in pixel decoder.
norm (str, optional, defaults to GN) —
Type of normalization.
encoder_layers (int, optional, defaults to 6) —
Number of layers in pixel decoder.
decoder_layers (int, optional, defaults to 10) —
Number of layers in transformer decoder.
use_task_norm (bool, optional, defaults to True) —
Whether to normalize the task token.
num_attention_heads (int, optional, defaults to 8) —
Number of attention heads in transformer layers in the pixel and transformer decoders.
dropout (float, optional, defaults to 0.1) —
Dropout probability for pixel and transformer decoders.
dim_feedforward (int, optional, defaults to 2048) —
Dimension for FFN layer in transformer decoder.
pre_norm (bool, optional, defaults to False) —
Whether to normalize hidden states before attention layers in transformer decoder.
enforce_input_proj (bool, optional, defaults to False) —
Whether to project hidden states in transformer decoder.
query_dec_layers (int, optional, defaults to 2) —
Number of layers in query transformer.
common_stride (int, optional, defaults to 4) —
Common stride used for features in pixel decoder.
This is the configuration class to store the configuration of a OneFormerModel. It is used to instantiate a
OneFormer model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the OneFormer
shi-labs/oneformer_ade20k_swin_tiny architecture
trained on ADE20k-150.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import OneFormerConfig, OneFormerModel
# Initializing a OneFormer shi-labs/oneformer_ade20k_swin_tiny configuration
configuration = OneFormerConfig()
# Initializing a model (with random weights) from the shi-labs/oneformer_ade20k_swin_tiny style configuration
model = OneFormerModel(configuration)
# Accessing the model configuration
configuration = model.config
to_dict
<
source
>
(
)
Serializes this instance to a Python dictionary. Overrides the default to_dict(). Returns:
Dict[str, any]: Dictionary of all the attributes that make up this configuration instance.
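As a quick, hedged illustration of to_dict(): the printed keys are just examples of the attributes documented above, and the dictionary can be passed back to OneFormerConfig.from_dict() to rebuild a configuration.
from transformers import OneFormerConfig
config = OneFormerConfig()
config_dict = config.to_dict()  # plain Python dict holding every configuration attribute
print(config_dict["num_queries"], config_dict["hidden_dim"])
restored = OneFormerConfig.from_dict(config_dict)  # round-trips back to a config object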
OneFormerImageProcessor
class transformers.OneFormerImageProcessor
<
source
>
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_rescale: bool = True
rescale_factor: float = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float]] = None
image_std: typing.Union[float, typing.List[float]] = None
ignore_index: typing.Optional[int] = None
do_reduce_labels: bool = False
repo_path: str = 'shi-labs/oneformer_demo'
class_info_file: str = None
num_text: typing.Optional[int] = None
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the input to a certain size.
size (int, optional, defaults to 800) —
Resize the input to the given size. Only has an effect if do_resize is set to True. If size is a
sequence like (width, height), the output size will be matched to it. If size is an int, the smaller edge of
the image will be matched to this number, i.e., if height > width, the image will be rescaled to (size * height / width, size).
max_size (int, optional, defaults to 1333) —
The largest size an image dimension can have (otherwise it’s capped). Only has an effect if do_resize is
set to True.
resample (int, optional, defaults to PIL.Image.Resampling.BILINEAR) —
An optional resampling filter. This can be one of PIL.Image.Resampling.NEAREST,
PIL.Image.Resampling.BOX, PIL.Image.Resampling.BILINEAR, PIL.Image.Resampling.HAMMING,
PIL.Image.Resampling.BICUBIC or PIL.Image.Resampling.LANCZOS. Only has an effect if do_resize is set
to True.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the input to a certain scale.
rescale_factor (float, optional, defaults to 1/255) —
Rescale the input by the given factor. Only has an effect if do_rescale is set to True.
do_normalize (bool, optional, defaults to True) —
Whether or not to normalize the input with mean and standard deviation.
image_mean (float or List[float], optional, defaults to [0.485, 0.456, 0.406]) —
The sequence of means for each channel, to be used when normalizing images. Defaults to the ImageNet mean.
image_std (float or List[float], optional, defaults to [0.229, 0.224, 0.225]) —
The sequence of standard deviations for each channel, to be used when normalizing images. Defaults to the
ImageNet std.
ignore_index (int, optional) —
Label to be assigned to background pixels in segmentation maps. If provided, segmentation map pixels
denoted with 0 (background) will be replaced with ignore_index.
do_reduce_labels (bool, optional, defaults to False) —
Whether or not to decrement all label values of segmentation maps by 1. Usually used for datasets where 0
is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k).
The background label will be replaced by ignore_index.
repo_path (str, optional, defaults to shi-labs/oneformer_demo) —
Dataset repository on huggingface hub containing the JSON file with class information for the dataset.
class_info_file (str) —
JSON file containing class information for the dataset. It is stored inside the repo_path dataset
repository.
num_text (int, optional) —
Number of text entries in the text input list.
Constructs a OneFormer image processor. The image processor can be used to prepare image(s), task input(s) and
optional text inputs and targets for the model.
This image processor inherits from BaseImageProcessor which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
task_inputs: typing.Optional[typing.List[str]] = None
segmentation_maps: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')], NoneType] = None
instance_id_to_semantic_id: typing.Union[typing.Dict[int, int], NoneType] = None
do_resize: typing.Optional[bool] = None
size: typing.Union[typing.Dict[str, int], NoneType] = None
resample: Resampling = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
ignore_index: typing.Optional[int] = None
do_reduce_labels: typing.Optional[bool] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
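The preprocess method has no example in this reference, so here is a minimal, hedged sketch. The checkpoint is the one used in the model examples further below, and calling the image processor instance simply forwards to preprocess:
import requests
from PIL import Image
from transformers import OneFormerImageProcessor
image_processor = OneFormerImageProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# task_inputs selects the task conditioning: "semantic", "instance" or "panoptic"
inputs = image_processor(images=image, task_inputs=["semantic"], return_tensors="pt")
print(inputs["pixel_values"].shape)  # (batch_size, num_channels, height, width)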
encode_inputs
<
source
>
(
pixel_values_list: typing.List[typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]]
task_inputs: typing.List[str]
segmentation_maps: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] = None
instance_id_to_semantic_id: typing.Union[typing.List[typing.Dict[int, int]], typing.Dict[int, int], NoneType] = None
ignore_index: typing.Optional[int] = None
reduce_labels: bool = False
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
)
→
BatchFeature
Parameters
pixel_values_list (List[ImageInput]) —
List of images (pixel values) to be padded. Each image should be a tensor of shape (channels, height, width).
task_inputs (List[str]) —
List of task values.
segmentation_maps (ImageInput, optional) —
The corresponding semantic segmentation maps with the pixel-wise annotations.
(bool, optional, defaults to True):
Whether or not to pad images up to the largest image in a batch and create a pixel mask.
If left to the default, will return a pixel mask that is:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
instance_id_to_semantic_id (List[Dict[int, int]] or Dict[int, int], optional) —
A mapping between object instance ids and class ids. If passed, segmentation_maps is treated as an
instance segmentation map where each pixel represents an instance id. Can be provided as a single
dictionary with a global/dataset-level mapping or as a list of dictionaries (one per image), to map
instance ids in each image separately.
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of NumPy arrays. If set to 'pt', return PyTorch torch.Tensor
objects.
Returns
BatchFeature
A BatchFeature with the following fields:
pixel_values — Pixel values to be fed to a model.
pixel_mask — Pixel mask to be fed to a model (when padding is used or if pixel_mask is in
self.model_input_names).
mask_labels — Optional list of mask labels of shape (labels, height, width) to be fed to a model
(when annotations are provided).
class_labels — Optional list of class labels of shape (labels) to be fed to a model (when
annotations are provided). They identify the labels of mask_labels, e.g. the label of
mask_labels[i][j] is class_labels[i][j].
text_inputs — Optional list of text string entries to be fed to a model (when annotations are
provided). They identify the binary masks present in the image.
Pad images up to the largest image in a batch and create a corresponding pixel_mask.
OneFormer addresses semantic segmentation with a mask classification paradigm, so input segmentation maps
will be converted to lists of binary masks and their respective labels. For example, assuming
segmentation_maps = [[2,6,7,9]], the output will contain mask_labels = [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]] (four binary masks) and class_labels = [2,6,7,9], the labels for
each mask.
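To make the conversion above concrete, here is a hedged sketch using a toy, channels-first random image and a two-region segmentation map; the checkpoint and the class ids are illustrative only (label reduction on the checkpoint may shift the returned ids):
import numpy as np
from transformers import OneFormerImageProcessor
image_processor = OneFormerImageProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
image = np.random.randint(0, 256, (3, 512, 512), dtype=np.uint8)
segmentation_map = np.full((512, 512), 2, dtype=np.uint8)
segmentation_map[256:] = 7  # two regions, class ids 2 and 7
inputs = image_processor.encode_inputs([image], ["semantic"], segmentation_maps=[segmentation_map], return_tensors="pt")
print(inputs["mask_labels"][0].shape)  # (num_labels, height, width): one binary mask per class present
print(inputs["class_labels"][0])       # the class id associated with each binary mask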
post_process_semantic_segmentation
<
source
>
(
outputs
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
)
→
List[torch.Tensor]
Parameters
outputs (OneFormerForUniversalSegmentationOutput) —
Raw outputs of the model.
target_sizes (List[Tuple[int, int]], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction. If left to None, predictions will not be resized.
Returns
List[torch.Tensor]
A list of length batch_size, where each item is a semantic segmentation map of shape (height, width)
corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each
torch.Tensor corresponds to a semantic class id.
Converts the output of OneFormerForUniversalSegmentation into semantic segmentation maps. Only supports
PyTorch.
post_process_instance_segmentation
<
source
>
(
outputs
task_type: str = 'instance'
is_demo: bool = True
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
return_coco_annotation: typing.Optional[bool] = False
)
→
List[Dict]
Parameters
outputs (OneFormerForUniversalSegmentationOutput) —
The outputs from OneFormerForUniversalSegmentation.
task_type (str, optional, defaults to "instance") —
The post processing depends on the task token input. If the task_type is “panoptic”, we need to
ignore the stuff predictions.
is_demo (bool, optional, defaults to True) —
Whether the model is in demo mode. If True, use threshold to predict final masks.
threshold (float, optional, defaults to 0.5) —
The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) —
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) —
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
target_sizes (List[Tuple], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction in the batch. If left to None, predictions will not be
resized.
return_coco_annotation (bool, optional, defaults to False) —
Whether to return predictions in COCO format.
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — a tensor of shape (height, width) where each pixel represents a segment_id, set
to None if no mask is found above threshold. If target_sizes is specified, segmentation is resized
to the corresponding target_sizes entry.
segments_info — A dictionary that contains additional information on each segment.
id — an integer representing the segment_id.
label_id — An integer representing the label / semantic class id corresponding to segment_id.
was_fused — a boolean, True if label_id was in label_ids_to_fuse, False otherwise.
Multiple instances of the same class / label were fused and assigned a single segment_id.
score — Prediction score of segment with segment_id.
Converts the output of OneFormerForUniversalSegmentation into image instance segmentation
predictions. Only supports PyTorch.
post_process_panoptic_segmentation
<
source
>
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
label_ids_to_fuse: typing.Optional[typing.Set[int]] = None
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
)
→
List[Dict]
Parameters
outputs (OneFormerForUniversalSegmentationOutput) —
The outputs from OneFormerForUniversalSegmentation.
threshold (float, optional, defaults to 0.5) —
The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) —
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) —
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
label_ids_to_fuse (Set[int], optional) —
The labels in this set will have all their instances fused together. For instance, we could say
there can only be one sky in an image, but several persons, so the label ID for sky would be in that
set, but not the one for person.
target_sizes (List[Tuple], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction in the batch. If left to None, predictions will not be
resized.
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — a tensor of shape (height, width) where each pixel represents a segment_id, set
to None if no mask is found above threshold. If target_sizes is specified, segmentation is resized
to the corresponding target_sizes entry.
segments_info — A dictionary that contains additional information on each segment.
id — an integer representing the segment_id.
label_id — An integer representing the label / semantic class id corresponding to segment_id.
was_fused — a boolean, True if label_id was in label_ids_to_fuse, False otherwise.
Multiple instances of the same class / label were fused and assigned a single segment_id.
score — Prediction score of segment with segment_id.
Converts the output of OneFormerForUniversalSegmentation into image panoptic segmentation
predictions. Only supports PyTorch.
OneFormerProcessor
class transformers.OneFormerProcessor
<
source
>
(
image_processor = None
tokenizer = None
max_seq_length: int = 77
task_seq_length: int = 77
**kwargs
)
Parameters
image_processor (OneFormerImageProcessor) —
The image processor is a required input.
tokenizer (CLIPTokenizer or CLIPTokenizerFast) —
The tokenizer is a required input.
max_seq_length (int, optional, defaults to 77) —
Sequence length for the input text list.
task_seq_length (int, optional, defaults to 77) —
Sequence length for the input task token.
Constructs a OneFormer processor which wraps OneFormerImageProcessor and
CLIPTokenizer/CLIPTokenizerFast into a single processor that inherits both the image processor and
tokenizer functionalities.
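A hedged sketch of how the two components are typically combined; from_pretrained on the checkpoint used in the examples below loads both parts, and the manual wiring is shown only for illustration:
from transformers import CLIPTokenizer, OneFormerImageProcessor, OneFormerProcessor
# load the image processor and tokenizer together
processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
# or assemble the processor from its parts
image_processor = OneFormerImageProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
tokenizer = CLIPTokenizer.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
processor = OneFormerProcessor(image_processor=image_processor, tokenizer=tokenizer)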
encode_inputs
<
source
>
(
images = None
task_inputs = None
segmentation_maps = None
**kwargs
)
This method forwards all its arguments to OneFormerImageProcessor.encode_inputs() and then tokenizes the
task_inputs. Please refer to the docstring of this method for more information.
post_process_instance_segmentation
<
source
>
(
*args
**kwargs
)
This method forwards all its arguments to OneFormerImageProcessor.post_process_instance_segmentation().
Please refer to the docstring of this method for more information.
post_process_panoptic_segmentation
<
source
>
(
*args
**kwargs
)
This method forwards all its arguments to OneFormerImageProcessor.post_process_panoptic_segmentation().
Please refer to the docstring of this method for more information.
post_process_semantic_segmentation
<
source
>
(
*args
**kwargs
)
This method forwards all its arguments to OneFormerImageProcessor.post_process_semantic_segmentation().
Please refer to the docstring of this method for more information.
OneFormerModel
class transformers.OneFormerModel
<
source
>
(
config: OneFormerConfig
)
Parameters
config (OneFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare OneFormer Model outputting raw hidden-states without any specific head on top.
This model is a PyTorch nn.Module sub-class. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
<
source
>
(
pixel_values: Tensor
task_inputs: Tensor
text_inputs: typing.Optional[torch.Tensor] = None
pixel_mask: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.oneformer.modeling_oneformer.OneFormerModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using OneFormerProcessor. See
OneFormerProcessor.__call__() for details.
task_inputs (torch.FloatTensor of shape (batch_size, sequence_length)) —
Task inputs. Task inputs can be obtained using AutoImageProcessor. See OneFormerProcessor.__call__()
for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of the transformer decoder's attention layers.
return_dict (bool, optional) —
Whether or not to return a ~OneFormerModelOutput instead of a plain tuple.
Returns
transformers.models.oneformer.modeling_oneformer.OneFormerModelOutput or tuple(torch.FloatTensor)
A transformers.models.oneformer.modeling_oneformer.OneFormerModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (OneFormerConfig) and inputs.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the encoder
model at the output of each stage.
pixel_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the pixel
decoder model at the output of each stage.
transformer_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the
transformer decoder at the output of each stage.
transformer_decoder_object_queries (torch.FloatTensor of shape (batch_size, num_queries, hidden_dim)) — Output object queries from the last layer in the transformer decoder.
transformer_decoder_contrastive_queries (torch.FloatTensor of shape (batch_size, num_queries, hidden_dim)) — Contrastive queries from the transformer decoder.
transformer_decoder_mask_predictions (torch.FloatTensor of shape (batch_size, num_queries, height, width)) — Mask Predictions from the last layer in the transformer decoder.
transformer_decoder_class_predictions (torch.FloatTensor of shape (batch_size, num_queries, num_classes+1)) — Class Predictions from the last layer in the transformer decoder.
transformer_decoder_auxiliary_predictions (Tuple of Dict of str, torch.FloatTensor, optional) — Tuple of class and mask predictions from each layer of the transformer decoder.
text_queries (torch.FloatTensor of shape (batch_size, num_queries, hidden_dim), optional) — Text queries derived from the input text list used for calculating contrastive loss during training.
task_token (torch.FloatTensor of shape (batch_size, hidden_dim)) — 1D task token to condition the queries.
attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tuple(torch.FloatTensor) (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Self and Cross Attentions weights from transformer decoder.
The OneFormerModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import torch
from PIL import Image
import requests
from transformers import OneFormerProcessor, OneFormerModel
# download a test image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# load processor for preprocessing the inputs
processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
model = OneFormerModel.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
inputs = processor(image, ["semantic"], return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
mask_predictions = outputs.transformer_decoder_mask_predictions
class_predictions = outputs.transformer_decoder_class_predictions
f"👉 Mask Predictions Shape: {list(mask_predictions.shape)}, Class Predictions Shape: {list(class_predictions.shape)}"
'👉 Mask Predictions Shape: [1, 150, 128, 171], Class Predictions Shape: [1, 150, 151]'
OneFormerForUniversalSegmentation
class transformers.OneFormerForUniversalSegmentation
<
source
>
(
config: OneFormerConfig
)
Parameters
config (OneFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
OneFormer Model for instance, semantic and panoptic image segmentation.
This model is a PyTorch nn.Module sub-class. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
<
source
>
(
pixel_values: Tensor
task_inputs: Tensor
text_inputs: typing.Optional[torch.Tensor] = None
mask_labels: typing.Optional[typing.List[torch.Tensor]] = None
class_labels: typing.Optional[typing.List[torch.Tensor]] = None
pixel_mask: typing.Optional[torch.Tensor] = None
output_auxiliary_logits: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.oneformer.modeling_oneformer.OneFormerForUniversalSegmentationOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using OneFormerProcessor. See
OneFormerProcessor.__call__() for details.
task_inputs (torch.FloatTensor of shape (batch_size, sequence_length)) —
Task inputs. Task inputs can be obtained using AutoImageProcessor. See OneFormerProcessor.__call__()
for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of the transformer decoder's attention layers.
return_dict (bool, optional) —
Whether or not to return a ~OneFormerModelOutput instead of a plain tuple.
text_inputs (List[torch.Tensor], optional) —
Tensor of shape (num_queries, sequence_length) to be fed to a model.
mask_labels (List[torch.Tensor], optional) —
List of mask labels of shape (num_labels, height, width) to be fed to a model
class_labels (List[torch.LongTensor], optional) —
List of target class labels of shape (num_labels) to be fed to a model. They identify the labels of
mask_labels, e.g. the label of mask_labels[i][j] is class_labels[i][j].
Returns
transformers.models.oneformer.modeling_oneformer.OneFormerForUniversalSegmentationOutput or tuple(torch.FloatTensor)
A transformers.models.oneformer.modeling_oneformer.OneFormerForUniversalSegmentationOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (OneFormerConfig) and inputs.
loss (torch.Tensor, optional) — The computed loss, returned when labels are present.
class_queries_logits (torch.FloatTensor) — A tensor of shape (batch_size, num_queries, num_labels + 1) representing the proposed classes for each
query. Note the + 1 is needed because we incorporate the null class.
masks_queries_logits (torch.FloatTensor) — A tensor of shape (batch_size, num_queries, height, width) representing the proposed masks for each
query.
auxiliary_predictions (List of Dict of str, torch.FloatTensor, optional) — List of class and mask predictions from each layer of the transformer decoder.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the encoder
model at the output of each stage.
pixel_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the pixel
decoder model at the output of each stage.
transformer_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the
transformer decoder at the output of each stage.
transformer_decoder_object_queries (torch.FloatTensor of shape (batch_size, num_queries, hidden_dim)) — Output object queries from the last layer in the transformer decoder.
transformer_decoder_contrastive_queries (torch.FloatTensor of shape (batch_size, num_queries, hidden_dim)) — Contrastive queries from the transformer decoder.
transformer_decoder_mask_predictions (torch.FloatTensor of shape (batch_size, num_queries, height, width)) — Mask Predictions from the last layer in the transformer decoder.
transformer_decoder_class_predictions (torch.FloatTensor of shape (batch_size, num_queries, num_classes+1)) — Class Predictions from the last layer in the transformer decoder.
transformer_decoder_auxiliary_predictions (List of Dict of str, torch.FloatTensor, optional) — List of class and mask predictions from each layer of the transformer decoder.
text_queries (torch.FloatTensor of shape (batch_size, num_queries, hidden_dim), optional) — Text queries derived from the input text list used for calculating contrastive loss during training.
task_token (torch.FloatTensor of shape (batch_size, hidden_dim)) — 1D task token to condition the queries.
attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tuple(torch.FloatTensor) (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Self and Cross Attentions weights from transformer decoder.
The OneFormerForUniversalSegmentation forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Universal segmentation example:
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation
from PIL import Image
import requests
import torch
# load OneFormer fine-tuned on ADE20k for universal segmentation
processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
url = (
... "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
... )
image = Image.open(requests.get(url, stream=True).raw)
# Semantic Segmentation
inputs = processor(image, ["semantic"], return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for semantic postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(
... outputs, target_sizes=[image.size[::-1]]
... )[0]
f"👉 Semantic Predictions Shape: {list(predicted_semantic_map.shape)}"
'👉 Semantic Predictions Shape: [512, 683]'
# Instance Segmentation
inputs = processor(image, ["instance"], return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for instance postprocessing
predicted_instance_map = processor.post_process_instance_segmentation(
... outputs, target_sizes=[image.size[::-1]]
... )[0]["segmentation"]
f"👉 Instance Predictions Shape: {list(predicted_instance_map.shape)}"
'👉 Instance Predictions Shape: [512, 683]'
# Panoptic Segmentation
inputs = processor(image, ["panoptic"], return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for panoptic postprocessing
predicted_panoptic_map = processor.post_process_panoptic_segmentation(
... outputs, target_sizes=[image.size[::-1]]
... )[0]["segmentation"]
f"👉 Panoptic Predictions Shape: {list(predicted_panoptic_map.shape)}"
'👉 Panoptic Predictions Shape: [512, 683]'
Wav2Vec2Phoneme
Overview
The Wav2Vec2Phoneme model was proposed in Simple and Effective Zero-shot Cross-lingual Phoneme Recognition (Xu et al.,
2021) by Qiantong Xu, Alexei Baevski, Michael Auli.
The abstract from the paper is the following:
Recent progress in self-training, self-supervised pretraining and unsupervised learning enabled well performing speech
recognition systems without any labeled data. However, in many cases there is labeled data available for related
languages which is not utilized by these methods. This paper extends previous work on zero-shot cross-lingual transfer
learning by fine-tuning a multilingually pretrained wav2vec 2.0 model to transcribe unseen languages. This is done by
mapping phonemes of the training languages to the target language using articulatory features. Experiments show that
this simple method significantly outperforms prior work which introduced task-specific architectures and used only part
of a monolingually pretrained model.
Tips:
Wav2Vec2Phoneme uses the exact same architecture as Wav2Vec2.
Wav2Vec2Phoneme is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
The Wav2Vec2Phoneme model was trained using connectionist temporal classification (CTC), so the model output has to be
decoded using Wav2Vec2PhonemeCTCTokenizer.
Wav2Vec2Phoneme can be fine-tuned on multiple languages at once and decode unseen languages in a single forward pass
into a sequence of phonemes.
By default the model outputs a sequence of phonemes. In order to transform the phonemes to a sequence of words one
should make use of a dictionary and language model.
Relevant checkpoints can be found under https://huggingface.co/models?other=phoneme-recognition.
This model was contributed by patrickvonplaten
The original code can be found here.
Wav2Vec2Phoneme’s architecture is based on the Wav2Vec2 model, so one can refer to Wav2Vec2’s documentation page except for the tokenizer.
Wav2Vec2PhonemeCTCTokenizer
class transformers.Wav2Vec2PhonemeCTCTokenizer
<
source
>
(
vocab_file
bos_token = '<s>'
eos_token = '</s>'
unk_token = '<unk>'
pad_token = '<pad>'
phone_delimiter_token = ' '
word_delimiter_token = None
do_phonemize = True
phonemizer_lang = 'en-us'
phonemizer_backend = 'espeak'
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
bos_token (str, optional, defaults to "<s>") —
The beginning of sentence token.
eos_token (str, optional, defaults to "</s>") —
The end of sentence token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
do_phonemize (bool, optional, defaults to True) —
Whether the tokenizer should phonemize the input or not. do_phonemize should be set to False only if a
sequence of phonemes is passed to the tokenizer.
phonemizer_lang (str, optional, defaults to "en-us") —
The language of the phoneme set into which the tokenizer should phonemize the input text.
phonemizer_backend (str, optional, defaults to "espeak") —
The backend phonemization library to be used by the phonemizer library. Defaults to espeak-ng.
See the phonemizer package for more information.
**kwargs —
Additional keyword arguments passed along to PreTrainedTokenizer
Constructs a Wav2Vec2PhonemeCTC tokenizer.
This tokenizer inherits from PreTrainedTokenizer which contains some of the main methods. Users should refer to
the superclass for more information regarding such methods.
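Since this reference has no usage snippet for the tokenizer itself, here is a minimal, hedged sketch. The checkpoint name is an assumption (any wav2vec2 phoneme checkpoint that ships a tokenizer works), and on-the-fly phonemization needs the phonemizer package with an espeak backend installed:
from transformers import Wav2Vec2PhonemeCTCTokenizer
tokenizer = Wav2Vec2PhonemeCTCTokenizer.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")
# text is phonemized on the fly (phonemizer_lang defaults to "en-us")
print(tokenizer("Hello how are you").input_ids)
# if the input is already a sequence of phonemes, skip phonemization
tokenizer_no_phon = Wav2Vec2PhonemeCTCTokenizer.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft", do_phonemize=False)
print(tokenizer_no_phon("h ə l oʊ").input_ids)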
__call__
<
source
>
(
text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None
text_pair: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None
text_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None
text_pair_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
is_split_into_words: bool = False
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
**kwargs
)
→
BatchEncoding
Parameters
text (str, List[str], List[List[str]], optional) —
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair (str, List[str], List[List[str]], optional) —
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_target (str, List[str], List[List[str]], optional) —
The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a
list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized),
you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair_target (str, List[str], List[List[str]], optional) —
The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a
list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized),
you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
add_special_tokens (bool, optional, defaults to True) —
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) —
If set to a number along with max_length, the overflowing tokens returned when
return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) —
Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the
tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace)
which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value. Requires padding to be activated.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5 (Volta).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) —
Whether to return token type IDs. If left to the default, will return the token type IDs according to
the specific tokenizer’s default, defined by the return_outputs attribute.
What are token type IDs?
return_attention_mask (bool, optional) —
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_overflowing_tokens (bool, optional, defaults to False) —
Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch
of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead
of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) —
Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) —
Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast, if using
Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) —
Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) —
Whether or not to print more information and warnings.
**kwargs — passed to the self.tokenize() method
Returns
BatchEncoding
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
What are input IDs?
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or
if “token_type_ids” is in self.model_input_names).
What are token type IDs?
attention_mask — List of indices specifying which tokens should be attended to by the model (when
return_attention_mask=True or if “attention_mask” is in self.model_input_names).
What are attention masks?
overflowing_tokens — List of overflowing tokens sequences (when a max_length is specified and
return_overflowing_tokens=True).
num_truncated_tokens — Number of tokens truncated (when a max_length is specified and
return_overflowing_tokens=True).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying
regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length — The length of the inputs (when return_length=True)
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences.
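A short, hedged sketch of calling the tokenizer on a small batch; the checkpoint name is an assumption, and padding relies on the pad token defined in the vocabulary:
from transformers import Wav2Vec2PhonemeCTCTokenizer
tokenizer = Wav2Vec2PhonemeCTCTokenizer.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")
# pad to the longest phoneme sequence in the batch and return PyTorch tensors
batch = tokenizer(["hello how are you", "fine thank you"], padding=True, return_tensors="pt")
print(batch.input_ids.shape)  # (batch_size, max_phoneme_sequence_length)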
batch_decode
<
source
>
(
sequences: typing.Union[typing.List[int], typing.List[typing.List[int]], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')]
skip_special_tokens: bool = False
clean_up_tokenization_spaces: bool = None
output_char_offsets: bool = False
**kwargs
)
→
List[str] or ~models.wav2vec2.tokenization_wav2vec2_phoneme.Wav2Vec2PhonemeCTCTokenizerOutput
Parameters
sequences (Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]) —
List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) —
Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) —
Whether or not to clean up the tokenization spaces.
output_char_offsets (bool, optional, defaults to False) —
Whether or not to output character offsets. Character offsets can be used in combination with the
sampling rate and model downsampling rate to compute the time-stamps of transcribed characters.
Please take a look at the example of ~models.wav2vec2.tokenization_wav2vec2.decode to better
understand how to make use of output_char_offsets.
~models.wav2vec2_phoneme.tokenization_wav2vec2_phoneme.batch_decode works analogously with phonemes
and batched output.
kwargs (additional keyword arguments, optional) —
Will be passed to the underlying model specific decode method.
Returns
List[str] or ~models.wav2vec2.tokenization_wav2vec2_phoneme.Wav2Vec2PhonemeCTCTokenizerOutput
The
decoded sentence. Will be a
~models.wav2vec2.tokenization_wav2vec2_phoneme.Wav2Vec2PhonemeCTCTokenizerOutput when
output_char_offsets == True.
Convert a list of lists of token ids into a list of strings by calling decode.
decode
<
source
>
(
token_ids: typing.Union[int, typing.List[int], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')]
skip_special_tokens: bool = False
clean_up_tokenization_spaces: bool = None
output_char_offsets: bool = False
**kwargs
)
→
str or ~models.wav2vec2.tokenization_wav2vec2_phoneme.Wav2Vec2PhonemeCTCTokenizerOutput
Parameters
token_ids (Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]) —
List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) —
Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) —
Whether or not to clean up the tokenization spaces.
output_char_offsets (bool, optional, defaults to False) —
Whether or not to output character offsets. Character offsets can be used in combination with the
sampling rate and model downsampling rate to compute the time-stamps of transcribed characters.
Please take a look at the example of ~models.wav2vec2.tokenization_wav2vec2.decode to better
understand how to make use of output_char_offsets.
~models.wav2vec2_phoneme.tokenization_wav2vec2_phoneme.batch_decode works the same way with
phonemes.
kwargs (additional keyword arguments, optional) —
Will be passed to the underlying model specific decode method.
Returns
str or ~models.wav2vec2.tokenization_wav2vec2_phoneme.Wav2Vec2PhonemeCTCTokenizerOutput
The decoded
sentence. Will be a ~models.wav2vec2.tokenization_wav2vec2_phoneme.Wav2Vec2PhonemeCTCTokenizerOutput
when output_char_offsets == True.
Converts a sequence of ids into a string, using the tokenizer and vocabulary with options to remove special
tokens and clean up tokenization spaces.
Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).
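A minimal, hedged round-trip sketch; in practice the ids would come from the argmax over a Wav2Vec2ForCTC logits output rather than from the tokenizer itself, and the checkpoint name is an assumption:
from transformers import Wav2Vec2PhonemeCTCTokenizer
tokenizer = Wav2Vec2PhonemeCTCTokenizer.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")
ids = tokenizer("hello how are you").input_ids
print(tokenizer.decode(ids))  # back to the phoneme string
# pass output_char_offsets=True to also get per-phoneme offsets in the returned object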
phonemize
<
source
>
(
text: str
phonemizer_lang: typing.Optional[str] = None
)
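phonemize converts raw text into a phoneme string. A hedged sketch follows; the checkpoint name is an assumption, and the phonemizer package with an espeak backend must be installed:
from transformers import Wav2Vec2PhonemeCTCTokenizer
tokenizer = Wav2Vec2PhonemeCTCTokenizer.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")
print(tokenizer.phonemize("Hello how are you", phonemizer_lang="en-us"))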
NLLB-MOE
Overview
The NLLB model was presented in No Language Left Behind: Scaling Human-Centered Machine Translation by Marta R. Costa-jussà, James Cross, Onur Çelebi,
Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula,
Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews,
Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers,
Safiyyah Saleem, Holger Schwenk, and Jeff Wang.
The abstract of the paper is the following:
Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today.
However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the
200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by
first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed
at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of
Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training
improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using
a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety.
Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system.
Tips:
M2M100ForConditionalGeneration is the base model for both NLLB and NLLB-MoE.
NLLB-MoE is very similar to the NLLB model, but its feed-forward layer is based on the implementation of SwitchTransformers.
The tokenizer is the same as for the NLLB models.
This model was contributed by Arthur Zucker.
The original code can be found here.
Implementation differences with SwitchTransformers
The biggest difference is the way the tokens are routed. NLLB-MoE uses a `top-2-gate`, which means that for each input, only the two experts with the
highest predicted probabilities from the gating network are selected, and the remaining experts are ignored. In `SwitchTransformers`, only the top-1 probabilities are computed,
which means that tokens are less likely to be forwarded to an expert. Moreover, if a token is not routed to any expert, `SwitchTransformers` still adds its unmodified hidden
states (kind of like a residual connection), while such tokens are masked in `NLLB`'s top-2 routing mechanism.
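As an illustration only (this is not the library implementation), the difference between the two gating schemes can be sketched as follows:
import torch
router_logits = torch.randn(4, 8)                  # (num_tokens, num_experts), dummy values
probs = torch.softmax(router_logits, dim=-1)
# top-1 gating (SwitchTransformers-style): each token is sent to a single expert
top1_prob, top1_expert = probs.max(dim=-1)
# top-2 gating (NLLB-MoE-style): each token is sent to its two most likely experts,
# and the two expert outputs are later combined with these probabilities
top2_probs, top2_experts = probs.topk(2, dim=-1)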
Generating with NLLB-MoE
The available checkpoints require around 350GB of storage. Make sure to use `accelerate` if you do not have enough RAM on your machine.
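For example, the following hedged sketch (device_map="auto" and torch_dtype are generic from_pretrained arguments handled by Accelerate, not options specific to NLLB-MoE) loads the model spread across the available devices:
import torch
from transformers import AutoModelForSeq2SeqLM
# let Accelerate place the layers on the available GPUs, then CPU RAM and, if needed, disk
model = AutoModelForSeq2SeqLM.from_pretrained(
    "facebook/nllb-moe-54b", device_map="auto", torch_dtype=torch.float16
)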
While generating the target text, set the forced_bos_token_id to the target language id. The following
example shows how to translate English to French using the facebook/nllb-moe-54b model.
Note that we’re using the BCP-47 code for French, fra_Latn. See here
for the list of all BCP-47 codes in the Flores 200 dataset.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-moe-54b")
article = "Previously, Ring's CEO, Jamie Siminoff, remarked the company started when his doorbell wasn't audible from his shop in his garage."
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(
... **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"], max_length=50
... )
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
"Auparavant, le PDG de Ring, Jamie Siminoff, a fait remarquer que la société avait commencé lorsque sa sonnette n'était pas audible depuis son magasin dans son garage."
Generating from any other language than English
English (eng_Latn) is set as the default language from which to translate. In order to specify that you’d like to translate from a different language,
you should specify the BCP-47 code in the src_lang keyword argument of the tokenizer initialization.
See the example below for a translation from Romanian to German:
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b", src_lang="ron_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-moe-54b")
article = "Şeful ONU spune că nu există o soluţie militară în Siria"
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(
... **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["deu_Latn"], max_length=30
... )
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
Documentation resources
Translation task guide
Summarization task guide
NllbMoeConfig
class transformers.NllbMoeConfig
<
source
>
(
vocab_size = 128112
max_position_embeddings = 1024
encoder_layers = 12
encoder_ffn_dim = 4096
encoder_attention_heads = 16
decoder_layers = 12
decoder_ffn_dim = 4096
decoder_attention_heads = 16
encoder_layerdrop = 0.05
decoder_layerdrop = 0.05
use_cache = True
is_encoder_decoder = True
activation_function = 'relu'
d_model = 1024
dropout = 0.1
attention_dropout = 0.1
activation_dropout = 0.0
init_std = 0.02
decoder_start_token_id = 2
scale_embedding = True
router_bias = False
router_dtype = 'float32'
router_ignore_padding_tokens = False
num_experts = 128
expert_capacity = 64
encoder_sparse_step = 4
decoder_sparse_step = 4
router_z_loss_coef = 0.001
router_aux_loss_coef = 0.001
second_expert_policy = 'all'
normalize_router_prob_before_dropping = False
batch_prioritized_routing = False
moe_eval_capacity_token_fraction = 1.0
moe_token_dropout = 0.2
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
output_router_logits = False
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 128112) —
Vocabulary size of the NllbMoe model. Defines the number of different tokens that can be represented by the
input_ids passed when calling NllbMoeModel.
d_model (int, optional, defaults to 1024) —
Dimensionality of the layers and the pooler layer.
encoder_layers (int, optional, defaults to 12) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 12) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in decoder.
encoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in encoder.
activation_function (str or function, optional, defaults to "relu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
classifier_dropout (float, optional, defaults to 0.0) —
The dropout ratio for classifier.
max_position_embeddings (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (float, optional, defaults to 0.05) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.05) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
second_expert_policy (str, optional, defaults to "all") —
The policy used to sample the probability of routing each token to a second expert.
normalize_router_prob_before_dropping (bool, optional, defaults to False) —
Whether or not to normalize the router probabilities before applying a mask based on the experts capacity
(capacity dropping).
batch_prioritized_routing (bool, optional, defaults to False) —
Whether or not to order the tokens by their router probabilities before capacity dropping. This means that
the tokens that have the highest probabilities will be routed before other tokens that might be further in
the sequence.
moe_eval_capacity_token_fraction (float, optional, defaults to 1.0) —
Fraction of tokens used as capacity during validation; if set to a negative value, the training capacity is used.
Should be in the range (0.0, 1.0].
num_experts (int, optional, defaults to 128) —
Number of experts for each NllbMoeSparseMlp layer.
expert_capacity (int, optional, defaults to 64) —
Number of tokens that can be stored in each expert.
encoder_sparse_step (int, optional, defaults to 4) —
Frequency of the sparse layers in the encoder. 4 means that one out of 4 layers will be sparse.
decoder_sparse_step (int, optional, defaults to 4) —
Frequency of the sparse layers in the decoder. 4 means that one out of 4 layers will be sparse.
router_dtype (str, optional, defaults to "float32") —
The dtype used for the routers. It is preferable to keep the dtype to "float32" as specified in the
selective precision discussion in the paper.
router_ignore_padding_tokens (bool, optional, defaults to False) —
Whether to ignore padding tokens when routing; if True, padding tokens are not routed to any
expert.
router_bias (bool, optional, defaults to False) —
Whether or not the classifier of the router should have a bias.
moe_token_dropout (float, optional, defaults to 0.2) —
Masking rate for MoE expert output masking (EOM), which is implemented via a Dropout2d on the expert
outputs.
output_router_logits (bool, optional, defaults to False) —
Whether or not to return the router logits. Only set to True to get the auxiliary loss when training.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a NllbMoeModel. It is used to instantiate an
NLLB-MoE model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the NLLB-MoE
facebook/nllb-moe-54b architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import NllbMoeModel, NllbMoeConfig
# Initializing a NllbMoe facebook/nllb-moe-54b style configuration
configuration = NllbMoeConfig()
# Initializing a model from the facebook/nllb-moe-54b style configuration
model = NllbMoeModel(configuration)
# Accessing the model configuration
configuration = model.config
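As a small, illustrative sketch (the tiny configuration values below are arbitrary and chosen only so the randomly initialized model fits in memory; they are not real checkpoint settings), enabling output_router_logits exposes the router logits needed for the auxiliary losses:
import torch
from transformers import NllbMoeConfig, NllbMoeForConditionalGeneration
# tiny, hypothetical configuration for demonstration purposes only
config = NllbMoeConfig(
    vocab_size=1000, d_model=64, encoder_layers=2, decoder_layers=2,
    encoder_ffn_dim=128, decoder_ffn_dim=128,
    encoder_attention_heads=2, decoder_attention_heads=2,
    num_experts=4, encoder_sparse_step=2, decoder_sparse_step=2,
    output_router_logits=True,
)
model = NllbMoeForConditionalGeneration(config)
outputs = model(input_ids=torch.tensor([[5, 6, 7, 2]]), decoder_input_ids=torch.tensor([[2, 5, 6]]))
# outputs.encoder_router_logits and outputs.decoder_router_logits feed the auxiliary and z-losses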
NllbMoeTop2Router
class transformers.NllbMoeTop2Router
<
source
>
(
config: NllbMoeConfig
)
Router in which each token chooses its top-2 expert assignment.
This router uses the same mechanism as in NLLB-MoE from the fairseq repository. Items are sorted by router_probs
and then routed to their choice of expert until the expert’s expert_capacity is reached. There is no guarantee
that each token is processed by an expert, or that each expert receives at least one token.
The router combining weights are also returned to make sure that the states that are not updated will be masked.
route_tokens
<
source
>
(
router_logits: Tensor
input_dtype: dtype = torch.float32
padding_mask: typing.Optional[torch.LongTensor] = None
)
Computes the dispatch_mask and the dispatch_weights for each expert. The masks are adapted to the expert
capacity.
forward
<
source
>
(
hidden_states: Tensor
padding_mask: typing.Optional[torch.LongTensor] = None
)
→
top_1_mask (torch.Tensor of shape (batch_size, sequence_length))
Parameters
hidden_states (torch.Tensor) —
(batch_size, sequence_length, hidden_dim) from which router probabilities are computed.
Returns
top_1_mask (torch.Tensor of shape (batch_size, sequence_length))
Index tensor of shape [batch_size, sequence_length] corresponding to the expert selected for each token
using the top1 probabilities of the router.
router_probabilities (torch.Tensor of shape (batch_size, sequence_length, num_experts)):
Tensor of shape (batch_size, sequence_length, num_experts) corresponding to the probabilities for each
token and expert. Used for routing tokens to experts.
router_logits (torch.Tensor of shape (batch_size, sequence_length, num_experts)):
Logits tensor of shape (batch_size, sequence_length, num_experts) corresponding to raw router logits.
This is used later for computing the router z-loss.
The hidden states are reshaped to simplify the computation of the router probabilities (the combining weights for
each expert).
NllbMoeSparseMLP
class transformers.NllbMoeSparseMLP
<
source
>
(
config: NllbMoeConfig
ffn_dim: int
expert_class: Module = <class 'transformers.models.nllb_moe.modeling_nllb_moe.NllbMoeDenseActDense'>
)
Implementation of the NLLB-MoE sparse MLP module.
forward
<
source
>
(
hidden_states: Tensor
padding_mask: typing.Optional[torch.Tensor] = False
)
→
hidden_states (torch.Tensor of shape (batch_size, sequence_length, hidden_dim))
Parameters
hidden_states (torch.Tensor of shape (batch_size, sequence_length, hidden_dim)) —
The hidden states
padding_mask (torch.Tensor, optional, defaults to False) —
Attention mask. Can be in the causal form or not.
Returns
hidden_states (torch.Tensor of shape (batch_size, sequence_length, hidden_dim))
Updated hidden states
router_logits (torch.Tensor of shape (batch_size, sequence_length, num_experts)):
Needed for computing the loss
The goal of this forward pass is to have the same number of operations as the equivalent NllbMoeDenseActDense
(mlp) layer. This means that all of the hidden states should be processed at most twice (since we are using a
top-2 gating mechanism). This keeps the complexity at O(batch_size x sequence_length x hidden_dim)
instead of O(num_experts x batch_size x sequence_length x hidden_dim).
1. Get the router_probs from the router. The shape of the router_mask is (batch_size * sequence_length, num_experts) and corresponds to the boolean version of the router_probs. The inputs are masked using the
router_mask.
2. Dispatch the hidden_states to their associated experts. The router probabilities are used to weight the
contribution of each expert when updating the masked hidden states.
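A toy, self-contained sketch of this dispatch-and-combine step (not the library code) could look like the following, where only each token's top-2 experts receive a non-zero combining weight:
import torch
# toy example: 5 tokens, hidden_dim 8, 4 experts
hidden_states = torch.randn(5, 8)
router_probs = torch.softmax(torch.randn(5, 4), dim=-1)
# keep only each token's top-2 experts (NLLB-MoE style combining weights)
top2_probs, top2_idx = router_probs.topk(2, dim=-1)
combining_weights = torch.zeros_like(router_probs).scatter(-1, top2_idx, top2_probs)
experts = [torch.nn.Linear(8, 8) for _ in range(4)]
out = torch.zeros_like(hidden_states)
for idx, expert in enumerate(experts):
    mask = combining_weights[:, idx] > 0        # tokens dispatched to this expert
    if mask.any():
        out[mask] += combining_weights[mask, idx].unsqueeze(-1) * expert(hidden_states[mask])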
NllbMoeModel
class transformers.NllbMoeModel
<
source
>
(
config: NllbMoeConfig
)
Parameters
config (NllbMoeConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare NllbMoe Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
output_router_logits: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqMoEModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Returns
transformers.modeling_outputs.Seq2SeqMoEModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqMoEModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NllbMoeConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
decoder_router_logits (tuple(torch.FloatTensor), optional, returned when output_router_logits=True is passed or when config.add_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts).
Router logits of the decoder model, useful to compute the auxiliary loss for Mixture of Experts models.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
encoder_router_logits (tuple(torch.FloatTensor), optional, returned when output_router_logits=True is passed or when config.add_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts).
Router logits of the encoder model, useful to compute the auxiliary loss and the z_loss for the sparse
modules.
The NllbMoeModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, NllbMoeModel
tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/random-nllb-moe-2-experts")
model = NllbMoeModel.from_pretrained("hf-internal-testing/random-nllb-moe-2-experts")
input_ids = tokenizer(
... "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
# preprocess: Prepend decoder_input_ids with start token which is pad token for NllbMoeModel
decoder_input_ids = model._shift_right(decoder_input_ids)
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
NllbMoeForConditionalGeneration
class transformers.NllbMoeForConditionalGeneration
<
source
>
(
config: NllbMoeConfig
)
Parameters
config (NllbMoeConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The NllbMoe Model with a language modeling head. Can be used for summarization.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
output_router_logits: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqMoEOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
NllbMoe uses the eos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state, of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_router_logits (bool, optional) —
Whether or not to return the logits of all the routers. They are useful for computing the router loss, and
should not be returned during inference.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.Seq2SeqMoEOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqMoEOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NllbMoeConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
decoder_router_logits (tuple(torch.FloatTensor), optional, returned when output_router_logits=True is passed or when config.add_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts).
Router logits of the decoder model, useful to compute the auxiliary loss for Mixture of Experts models.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
encoder_router_logits (tuple(torch.FloatTensor), optional, returned when output_router_logits=True is passed or when config.add_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts).
Router logits of the encoder model, useful to compute the auxiliary loss and z_loss for Mixture of Experts
models.
The NllbMoeForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Translation example:
from transformers import AutoTokenizer, NllbMoeForConditionalGeneration
model = NllbMoeForConditionalGeneration.from_pretrained("facebook/nllb-moe-54b")
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b")
text_to_translate = "Life is like a box of chocolates"
model_inputs = tokenizer(text_to_translate, return_tensors="pt")
# translate to French
gen_tokens = model.generate(**model_inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"])
print(tokenizer.batch_decode(gen_tokens, skip_special_tokens=True))
PhoBERT
Overview
The PhoBERT model was proposed in PhoBERT: Pre-trained language models for Vietnamese by Dat Quoc Nguyen and Anh Tuan Nguyen.
The abstract from the paper is the following:
We present PhoBERT with two versions, PhoBERT-base and PhoBERT-large, the first public large-scale monolingual
language models pre-trained for Vietnamese. Experimental results show that PhoBERT consistently outperforms the recent
best pre-trained multilingual model XLM-R (Conneau et al., 2020) and improves the state-of-the-art in multiple
Vietnamese-specific NLP tasks including Part-of-speech tagging, Dependency parsing, Named-entity recognition and
Natural language inference.
Example of use:
import torch
from transformers import AutoModel, AutoTokenizer
phobert = AutoModel.from_pretrained("vinai/phobert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
# INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
line = "Tôi là sinh_viên trường đại_học Công_nghệ ."
input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
... features = phobert(input_ids) # Models outputs are now tuples
# With TensorFlow 2.0+:
# from transformers import TFAutoModel
# phobert = TFAutoModel.from_pretrained("vinai/phobert-base")
This model was contributed by dqnguyen. The original code can be found here.
PhobertTokenizer
class transformers.PhobertTokenizer
<
source
>
(
vocab_file
merges_file
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
Construct a PhoBERT tokenizer. Based on Byte-Pair-Encoding.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
add_from_file
<
source
>
(
f
)
Loads a pre-existing dictionary from a text file and adds its symbols to this instance.
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A PhoBERT sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
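For example (an illustrative snippet, not from the original documentation), the helper can be called directly on ids encoded without special tokens:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
ids_a = tokenizer.encode("Tôi là sinh_viên", add_special_tokens=False)
ids_b = tokenizer.encode("trường đại_học Công_nghệ", add_special_tokens=False)
single = tokenizer.build_inputs_with_special_tokens(ids_a)        # <s> A </s>
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)   # <s> A </s></s> B </s>
print(tokenizer.convert_ids_to_tokens(pair))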
convert_tokens_to_string
<
source
>
(
tokens
)
Converts a sequence of tokens (string) in a single string.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. PhoBERT does not
make use of token type ids, therefore a list of zeros is returned.
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
DETA
Overview
The DETA model was proposed in NMS Strikes Back by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.
DETA (short for Detection Transformers with Assignment) improves Deformable DETR by replacing the one-to-one bipartite Hungarian matching loss
with one-to-many label assignments used in traditional detectors with non-maximum suppression (NMS). This leads to significant gains of up to 2.5 mAP.
The abstract from the paper is the following:
Detection Transformer (DETR) directly transforms queries to unique objects by using one-to-one bipartite matching during training and enables end-to-end object detection. Recently, these models have surpassed traditional detectors on COCO with undeniable elegance. However, they differ from traditional detectors in multiple designs, including model architecture and training schedules, and thus the effectiveness of one-to-one matching is not fully understood. In this work, we conduct a strict comparison between the one-to-one Hungarian matching in DETRs and the one-to-many label assignments in traditional detectors with non-maximum supervision (NMS). Surprisingly, we observe one-to-many assignments with NMS consistently outperform standard one-to-one matching under the same setting, with a significant gain of up to 2.5 mAP. Our detector that trains Deformable-DETR with traditional IoU-based label assignment achieved 50.2 COCO mAP within 12 epochs (1x schedule) with ResNet50 backbone, outperforming all existing traditional or transformer-based detectors in this setting. On multiple datasets, schedules, and architectures, we consistently show bipartite matching is unnecessary for performant detection transformers. Furthermore, we attribute the success of detection transformers to their expressive transformer architecture.
Tips:
One can use DetaImageProcessor to prepare images and optional targets for the model.
DETA overview. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETA.
Demo notebooks for DETA can be found here.
See also: Object detection task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
DetaConfig
class transformers.DetaConfig
<
source
>
(
backbone_config = None
num_queries = 900
max_position_embeddings = 2048
encoder_layers = 6
encoder_ffn_dim = 2048
encoder_attention_heads = 8
decoder_layers = 6
decoder_ffn_dim = 1024
decoder_attention_heads = 8
encoder_layerdrop = 0.0
is_encoder_decoder = True
activation_function = 'relu'
d_model = 256
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
init_xavier_std = 1.0
return_intermediate = True
auxiliary_loss = False
position_embedding_type = 'sine'
num_feature_levels = 5
encoder_n_points = 4
decoder_n_points = 4
two_stage = True
two_stage_num_proposals = 300
with_box_refine = True
assign_first_stage = True
class_cost = 1
bbox_cost = 5
giou_cost = 2
mask_loss_coefficient = 1
dice_loss_coefficient = 1
bbox_loss_coefficient = 5
giou_loss_coefficient = 2
eos_coefficient = 0.1
focal_alpha = 0.25
**kwargs
)
Parameters
backbone_config (PretrainedConfig or dict, optional, defaults to ResNetConfig()) —
The configuration of the backbone model.
num_queries (int, optional, defaults to 900) —
Number of object queries, i.e. detection slots. This is the maximal number of objects DetaModel can
detect in a single image. In case two_stage is set to True, we use two_stage_num_proposals instead.
d_model (int, optional, defaults to 256) —
Dimension of the layers.
encoder_layers (int, optional, defaults to 6) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 6) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 1024) —
Dimension of the “intermediate” (often named feed-forward) layer in the decoder.
encoder_ffn_dim (int, optional, defaults to 2048) —
Dimension of the “intermediate” (often named feed-forward) layer in the encoder.
activation_function (str or function, optional, defaults to "relu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
init_xavier_std (float, optional, defaults to 1) —
The scaling factor used for the Xavier initialization gain in the HM Attention map module.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
auxiliary_loss (bool, optional, defaults to False) —
Whether auxiliary decoding losses (loss at each decoder layer) are to be used.
position_embedding_type (str, optional, defaults to "sine") —
Type of position embeddings to be used on top of the image features. One of "sine" or "learned".
class_cost (float, optional, defaults to 1) —
Relative weight of the classification error in the Hungarian matching cost.
bbox_cost (float, optional, defaults to 5) —
Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost.
giou_cost (float, optional, defaults to 2) —
Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost.
mask_loss_coefficient (float, optional, defaults to 1) —
Relative weight of the Focal loss in the panoptic segmentation loss.
dice_loss_coefficient (float, optional, defaults to 1) —
Relative weight of the DICE/F-1 loss in the panoptic segmentation loss.
bbox_loss_coefficient (float, optional, defaults to 5) —
Relative weight of the L1 bounding box loss in the object detection loss.
giou_loss_coefficient (float, optional, defaults to 2) —
Relative weight of the generalized IoU loss in the object detection loss.
eos_coefficient (float, optional, defaults to 0.1) —
Relative classification weight of the ‘no-object’ class in the object detection loss.
num_feature_levels (int, optional, defaults to 5) —
The number of input feature levels.
encoder_n_points (int, optional, defaults to 4) —
The number of sampled keys in each feature level for each attention head in the encoder.
decoder_n_points (int, optional, defaults to 4) —
The number of sampled keys in each feature level for each attention head in the decoder.
two_stage (bool, optional, defaults to True) —
Whether to apply a two-stage deformable DETR, where the region proposals are also generated by a variant of
DETA, which are further fed into the decoder for iterative bounding box refinement.
two_stage_num_proposals (int, optional, defaults to 300) —
The number of region proposals to be generated, in case two_stage is set to True.
with_box_refine (bool, optional, defaults to True) —
Whether to apply iterative bounding box refinement, where each decoder layer refines the bounding boxes
based on the predictions from the previous layer.
focal_alpha (float, optional, defaults to 0.25) —
Alpha parameter in the focal loss.
This is the configuration class to store the configuration of a DetaModel. It is used to instantiate a DETA
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the DETA
SenseTime/deformable-detr architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import DetaConfig, DetaModel
# Initializing a DETA SenseTime/deformable-detr style configuration
configuration = DetaConfig()
# Initializing a model (with random weights) from the SenseTime/deformable-detr style configuration
model = DetaModel(configuration)
# Accessing the model configuration
configuration = model.config
to_dict
<
source
>
(
)
Serializes this instance to a Python dictionary. Overrides the default to_dict().
Returns: Dict[str, any] — a dictionary of all the attributes that make up this configuration instance.
DetaImageProcessor
class transformers.DetaImageProcessor
<
source
>
(
format: typing.Union[str, transformers.models.deta.image_processing_deta.AnnotionFormat] = <AnnotionFormat.COCO_DETECTION: 'coco_detection'>
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float]] = None
image_std: typing.Union[float, typing.List[float]] = None
do_pad: bool = True
**kwargs
)
Parameters
format (str, optional, defaults to "coco_detection") —
Data format of the annotations. One of “coco_detection” or “coco_panoptic”.
do_resize (bool, optional, defaults to True) —
Controls whether to resize the image’s (height, width) dimensions to the specified size. Can be
overridden by the do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 800, "longest_edge": 1333}) —
Size of the image’s (height, width) dimensions after resizing. Can be overridden by the size parameter in
the preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image.
do_rescale (bool, optional, defaults to True) —
Controls whether to rescale the image by the specified scale rescale_factor. Can be overridden by the
do_rescale parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the
preprocess method.
do_normalize (bool, optional, defaults to True) —
Controls whether to normalize the image. Can be overridden by the do_normalize parameter in the
preprocess method.
image_mean (float or List[float], optional, defaults to IMAGENET_DEFAULT_MEAN) —
Mean values to use when normalizing the image. Can be a single value or a list of values, one for each
channel. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_DEFAULT_STD) —
Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one
for each channel. Can be overridden by the image_std parameter in the preprocess method.
do_pad (bool, optional, defaults to True) —
Controls whether to pad the image to the largest image in a batch and create a pixel mask. Can be
overridden by the do_pad parameter in the preprocess method.
Constructs a Deformable DETR image processor.
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
annotations: typing.Union[typing.List[typing.Dict], typing.List[typing.List[typing.Dict]], NoneType] = None
return_segmentation_masks: bool = None
masks_path: typing.Union[str, pathlib.Path, NoneType] = None
do_resize: typing.Optional[bool] = None
size: typing.Union[typing.Dict[str, int], NoneType] = None
resample = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Union[int, float, NoneType] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_pad: typing.Optional[bool] = None
format: typing.Union[str, transformers.models.deta.image_processing_deta.AnnotionFormat, NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image or batch of images to preprocess.
annotations (List[Dict] or List[List[Dict]], optional) —
List of annotations associated with the image or batch of images. If the annotation is for object
detection, the annotations should be a dictionary with the following keys:
“image_id” (int): The image id.
“annotations” (List[Dict]): List of annotations for an image. Each annotation should be a
dictionary. An image can have no annotations, in which case the list should be empty.
If the annotation is for segmentation, the annotations should be a dictionary with the following keys:
“image_id” (int): The image id.
“segments_info” (List[Dict]): List of segments for an image. Each segment should be a dictionary.
An image can have no segments, in which case the list should be empty.
“file_name” (str): The file name of the image.
return_segmentation_masks (bool, optional, defaults to self.return_segmentation_masks) —
Whether to return segmentation masks.
masks_path (str or pathlib.Path, optional) —
Path to the directory containing the segmentation masks.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing.
resample (PILImageResampling, optional, defaults to self.resample) —
Resampling filter to use when resizing the image.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image.
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to use when rescaling the image.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Mean to use when normalizing the image.
image_std (float or List[float], optional, defaults to self.image_std) —
Standard deviation to use when normalizing the image.
do_pad (bool, optional, defaults to self.do_pad) —
Whether to pad the image.
format (str or AnnotionFormat, optional, defaults to self.format) —
Format of the annotations.
return_tensors (str or TensorType, optional, defaults to self.return_tensors) —
Type of tensors to return. If None, will return the list of images.
data_format (str or ChannelDimension, optional, defaults to self.data_format) —
The channel dimension format of the image. If not provided, it will be the same as the input image.
Preprocess an image or a batch of images so that it can be used by the model.
post_process_object_detection
<
source
>
(
outputs
threshold: float = 0.5
target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None
nms_threshold: float = 0.7
)
→
List[Dict]
Parameters
outputs (DetrObjectDetectionOutput) —
Raw outputs of the model.
threshold (float, optional, defaults to 0.5) —
Score threshold to keep object detection predictions.
target_sizes (torch.Tensor or List[Tuple[int, int]], optional) —
Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size
(height, width) of each image in the batch. If left to None, predictions will not be resized.
nms_threshold (float, optional, defaults to 0.7) —
NMS threshold.
Returns
List[Dict]
A list of dictionaries, each dictionary containing the scores, labels and boxes for an image
in the batch as predicted by the model.
Converts the output of DetaForObjectDetection into final bounding boxes in (top_left_x, top_left_y,
bottom_right_x, bottom_right_y) format. Only supports PyTorch.
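For example, a hedged end-to-end sketch (assuming a public DETA checkpoint such as jozhang97/deta-resnet-50; any DETA checkpoint should work) of running detection and post-processing:
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, DetaForObjectDetection
# assumption: this public DETA checkpoint is available on the Hub
processor = AutoImageProcessor.from_pretrained("jozhang97/deta-resnet-50")
model = DetaForObjectDetection.from_pretrained("jozhang97/deta-resnet-50")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# rescale boxes back to the original image size and keep predictions above the score threshold
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())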
DetaModel
class transformers.DetaModel
<
source
>
(
config: DetaConfig
)
Parameters
config (DetaConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare DETA Model (consisting of a backbone and encoder-decoder Transformer) outputting raw hidden-states without
any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
pixel_values
pixel_mask = None
decoder_attention_mask = None
encoder_outputs = None
inputs_embeds = None
decoder_inputs_embeds = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.deta.modeling_deta.DetaModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it.
Pixel values can be obtained using AutoImageProcessor. See AutoImageProcessor.__call__() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, num_queries), optional) —
Not used by default. Can be used to mask object queries.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you
can choose to directly pass a flattened representation of an image.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) —
Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an
embedded representation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.deta.modeling_deta.DetaModelOutput or tuple(torch.FloatTensor)
A transformers.models.deta.modeling_deta.DetaModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DetaConfig) and inputs.
init_reference_points (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Initial reference points sent through the Transformer decoder.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_queries, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
intermediate_hidden_states (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, hidden_size)) — Stacked intermediate hidden states (output of each layer of the decoder).
intermediate_reference_points (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, 4)) — Stacked intermediate reference points (reference points of each layer of the decoder).
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, num_queries, hidden_size). Hidden-states of the decoder at the output of each layer
plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, num_queries, num_queries). Attentions weights of the decoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_queries, num_heads, 4, 4).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_queries, num_heads, 4, 4).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
enc_outputs_class (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels), optional, returned when config.with_box_refine=True and config.two_stage=True) — Predicted bounding boxes scores where the top config.two_stage_num_proposals scoring bounding boxes are
picked as region proposals in the first stage. Output of bounding box binary classification (i.e.
foreground and background).
enc_outputs_coord_logits (torch.FloatTensor of shape (batch_size, sequence_length, 4), optional, returned when config.with_box_refine=True and config.two_stage=True) — Logits of predicted bounding boxes coordinates in the first stage.
The DetaModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, DetaModel
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("jozhang97/deta-swin-large-o365")
model = DetaModel.from_pretrained("jozhang97/deta-swin-large-o365", two_stage=False)
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 900, 256]
DetaForObjectDetection
class transformers.DetaForObjectDetection
(
config: DetaConfig
)
Parameters
config (DetaConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
DETA Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on top, for tasks
such as COCO detection.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
pixel_values
pixel_mask = None
decoder_attention_mask = None
encoder_outputs = None
inputs_embeds = None
decoder_inputs_embeds = None
labels = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.deta.modeling_deta.DetaObjectDetectionOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it.
Pixel values can be obtained using AutoImageProcessor. See AutoImageProcessor.__call__() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, num_queries), optional) —
Not used by default. Can be used to mask object queries.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you
can choose to directly pass a flattened representation of an image.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) —
Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an
embedded representation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (List[Dict] of len (batch_size,), optional) —
Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the
following 2 keys: ‘class_labels’ and ‘boxes’ (the class labels and bounding boxes of an image in the batch
respectively). The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,) and the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4).
Returns
transformers.models.deta.modeling_deta.DetaObjectDetectionOutput or tuple(torch.FloatTensor)
A transformers.models.deta.modeling_deta.DetaObjectDetectionOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DetaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided) — Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a
bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized
scale-invariant IoU loss.
loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging.
logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries.
pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding
possible padding). You can use ~DetaImageProcessor.post_process_object_detection to retrieve the
unnormalized bounding boxes.
auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True)
and labels are provided. It is a list of dictionaries containing the two above keys (logits and
pred_boxes) for each decoder layer.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, num_queries, hidden_size). Hidden-states of the decoder at the output of each layer
plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, num_queries, num_queries). Attentions weights of the decoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_queries, num_heads, 4, 4).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_heads, 4, 4). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average
in the self-attention heads.
intermediate_hidden_states (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, hidden_size)) — Stacked intermediate hidden states (output of each layer of the decoder).
intermediate_reference_points (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, 4)) — Stacked intermediate reference points (reference points of each layer of the decoder).
init_reference_points (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Initial reference points sent through the Transformer decoder.
enc_outputs_class (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels), optional, returned when config.with_box_refine=True and config.two_stage=True) — Predicted bounding boxes scores where the top config.two_stage_num_proposals scoring bounding boxes are
picked as region proposals in the first stage. Output of bounding box binary classification (i.e.
foreground and background).
enc_outputs_coord_logits (torch.FloatTensor of shape (batch_size, sequence_length, 4), optional, returned when config.with_box_refine=True and config.two_stage=True) — Logits of predicted bounding boxes coordinates in the first stage.
The DetaForObjectDetection forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, DetaForObjectDetection
from PIL import Image
import requests
import torch
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("jozhang97/deta-swin-large")
model = DetaForObjectDetection.from_pretrained("jozhang97/deta-swin-large")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# convert outputs (bounding boxes and class logits) to COCO API
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[
... 0
... ]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
... box = [round(i, 2) for i in box.tolist()]
... print(
... f"Detected {model.config.id2label[label.item()]} with confidence "
... f"{round(score.item(), 3)} at location {box}"
... )
Detected cat with confidence 0.683 at location [345.85, 23.68, 639.86, 372.83]
Detected cat with confidence 0.683 at location [8.8, 52.49, 316.93, 473.45]
Detected remote with confidence 0.568 at location [40.02, 73.75, 175.96, 117.33]
Detected remote with confidence 0.546 at location [333.68, 77.13, 370.12, 187.51]
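For training, labels are passed as a list of dicts with class_labels and boxes in normalized (center_x, center_y, width, height) format. Here is a minimal, hedged sketch with illustrative random targets (not a recipe for real training):
from transformers import AutoImageProcessor, DetaForObjectDetection
from PIL import Image
import requests
import torch

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("jozhang97/deta-swin-large")
model = DetaForObjectDetection.from_pretrained("jozhang97/deta-swin-large")

inputs = image_processor(images=image, return_tensors="pt")
# one image with two illustrative targets: class indices and normalized (center_x, center_y, width, height) boxes
labels = [{"class_labels": torch.tensor([17, 17]), "boxes": torch.rand(2, 4)}]
outputs = model(**inputs, labels=labels)
print(outputs.loss, outputs.loss_dict)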
CamemBERT
Overview
The CamemBERT model was proposed in CamemBERT: a Tasty French Language Model by
Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la
Clergerie, Djamé Seddah, and Benoît Sagot. It is based on Facebook’s RoBERTa model released in 2019. It is a model
trained on 138GB of French text.
The abstract from the paper is the following:
Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available
models have either been trained on English data or on the concatenation of data in multiple languages. This makes
practical use of such models —in all languages except English— very limited. Aiming to address this issue for French,
we release CamemBERT, a French version of the Bi-directional Encoders for Transformers (BERT). We measure the
performance of CamemBERT compared to multilingual models in multiple downstream tasks, namely part-of-speech tagging,
dependency parsing, named-entity recognition, and natural language inference. CamemBERT improves the state of the art
for most of the tasks considered. We release the pretrained model for CamemBERT hoping to foster research and
downstream applications for French NLP.
Tips:
This implementation is the same as RoBERTa. Refer to the documentation of RoBERTa for usage examples
as well as the information relative to the inputs and outputs.
This model was contributed by camembert. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
CamembertConfig
class transformers.CamembertConfig
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
position_embedding_type = 'absolute'
use_cache = True
classifier_dropout = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the CamemBERT model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling CamembertModel or TFCamembertModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling CamembertModel or TFCamembertModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
This is the configuration class to store the configuration of a CamembertModel or a TFCamembertModel. It is
used to instantiate a Camembert model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Camembert
camembert-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import CamembertConfig, CamembertModel
# Initializing a Camembert camembert-base style configuration
configuration = CamembertConfig()
# Initializing a model (with random weights) from the camembert-base style configuration
model = CamembertModel(configuration)
# Accessing the model configuration
configuration = model.config
CamembertTokenizer
class transformers.CamembertTokenizer
(
vocab_file
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
additional_special_tokens = ['<s>NOTUSED', '</s>NOTUSED']
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) —
Additional special tokens used by the tokenizer.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assumes that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
sp_model (SentencePieceProcessor) —
The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Construct a CamemBERT tokenizer, adapted from RobertaTokenizer and XLNetTokenizer. Based on
SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
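As a hedged sketch of the sp_model_kwargs subword-regularization options described above (the sampling values are illustrative):
from transformers import CamembertTokenizer

# enable SentencePiece subword regularization; tokenization then varies between calls
tokenizer = CamembertTokenizer.from_pretrained(
    "camembert-base",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)
print(tokenizer.tokenize("J'aime le camembert"))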
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A CamemBERT sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
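A small, hedged sketch of what this looks like with the camembert-base checkpoint (the French sentences are illustrative):
from transformers import CamembertTokenizer

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")

ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("J'aime le camembert"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("C'est délicieux"))

single = tokenizer.build_inputs_with_special_tokens(ids_a)       # <s> X </s>
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)  # <s> A </s></s> B </s>
print(tokenizer.convert_ids_to_tokens(single))
print(tokenizer.convert_ids_to_tokens(pair))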
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
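For instance, a hedged sketch (reusing the camembert-base checkpoint) of where the special tokens sit in an already-encoded sequence:
from transformers import CamembertTokenizer

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")

ids = tokenizer.encode("J'aime le camembert")  # already contains <s> ... </s>
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
print(mask)  # 1 marks special tokens, 0 marks regular sequence tokens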
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. CamemBERT, like
RoBERTa, does not make use of token type ids, therefore a list of zeros is returned.
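A short, hedged sketch confirming the behavior described above (sentences are illustrative):
from transformers import CamembertTokenizer

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")

ids_a = tokenizer.encode("J'aime le camembert", add_special_tokens=False)
ids_b = tokenizer.encode("C'est délicieux", add_special_tokens=False)
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
print(set(token_type_ids))  # {0}: CamemBERT does not use token type ids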
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
CamembertTokenizerFast
class transformers.CamembertTokenizerFast
(
vocab_file = None
tokenizer_file = None
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
additional_special_tokens = ['<s>NOTUSED', '</s>NOTUSED']
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) —
Additional special tokens used by the tokenizer.
Construct a “fast” CamemBERT tokenizer (backed by HuggingFace’s tokenizers library). Adapted from
RobertaTokenizer and XLNetTokenizer. Based on
BPE.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A CamemBERT sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. CamemBERT, like
RoBERTa, does not make use of token type ids, therefore a list of zeros is returned.
CamembertModel
class transformers.CamembertModel
(
config
add_pooling_layer = True
)
Parameters
config (CamembertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare CamemBERT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention Is
All You Need (https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder, the model needs to be initialized with the is_decoder argument of the configuration set to
True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and
add_cross_attention set to True; an encoder_hidden_states input is then expected in the forward pass.
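A minimal, hedged sketch of the decoder setup described above (the newly added cross-attention weights are randomly initialized):
from transformers import CamembertConfig, CamembertModel

config = CamembertConfig.from_pretrained("camembert-base")
config.is_decoder = True
config.add_cross_attention = True
# pretrained weights are loaded where they match; cross-attention layers start from random init
decoder = CamembertModel.from_pretrained("camembert-base", config=config)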
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CamembertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The CamembertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, CamembertModel
import torch
tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = CamembertModel.from_pretrained("camembert-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
CamembertForCausalLM
class transformers.CamembertForCausalLM
(
config
)
Parameters
config (CamembertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CamemBERT Model with a language modeling head on top for CLM fine-tuning.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Tuple[typing.Tuple[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are
ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CamembertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The CamembertForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, CamembertForCausalLM, AutoConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("camembert-base")
config = AutoConfig.from_pretrained("camembert-base")
config.is_decoder = True
model = CamembertForCausalLM.from_pretrained("camembert-base", config=config)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
CamembertForMaskedLM
class transformers.CamembertForMaskedLM
(
config
)
Parameters
config (CamembertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CamemBERT Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
kwargs (Dict[str, any], optional, defaults to {}) —
Used to hide legacy arguments that have been deprecated.
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CamembertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CamembertForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, CamembertForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = CamembertForMaskedLM.from_pretrained("camembert-base")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
tokenizer.decode(predicted_token_id)
' Paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-<mask> tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(outputs.loss.item(), 2)
0.1
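The inputs_embeds argument described above lets you skip the model’s embedding lookup entirely. As a minimal sketch (not part of the official example), you could compute the word embeddings yourself with the model’s get_input_embeddings() accessor and pass them in, reusing the tokenizer, model and inputs from the example:
embedding_layer = model.get_input_embeddings()      # the word-embedding matrix used internally
inputs_embeds = embedding_layer(inputs.input_ids)   # shape: (batch_size, sequence_length, hidden_size)

with torch.no_grad():
    outputs = model(inputs_embeds=inputs_embeds, attention_mask=inputs.attention_mask)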
CamembertForSequenceClassification
class transformers.CamembertForSequenceClassification
( config )
Parameters
config (CamembertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CamemBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss). If
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CamembertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CamembertForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, CamembertForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
>>> model = CamembertForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_id = logits.argmax().item()
>>> model.config.id2label[predicted_class_id]
'optimism'

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = CamembertForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", num_labels=num_labels)

>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
0.08
Example of multi-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, CamembertForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
>>> model = CamembertForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", problem_type="multi_label_classification")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = CamembertForSequenceClassification.from_pretrained(
...     "cardiffnlp/twitter-roberta-base-emotion", num_labels=num_labels, problem_type="multi_label_classification"
... )

>>> labels = torch.sum(
...     torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss
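The head_mask argument documented above can be used to switch off individual attention heads at inference time. A minimal, illustrative sketch (not from the original docs), reusing the model and inputs from the example:
num_layers = model.config.num_hidden_layers
num_heads = model.config.num_attention_heads

head_mask = torch.ones(num_layers, num_heads)  # 1 keeps a head, 0 disables it
head_mask[0, 0] = 0.0                          # e.g. silence head 0 of the first layer

with torch.no_grad():
    logits = model(**inputs, head_mask=head_mask).logits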
CamembertForMultipleChoice
class transformers.CamembertForMultipleChoice
( config )
Parameters
config (CamembertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CamemBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CamembertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CamembertForMultipleChoice forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, CamembertForMultipleChoice
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("camembert-base")
>>> model = CamembertForMultipleChoice.from_pretrained("camembert-base")

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> labels = torch.tensor(0).unsqueeze(0)  # choice0 is correct (according to Wikipedia ;)), batch size 1

>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels)  # batch size is 1

>>> # the linear classifier still needs to be trained
>>> loss = outputs.loss
>>> logits = outputs.logits
CamembertForTokenClassification
class transformers.CamembertForTokenClassification
( config )
Parameters
config (CamembertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CamemBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CamembertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CamembertForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, CamembertForTokenClassification
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
>>> model = CamembertForTokenClassification.from_pretrained("Jean-Baptiste/roberta-large-ner-english")

>>> inputs = tokenizer(
...     "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_token_class_ids = logits.argmax(-1)

>>> # Note that tokens are classified rather than input words, which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
>>> predicted_tokens_classes
['O', 'ORG', 'ORG', 'O', 'O', 'O', 'O', 'O', 'LOC', 'O', 'LOC', 'LOC']

>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
0.01
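Since the model classifies tokens rather than words, you may want to map the predictions back onto words. A minimal sketch (illustrative only, assuming the checkpoint ships a fast tokenizer so that word_ids() is available), reusing inputs and predicted_tokens_classes from the example:
word_ids = inputs.word_ids(batch_index=0)  # one word index per token, None for special tokens
word_labels = {}
for token_index, word_index in enumerate(word_ids):
    if word_index is not None and word_index not in word_labels:
        # keep the label predicted for the first sub-token of each word
        word_labels[word_index] = predicted_tokens_classes[token_index]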
CamembertForQuestionAnswering
class transformers.CamembertForQuestionAnswering
( config )
Parameters
config (CamembertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CamemBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CamembertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CamembertForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, CamembertForQuestionAnswering
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
>>> model = CamembertForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()

>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
' puppet'

>>> # target is "nice puppet"
>>> target_start_index = torch.tensor([14])
>>> target_end_index = torch.tensor([15])

>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss
>>> round(loss.item(), 2)
0.86
TFCamembertModel
class transformers.TFCamembertModel
( *args, **kwargs )
Parameters
config (CamembertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare CamemBERT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
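As an illustration of the three calling conventions above, a minimal sketch (not part of the official docs):
from transformers import AutoTokenizer, TFCamembertModel

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = TFCamembertModel.from_pretrained("camembert-base")
batch = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1) keyword arguments, as with a PyTorch model
outputs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])

# 2) a list of tensors, in the order given in the docstring
outputs = model([batch["input_ids"], batch["attention_mask"]])

# 3) a dictionary keyed by the input names
outputs = model({"input_ids": batch["input_ids"], "attention_mask": batch["attention_mask"]})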
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (CamembertConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you’re often better off
averaging or pooling the sequence of hidden-states for the whole input sequence (see the pooling sketch after the example below).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The TFCamembertModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, TFCamembertModel
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("camembert-base")
>>> model = TFCamembertModel.from_pretrained("camembert-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)

>>> last_hidden_states = outputs.last_hidden_state
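As mentioned for pooler_output above, a mean over the non-padding token states is often a better sentence representation than the pooled first token. A minimal sketch (illustrative, not part of the official example), reusing inputs and outputs from above:
mask = tf.cast(inputs["attention_mask"], tf.float32)[:, :, None]   # (batch, seq_len, 1)
summed = tf.reduce_sum(outputs.last_hidden_state * mask, axis=1)   # sum over non-padding tokens
counts = tf.maximum(tf.reduce_sum(mask, axis=1), 1e-9)             # avoid division by zero
mean_pooled = summed / counts                                      # (batch, hidden_size)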
TFCamembertForCausalLM
class transformers.TFCamembertForCausalLM
( *args, **kwargs )
Parameters
config (CamembertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CamemBERT Model with a language modeling head on top for CLM fine-tuning.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (CamembertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The TFCamembertForCausalLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, TFCamembertForCausalLM
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("camembert-base")
>>> model = TFCamembertForCausalLM.from_pretrained("camembert-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> logits = outputs.logits
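To illustrate the past_key_values and use_cache arguments described above, here is a minimal sketch of feeding only the newest token back in once the cache is filled. This is illustrative only; it assumes the checkpoint is loaded with is_decoder=True so that a key/value cache is actually returned:
from transformers import AutoTokenizer, TFCamembertForCausalLM
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
# is_decoder=True so the model returns its key/value cache (illustrative setting)
model = TFCamembertForCausalLM.from_pretrained("camembert-base", is_decoder=True)

# First pass over the full prompt, asking for the cache
inputs = tokenizer("Hello, my dog is", return_tensors="tf")
outputs = model(**inputs, use_cache=True)
past_key_values = outputs.past_key_values

# Greedily pick the next token from the last position's logits
next_token = tf.cast(tf.argmax(outputs.logits[:, -1, :], axis=-1), tf.int32)[:, None]

# Second pass: feed only the new token plus the cache instead of the full sequence
outputs = model(input_ids=next_token, past_key_values=past_key_values, use_cache=True)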
TFCamembertForMaskedLM
class transformers.TFCamembertForMaskedLM
( *args, **kwargs )
Parameters
config (CamembertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CamemBERT Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (CamembertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFCamembertForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, TFCamembertForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = TFCamembertForMaskedLM.from_pretrained("camembert-base")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
tokenizer.decode(predicted_token_id)
' Paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-<mask> tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(float(outputs.loss), 2)
0.1
TFCamembertForSequenceClassification
class transformers.TFCamembertForSequenceClassification
(
*args
**kwargs
)
Parameters
config (CamembertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CamemBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
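For illustration, here is a hedged sketch (not part of the original documentation) of the two input formats together with model.fit(). The toy sentences, the num_labels value, and the learning rate are assumptions, and relying on compile() without an explicit loss only falls back to the model's internal loss on recent Transformers versions:
import tensorflow as tf
from transformers import AutoTokenizer, TFCamembertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = TFCamembertForSequenceClassification.from_pretrained("camembert-base", num_labels=2)

batch = tokenizer(["J'adore ce film.", "Ce film est terrible."], padding=True, return_tensors="tf")

# Format 1: all inputs as keyword arguments (like the PyTorch models).
outputs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])

# Format 2: a single dict in the first positional argument (the format Keras prefers).
outputs = model({"input_ids": batch["input_ids"], "attention_mask": batch["attention_mask"]})

# The same dict format is what makes fit() "just work"; without an explicit loss in
# compile(), recent Transformers versions fall back to the model's internal loss.
labels = tf.constant([1, 0])
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5))
model.fit(dict(batch), labels, epochs=1)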
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (CamembertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFCamembertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFCamembertForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
model = TFCamembertForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
model.config.id2label[predicted_class_id]
'optimism'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFCamembertForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
round(float(loss), 2)
0.08
TFCamembertForMultipleChoice
class transformers.TFCamembertForMultipleChoice
(
*args
**kwargs
)
Parameters
config (CamembertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CamemBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1],
where num_choices is the size of the second dimension of the input tensors. (See input_ids above.)
Returns
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (CamembertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFCamembertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFCamembertForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = TFCamembertForMultipleChoice.from_pretrained("camembert-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
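As a hedged addition (not part of the original snippet), the most likely choice can be read off the logits:
predicted_choice = int(tf.math.argmax(outputs.logits, axis=-1)[0])  # 0 -> choice0, 1 -> choice1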
TFCamembertForTokenClassification
class transformers.TFCamembertForTokenClassification
(
*args
**kwargs
)
Parameters
config (CamembertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CamemBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (CamembertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFCamembertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFCamembertForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("ydshieh/roberta-large-ner-english")
model = TFCamembertForTokenClassification.from_pretrained("ydshieh/roberta-large-ner-english")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
predicted_tokens_classes
['O', 'ORG', 'ORG', 'O', 'O', 'O', 'O', 'O', 'LOC', 'O', 'LOC', 'LOC']
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
round(float(loss), 2)
0.01
TFCamembertForQuestionAnswering
class transformers.TFCamembertForQuestionAnswering
(
*args
**kwargs
)
Parameters
config (CamembertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CamemBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
start_positions: np.ndarray | tf.Tensor | None = None
end_positions: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (CamembertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFCamembertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFCamembertForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("ydshieh/roberta-base-squad2")
model = TFCamembertForQuestionAnswering.from_pretrained("ydshieh/roberta-base-squad2")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
' puppet'
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
round(float(loss), 2)
0.86
MPNet
Overview
The MPNet model was proposed in MPNet: Masked and Permuted Pre-training for Language Understanding by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
MPNet adopts a novel pre-training method, named masked and permuted language modeling, to inherit the advantages of
masked language modeling and permuted language modeling for natural language understanding.
The abstract from the paper is the following:
BERT adopts masked language modeling (MLM) for pre-training and is one of the most successful pre-training models.
Since BERT neglects dependency among predicted tokens, XLNet introduces permuted language modeling (PLM) for
pre-training to address this problem. However, XLNet does not leverage the full position information of a sentence and
thus suffers from position discrepancy between pre-training and fine-tuning. In this paper, we propose MPNet, a novel
pre-training method that inherits the advantages of BERT and XLNet and avoids their limitations. MPNet leverages the
dependency among predicted tokens through permuted language modeling (vs. MLM in BERT), and takes auxiliary position
information as input to make the model see a full sentence and thus reducing the position discrepancy (vs. PLM in
XLNet). We pre-train MPNet on a large-scale dataset (over 160GB text corpora) and fine-tune on a variety of
down-streaming tasks (GLUE, SQuAD, etc). Experimental results show that MPNet outperforms MLM and PLM by a large
margin, and achieves better results on these tasks compared with previous state-of-the-art pre-trained methods (e.g.,
BERT, XLNet, RoBERTa) under the same model setting.
Tips:
MPNet doesn't have token_type_ids, so you don't need to indicate which token belongs to which segment; just
separate your segments with the separation token tokenizer.sep_token (or </s>), as in the short sketch below.
The original code can be found here.
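A minimal sketch (assumed, not taken from the original docs) of encoding a text pair: the two segments are joined with the separator token rather than being marked with token_type_ids:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
encoded = tokenizer("How many people live in Paris?", "About 2.1 million people do.")
# The pair is rendered roughly as "<s> question </s></s> answer </s>"; exact casing depends on the checkpoint.
print(tokenizer.decode(encoded["input_ids"]))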
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
MPNetConfig
class transformers.MPNetConfig
(
vocab_size = 30527
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
initializer_range = 0.02
layer_norm_eps = 1e-12
relative_attention_num_buckets = 32
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30527) —
Vocabulary size of the MPNet model. Defines the number of different tokens that can be represented by the
input_ids passed when calling MPNetModel or TFMPNetModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
relative_attention_num_buckets (int, optional, defaults to 32) —
The number of buckets to use for each attention layer.
This is the configuration class to store the configuration of a MPNetModel or a TFMPNetModel. It is used to
instantiate a MPNet model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the MPNet
microsoft/mpnet-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import MPNetModel, MPNetConfig
# Initializing a MPNet mpnet-base style configuration
configuration = MPNetConfig()
# Initializing a model from the mpnet-base style configuration
model = MPNetModel(configuration)
# Accessing the model configuration
configuration = model.config
MPNetTokenizer
class transformers.MPNetTokenizer
(
vocab_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '[UNK]'
pad_token = '<pad>'
mask_token = '<mask>'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (bool, optional, defaults to True) —
Whether or not to do basic tokenization before WordPiece.
never_split (Iterable, optional) —
Collection of tokens which will never be split during tokenization. Only has an effect when
do_basic_tokenize=True
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pre-training. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original BERT).
This tokenizer inherits from BertTokenizer which contains most of the methods. Users should refer to the
superclass for more information regarding methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
list of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A MPNet sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
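A hedged sketch (not from the original docs) of calling this method directly on pre-tokenized IDs; in normal use, tokenizer(text) or tokenizer(text, text_pair) applies it for you:
from transformers import MPNetTokenizer

tokenizer = MPNetTokenizer.from_pretrained("microsoft/mpnet-base")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("how are you"))
single = tokenizer.build_inputs_with_special_tokens(ids_a)       # <s> A </s>
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)  # <s> A </s></s> B </s>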
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of ids.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Set to True if the token list is already formatted with special tokens for the model
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model methods.
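For illustration, a small assumed sketch: with already_has_special_tokens=True the mask marks the added <s>/</s> positions with 1 and ordinary tokens with 0:
from transformers import MPNetTokenizer

tokenizer = MPNetTokenizer.from_pretrained("microsoft/mpnet-base")
ids = tokenizer("hello world")["input_ids"]
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
# e.g. [1, 0, 0, 1]: 1 for the added <s>/</s>, 0 for the sequence tokens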
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of ids.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Creates a mask from the two sequences passed to be used in a sequence-pair classification task. MPNet does not
make use of token type ids, therefore a list of zeros is returned.
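A short assumed illustration: whether one or two sequences are passed, the returned mask is all zeros:
from transformers import MPNetTokenizer

tokenizer = MPNetTokenizer.from_pretrained("microsoft/mpnet-base")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("world"))
tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
# all zeros, one per position of <s> A </s></s> B </s>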
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
MPNetTokenizerFast
class transformers.MPNetTokenizerFast
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '[UNK]'
pad_token = '<pad>'
mask_token = '<mask>'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original BERT).
Construct a “fast” MPNet tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
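A brief assumed usage sketch: the fast tokenizer is a drop-in replacement for the slow one and additionally exposes offset mappings from the tokenizers backend:
from transformers import MPNetTokenizerFast

tokenizer = MPNetTokenizerFast.from_pretrained("microsoft/mpnet-base")
enc = tokenizer("Hello world", return_offsets_mapping=True)
print(enc["input_ids"])
print(enc["offset_mapping"])  # character span of each token in the original text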
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of ids.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs
Returns
List[int]
List of zeros.
Creates a mask from the two sequences passed to be used in a sequence-pair classification task. MPNet does not
make use of token type ids, therefore a list of zeros is returned.
MPNetModel
class transformers.MPNetModel
(
config
add_pooling_layer = True
)
Parameters
config (MPNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MPNet Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MPNetConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MPNetModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MPNetModel
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = MPNetModel.from_pretrained("microsoft/mpnet-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
MPNetForMaskedLM
class transformers.MPNetForMaskedLM
(
config
)
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MPNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The MPNetForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this method, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MPNetForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = MPNetForMaskedLM.from_pretrained("microsoft/mpnet-base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
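As a rough illustration of the labels convention described above (indices set to -100 are ignored), the loss returned by the model can be reproduced with a plain CrossEntropyLoss, whose default ignore_index is -100. This is only a sketch and assumes the same inputs as the example:
import torch
from torch.nn import CrossEntropyLoss
from transformers import AutoTokenizer, MPNetForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = MPNetForMaskedLM.from_pretrained("microsoft/mpnet-base")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)

with torch.no_grad():
    logits = model(**inputs).logits

# CrossEntropyLoss skips positions labelled -100, so only the [MASK] position contributes
loss = CrossEntropyLoss()(logits.view(-1, model.config.vocab_size), labels.view(-1))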
MPNetForSequenceClassification
class transformers.MPNetForSequenceClassification(config)
Parameters
config (MPNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MPNet Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MPNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The MPNetForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this method, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, MPNetForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = MPNetForSequenceClassification.from_pretrained("microsoft/mpnet-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = MPNetForSequenceClassification.from_pretrained("microsoft/mpnet-base", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, MPNetForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = MPNetForSequenceClassification.from_pretrained("microsoft/mpnet-base", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = MPNetForSequenceClassification.from_pretrained(
    "microsoft/mpnet-base", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
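When adapting the classification head to a new task, num_labels and (optionally) id2label/label2id can be passed to from_pretrained() before fine-tuning. A minimal, illustrative training step; the three topic labels below are hypothetical and not part of the checkpoint:
import torch
from transformers import AutoTokenizer, MPNetForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = MPNetForSequenceClassification.from_pretrained(
    "microsoft/mpnet-base",
    num_labels=3,
    id2label={0: "sports", 1: "politics", 2: "tech"},  # hypothetical label set
    label2id={"sports": 0, "politics": 1, "tech": 2},
)

inputs = tokenizer("The match ended in a draw", return_tensors="pt")
labels = torch.tensor([0])  # "sports"

# one optimization step on a single example, for illustration only
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()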
MPNetForMultipleChoice
class transformers.MPNetForMultipleChoice(config)
Parameters
config (MPNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MPNet Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MPNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The MPNetForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this method, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MPNetForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = MPNetForMultipleChoice.from_pretrained("microsoft/mpnet-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
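To make the (batch_size, num_choices, sequence_length) input layout described above concrete, here is a sketch (not from the original docstring) that encodes one prompt with two candidate endings and picks the highest-scoring choice; with the untrained classification head the prediction is not meaningful:
import torch
from transformers import AutoTokenizer, MPNetForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = MPNetForMultipleChoice.from_pretrained("microsoft/mpnet-base")

prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choices = ["It is eaten with a fork and a knife.", "It is eaten while held in the hand."]

# one (prompt, choice) pair per candidate, then add the batch dimension
encoding = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # (1, num_choices, sequence_length)

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_choices)
predicted_choice = choices[logits.argmax(dim=-1).item()]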
MPNetForTokenClassification
class transformers.MPNetForTokenClassification(config)
Parameters
config (MPNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MPNet Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MPNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The MPNetForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this method, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MPNetForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = MPNetForTokenClassification.from_pretrained("microsoft/mpnet-base")
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
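Since tokens rather than words are classified (see the note in the example above), sub-token predictions usually need to be aggregated back to words. A minimal sketch of one common strategy, keeping the label of each word's first sub-token; it assumes a fast tokenizer so that word_ids() is available:
import torch
from transformers import AutoTokenizer, MPNetForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = MPNetForTokenClassification.from_pretrained("microsoft/mpnet-base")

inputs = tokenizer("HuggingFace is a company based in Paris and New York", return_tensors="pt")
with torch.no_grad():
    predictions = model(**inputs).logits.argmax(-1)[0]

# map sub-token predictions to words; special tokens have word_id None and are skipped
word_labels = {}
for idx, word_id in enumerate(inputs.word_ids(batch_index=0)):
    if word_id is not None and word_id not in word_labels:
        word_labels[word_id] = model.config.id2label[predictions[idx].item()]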
MPNetForQuestionAnswering
class transformers.MPNetForQuestionAnswering(config)
Parameters
config (MPNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MPNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MPNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The MPNetForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this method, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MPNetForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = MPNetForQuestionAnswering.from_pretrained("microsoft/mpnet-base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
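A sketch (not part of the original docstring) of turning the start/end logits into an answer string, constraining the end index to come at or after the start index; with the untrained span head of microsoft/mpnet-base the decoded span is not meaningful until the model is fine-tuned:
import torch
from transformers import AutoTokenizer, MPNetForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = MPNetForQuestionAnswering.from_pretrained("microsoft/mpnet-base")

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# most likely start, then the most likely end at or after it
start = int(outputs.start_logits[0].argmax())
end = int(outputs.end_logits[0, start:].argmax()) + start
answer = tokenizer.decode(inputs.input_ids[0, start : end + 1])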
TFMPNetModel
class transformers.TFMPNetModel(*args, **kwargs)
Parameters
config (MPNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MPNet Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can
just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: Optional[Union[np.array, tf.Tensor]] = None
position_ids: Optional[Union[np.array, tf.Tensor]] = None
head_mask: Optional[Union[np.array, tf.Tensor]] = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
) → transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MPNetConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFMPNetModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this method, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFMPNetModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = TFMPNetModel.from_pretrained("microsoft/mpnet-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
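As a sketch of the three input formats listed above (keyword arguments, a list in docstring order, or a dictionary keyed by input names), the following calls are expected to be equivalent; MPNet does not use token_type_ids, so only input_ids and attention_mask are shown:
from transformers import AutoTokenizer, TFMPNetModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = TFMPNetModel.from_pretrained("microsoft/mpnet-base")
encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1) keyword arguments
out_a = model(input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"])
# 2) a list in the order given in the docstring
out_b = model([encoding["input_ids"], encoding["attention_mask"]])
# 3) a dictionary keyed by input names
out_c = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})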
TFMPNetForMaskedLM
class transformers.TFMPNetForMaskedLM(*args, **kwargs)
Parameters
config (MPNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MPNet Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can
just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: tf.Tensor | None = None
training: bool = False
) → transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MPNetConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFMPNetForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this method, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFMPNetForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = TFMPNetForMaskedLM.from_pretrained("microsoft/mpnet-base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-[MASK] tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
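A sketch (not from the original docstring) of decoding the top prediction at the [MASK] position, written as a self-contained TensorFlow snippet:
import tensorflow as tf
from transformers import AutoTokenizer, TFMPNetForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = TFMPNetForMaskedLM.from_pretrained("microsoft/mpnet-base")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
logits = model(**inputs).logits

# index of the [MASK] token, then the highest-scoring vocabulary id at that position
mask_index = int(tf.where(inputs.input_ids[0] == tokenizer.mask_token_id)[0, 0])
predicted_id = int(tf.math.argmax(logits[0, mask_index]))
print(tokenizer.decode([predicted_id]))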
TFMPNetForSequenceClassification
class transformers.TFMPNetForSequenceClassification(*args, **kwargs)
Parameters
config (MPNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MPNet Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can
just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: Optional[Union[np.array, tf.Tensor]] = None
position_ids: Optional[Union[np.array, tf.Tensor]] = None
head_mask: Optional[Union[np.array, tf.Tensor]] = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: tf.Tensor | None = None
training: bool = False
) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MPNetConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFMPNetForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this method, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFMPNetForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = TFMPNetForSequenceClassification.from_pretrained("microsoft/mpnet-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFMPNetForSequenceClassification.from_pretrained("microsoft/mpnet-base", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
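Fine-tuning is typically done through Keras. A rough sketch, assuming a recent Transformers version where compiling without a loss argument makes the model fall back to its built-in loss; the two-example dataset is purely illustrative:
import tensorflow as tf
from transformers import AutoTokenizer, TFMPNetForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = TFMPNetForSequenceClassification.from_pretrained("microsoft/mpnet-base", num_labels=2)

texts = ["great movie", "terrible movie"]  # toy data, for illustration only
labels = [1, 0]
encodings = tokenizer(texts, padding=True, return_tensors="tf")
dataset = tf.data.Dataset.from_tensor_slices((dict(encodings), labels)).batch(2)

# no loss passed to compile(): the model uses its internal loss computation
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5))
model.fit(dataset, epochs=1)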
TFMPNetForMultipleChoice
class transformers.TFMPNetForMultipleChoice(*args, **kwargs)
Parameters
config (MPNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MPNet Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can
just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: tf.Tensor | None = None
training: bool = False
) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1],
where num_choices is the size of the second dimension of the input tensors. (See input_ids above.)
Returns
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MPNetConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFMPNetForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFMPNetForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = TFMPNetForMultipleChoice.from_pretrained("microsoft/mpnet-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
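For completeness, here is a minimal sketch of computing the multiple-choice loss by additionally passing labels. The label value below is purely illustrative (it assumes choice0 is the correct continuation), and the randomly initialized classification head means the loss is not meaningful before fine-tuning:
labels = tf.constant([0])  # index of the correct choice for the single example in the batch
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)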
TFMPNetForTokenClassification
class transformers.TFMPNetForTokenClassification
(
*args
**kwargs
)
Parameters
config (MPNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MPNet Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
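As a quick illustration of the three possibilities above, here is a minimal sketch using the microsoft/mpnet-base checkpoint (the same checkpoint used in the example further below); the token classification head is randomly initialized, so the outputs are only illustrative:
from transformers import AutoTokenizer, TFMPNetForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = TFMPNetForTokenClassification.from_pretrained("microsoft/mpnet-base")
enc = tokenizer("HuggingFace is based in Paris", return_tensors="tf")
# 1) a single Tensor with input_ids only
out_single = model(enc["input_ids"])
# 2) a list of input Tensors, in the order given in the docstring
out_list = model([enc["input_ids"], enc["attention_mask"]])
# 3) a dictionary keyed by the input names given in the docstring
out_dict = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})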
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: tf.Tensor | None = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MPNetConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFMPNetForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFMPNetForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = TFMPNetForTokenClassification.from_pretrained("microsoft/mpnet-base")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
TFMPNetForQuestionAnswering
class transformers.TFMPNetForQuestionAnswering
(
*args
**kwargs
)
Parameters
config (MPNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MPNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: Optional[Union[np.array, tf.Tensor]] = None
position_ids: Optional[Union[np.array, tf.Tensor]] = None
head_mask: Optional[Union[np.array, tf.Tensor]] = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
start_positions: tf.Tensor | None = None
end_positions: tf.Tensor | None = None
training: bool = False
**kwargs
)
→
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MPNetConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFMPNetForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFMPNetForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
model = TFMPNetForQuestionAnswering.from_pretrained("microsoft/mpnet-base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
Speech2Text
Overview
The Speech2Text model was proposed in fairseq S2T: Fast Speech-to-Text Modeling with fairseq by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. It’s a
transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively. Speech2Text has been fine-tuned on several datasets for ASR and ST:
LibriSpeech, CoVoST 2, MuST-C.
This model was contributed by valhalla. The original code can be found here.
Inference
Speech2Text is a speech model that accepts a float tensor of log-mel filter-bank features extracted from the speech
signal. It’s a transformer-based seq2seq model, so the transcripts/translations are generated autoregressively. The
generate() method can be used for inference.
The Speech2TextFeatureExtractor class is responsible for extracting the log-mel filter-bank
features. The Speech2TextProcessor wraps Speech2TextFeatureExtractor and
Speech2TextTokenizer into a single instance to both extract the input features and decode the
predicted token ids.
The feature extractor depends on torchaudio and the tokenizer depends on sentencepiece, so be sure to
install those packages before running the examples. You can either install them as extra speech dependencies with
pip install transformers"[speech, sentencepiece]" or install the packages separately with pip install torchaudio sentencepiece. torchaudio also requires the development version of the libsndfile package, which can be installed via a system package manager. On Ubuntu it can
be installed as follows: apt install libsndfile1-dev
ASR and Speech Translation
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
transcription
['mister quilter is the apostle of the middle classes and we are glad to welcome his gospel']
Multilingual speech translation
For multilingual speech translation models, eos_token_id is used as the decoder_start_token_id and
the target language id is forced as the first generated token. To force the target language id as the first
generated token, pass the forced_bos_token_id parameter to the generate() method. The following
example shows how to translate English speech to French text using the facebook/s2t-medium-mustc-multilingual-st
checkpoint.
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-mustc-multilingual-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-mustc-multilingual-st")
ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(
... inputs["input_features"],
... attention_mask=inputs["attention_mask"],
... forced_bos_token_id=processor.tokenizer.lang_code_to_id["fr"],
... )
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
translation
["(Vidéo) Si M. Kilder est l'apossible des classes moyennes, et nous sommes heureux d'être accueillis dans son évangile."]
See the model hub to look for Speech2Text checkpoints.
Speech2TextConfig
class transformers.Speech2TextConfig
(
vocab_size = 10000
encoder_layers = 12
encoder_ffn_dim = 2048
encoder_attention_heads = 4
decoder_layers = 6
decoder_ffn_dim = 2048
decoder_attention_heads = 4
encoder_layerdrop = 0.0
decoder_layerdrop = 0.0
use_cache = True
is_encoder_decoder = True
activation_function = 'relu'
d_model = 256
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
decoder_start_token_id = 2
scale_embedding = True
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
max_source_positions = 6000
max_target_positions = 1024
num_conv_layers = 2
conv_kernel_sizes = (5, 5)
conv_channels = 1024
input_feat_per_channel = 80
input_channels = 1
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 10000) —
Vocabulary size of the Speech2Text model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling Speech2TextModel
d_model (int, optional, defaults to 256) —
Dimensionality of the layers and the pooler layer.
encoder_layers (int, optional, defaults to 12) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 6) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 4) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 4) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 2048) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the decoder.
encoder_ffn_dim (int, optional, defaults to 2048) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the encoder.
activation_function (str or function, optional, defaults to "relu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
max_source_positions (int, optional, defaults to 6000) —
The maximum sequence length of log-mel filter-bank features that this model might ever be used with.
max_target_positions (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
num_conv_layers (int, optional, defaults to 2) —
Number of 1D convolutional layers in the conv module.
conv_kernel_sizes (Tuple[int], optional, defaults to (5, 5)) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the conv module. The length
of conv_kernel_sizes has to match num_conv_layers.
conv_channels (int, optional, defaults to 1024) —
An integer defining the number of output channels of each convolution layers except the final one in the
conv module.
input_feat_per_channel (int, optional, defaults to 80) —
An integer specifying the size of the feature vector. This is also the dimension of the log-mel filter-bank
features.
input_channels (int, optional, defaults to 1) —
An integer specifying the number of input channels of the input feature vector.
This is the configuration class to store the configuration of a Speech2TextModel. It is used to instantiate an
Speech2Text model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Speech2Text
facebook/s2t-small-librispeech-asr architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import Speech2TextConfig, Speech2TextModel
# Initializing a Speech2Text s2t_transformer_s style configuration
configuration = Speech2TextConfig()
# Initializing a model (with random weights) from the s2t_transformer_s style configuration
model = Speech2TextModel(configuration)
# Accessing the model configuration
configuration = model.config
Speech2TextTokenizer
class transformers.Speech2TextTokenizer
(
vocab_file
spm_file
bos_token = '<s>'
eos_token = '</s>'
pad_token = '<pad>'
unk_token = '<unk>'
do_upper_case = False
do_lower_case = False
tgt_lang = None
lang_codes = None
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
spm_file (str) —
Path to the SentencePiece model file
bos_token (str, optional, defaults to "<s>") —
The beginning of sentence token.
eos_token (str, optional, defaults to "</s>") —
The end of sentence token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
do_upper_case (bool, optional, defaults to False) —
Whether or not to uppercase the output when decoding.
do_lower_case (bool, optional, defaults to False) —
Whether or not to lowercase the input when tokenizing.
tgt_lang (str, optional) —
A string representing the target language.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
**kwargs —
Additional keyword arguments passed along to PreTrainedTokenizer
Construct a Speech2Text tokenizer.
This tokenizer inherits from PreTrainedTokenizer which contains some of the main methods. Users should refer to
the superclass for more information regarding such methods.
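A minimal usage sketch, assuming the facebook/s2t-small-librispeech-asr checkpoint used elsewhere on this page (which ships the required SentencePiece files; sentencepiece must be installed):
from transformers import Speech2TextTokenizer
tokenizer = Speech2TextTokenizer.from_pretrained("facebook/s2t-small-librispeech-asr")
# encode a transcript to token ids, then decode it back
ids = tokenizer("mister quilter is the apostle of the middle classes").input_ids
text = tokenizer.decode(ids, skip_special_tokens=True)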
build_inputs_with_special_tokens
(
token_ids_0
token_ids_1 = None
)
Build model inputs from a sequence by appending eos_token_id.
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) — The first tokenized sequence.
token_ids_1 (List[int], optional) — The second tokenized sequence.
Returns
List[int]
The token type ids.
Create the token type IDs corresponding to the sequences passed. What are token type
IDs?
Should be overridden in a subclass if the model has a special way of building those.
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
Speech2TextFeatureExtractor
class transformers.Speech2TextFeatureExtractor
(
feature_size = 80
sampling_rate = 16000
num_mel_bins = 80
padding_value = 0.0
do_ceptral_normalize = True
normalize_means = True
normalize_vars = True
**kwargs
)
Parameters
feature_size (int, defaults to 80) —
The feature dimension of the extracted features.
sampling_rate (int, defaults to 16000) —
The sampling rate at which the audio files should be digitalized expressed in hertz (Hz).
num_mel_bins (int, defaults to 80) —
Number of Mel-frequency bins.
padding_value (float, defaults to 0.0) —
The value that is used to fill the padding vectors.
do_ceptral_normalize (bool, optional, defaults to True) —
Whether or not to apply utterance-level cepstral mean and variance normalization to extracted features.
normalize_means (bool, optional, defaults to True) —
Whether or not to zero-mean normalize the extracted features.
normalize_vars (bool, optional, defaults to True) —
Whether or not to unit-variance normalize the extracted features.
Constructs a Speech2Text feature extractor.
This feature extractor inherits from SequenceFeatureExtractor, which contains most of the main methods. Users
should refer to this superclass for more information regarding those methods.
This class extracts mel-filter bank features from raw speech using TorchAudio and applies utterance-level cepstral
mean and variance normalization to the extracted features.
__call__
(
raw_speech: typing.Union[numpy.ndarray, typing.List[float], typing.List[numpy.ndarray], typing.List[typing.List[float]]]
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
max_length: typing.Optional[int] = None
truncation: bool = False
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
sampling_rate: typing.Optional[int] = None
return_attention_mask: typing.Optional[bool] = None
**kwargs
)
Parameters
raw_speech (np.ndarray, List[float], List[np.ndarray], List[List[float]]) —
The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float
values, a list of numpy arrays or a list of list of float values. Must be mono channel audio, not
stereo, i.e. single float per timestep.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Select a strategy to pad the returned sequences (according to the model’s padding side and padding
index) among:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
max_length (int, optional) —
Maximum length of the returned list and optionally padding length (see above).
truncation (bool) —
Activates truncation to cut input sequences longer than max_length to max_length.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
return_attention_mask (bool, optional) —
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific feature_extractor’s default.
What are attention masks?
For Speech2TextTransformer models, attention_mask should always be passed for batched inference, to
avoid subtle bugs.
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
sampling_rate (int, optional) —
The sampling rate at which the raw_speech input was sampled. It is strongly recommended to pass
sampling_rate at the forward call to prevent silent errors.
padding_value (float, defaults to 0.0) —
The value that is used to fill the padding values / vectors.
Main method to featurize and prepare for the model one or several sequence(s).
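For illustration, a minimal sketch of featurizing a raw waveform. The one-second random waveform below is a stand-in for real audio (such as the librispeech_asr_demo samples used earlier on this page), and torchaudio must be installed as noted above:
import numpy as np
from transformers import Speech2TextFeatureExtractor
feature_extractor = Speech2TextFeatureExtractor.from_pretrained("facebook/s2t-small-librispeech-asr")
waveform = np.random.randn(16000).astype(np.float32)  # 1 second of mono audio at 16 kHz
features = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
features.input_features.shape  # (1, num_frames, 80) log-mel filter-bank features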
Speech2TextProcessor
class transformers.Speech2TextProcessor
(
feature_extractor
tokenizer
)
Parameters
feature_extractor (Speech2TextFeatureExtractor) —
An instance of Speech2TextFeatureExtractor. The feature extractor is a required input.
tokenizer (Speech2TextTokenizer) —
An instance of Speech2TextTokenizer. The tokenizer is a required input.
Constructs a Speech2Text processor which wraps a Speech2Text feature extractor and a Speech2Text tokenizer into a
single processor.
Speech2TextProcessor offers all the functionalities of Speech2TextFeatureExtractor and
Speech2TextTokenizer. See the call() and decode() methods for more
information.
__call__
(
*args
**kwargs
)
When used in normal mode, this method forwards all its arguments to Speech2TextFeatureExtractor’s
call() and returns its output. When used inside the
as_target_processor() context manager, this method forwards all its arguments to Speech2TextTokenizer’s
call(). Please refer to the docstrings of the above two methods for more
information.
from_pretrained
(
pretrained_model_name_or_path: typing.Union[str, os.PathLike]
cache_dir: typing.Union[str, os.PathLike, NoneType] = None
force_download: bool = False
local_files_only: bool = False
token: typing.Union[bool, str, NoneType] = None
revision: str = 'main'
**kwargs
)
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
This can be either:
a string, the model id of a pretrained feature_extractor hosted inside a model repo on
huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or
namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
a path to a directory containing a feature extractor file saved using the
save_pretrained() method, e.g., ./my_model_directory/.
a path or url to a saved feature extractor JSON file, e.g.,
./my_model_directory/preprocessor_config.json.
**kwargs —
Additional keyword arguments passed along to both
from_pretrained() and
~tokenization_utils_base.PreTrainedTokenizer.from_pretrained.
Instantiate a processor associated with a pretrained model.
This class method simply calls the feature extractor’s
from_pretrained() and the tokenizer’s
~tokenization_utils_base.PreTrainedTokenizer.from_pretrained methods. Please refer to the docstrings of the
methods above for more information.
save_pretrained
(
save_directory
push_to_hub: bool = False
**kwargs
)
Parameters
save_directory (str or os.PathLike) —
Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will
be created if it does not exist).
push_to_hub (bool, optional, defaults to False) —
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with repo_id (will default to the name of save_directory in your
namespace).
kwargs (Dict[str, Any], optional) —
Additional keyword arguments passed along to the push_to_hub() method.
Saves the attributes of this processor (feature extractor, tokenizer…) in the specified directory so that it
can be reloaded using the from_pretrained() method.
This class method simply calls the feature extractor’s save_pretrained() and the
tokenizer’s save_pretrained() methods. Please refer to the docstrings of the
methods above for more information.
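As a minimal sketch of the save/reload round trip (the directory name below is illustrative):
from transformers import Speech2TextProcessor
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
processor.save_pretrained("./s2t-processor")  # writes the feature extractor config and the tokenizer files
reloaded = Speech2TextProcessor.from_pretrained("./s2t-processor")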
batch_decode
(
*args
**kwargs
)
This method forwards all its arguments to Speech2TextTokenizer’s batch_decode(). Please
refer to the docstring of this method for more information.
decode
(
*args
**kwargs
)
This method forwards all its arguments to Speech2TextTokenizer’s decode(). Please refer
to the docstring of this method for more information.
Speech2TextModel
class transformers.Speech2TextModel
(
config: Speech2TextConfig
)
Parameters
config (Speech2TextConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare Speech2Text Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_features: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_features (torch.FloatTensor of shape (batch_size, sequence_length, feature_size)) —
Float values of fbank features extracted from the raw speech waveform. Raw speech waveform can be obtained
by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g.
via the soundfile library (pip install soundfile). To prepare the array into input_features, the
AutoFeatureExtractor should be used for extracting the fbank features, padding and conversion into a
tensor of type torch.FloatTensor. See call()
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using Speech2TextTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Speech2Text uses the eos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read
modeling_speech_to_text._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in the
paper for more information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Speech2TextConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The Speech2TextModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import Speech2TextModel, AutoFeatureExtractor
from datasets import load_dataset
model = Speech2TextModel.from_pretrained("facebook/s2t-small-librispeech-asr")
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = feature_extractor(
... ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt"
... )
input_features = inputs.input_features
decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id
last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state
list(last_hidden_state.shape)
[1, 2, 256]
Speech2TextForConditionalGeneration
class transformers.Speech2TextForConditionalGeneration
(
config: Speech2TextConfig
)
Parameters
config (Speech2TextConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The Speech2Text Model with a language modeling head. Can be used for automatic speech recognition and speech
translation.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_features: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_features (torch.FloatTensor of shape (batch_size, sequence_length, feature_size)) —
Float values of fbank features extracted from the raw speech waveform. Raw speech waveform can be obtained
by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g.
via the soundfile library (pip install soundfile). To prepare the array into input_features, the
AutoFeatureExtractor should be used for extracting the fbank features, padding and conversion into a
tensor of type torch.FloatTensor. See call()
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using Speech2TextTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Speech2Text uses the eos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read
modeling_speech_to_text._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in the
paper for more information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the language modeling loss. Indices should either be in [0, ..., config.vocab_size]
or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is
only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Speech2TextConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The Speech2TextForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(
... ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt"
... )
input_features = inputs.input_features
generated_ids = model.generate(inputs=input_features)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
transcription
'mister quilter is the apostle of the middle classes and we are glad to welcome his gospel'
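The labels argument documented above can be used to compute the training loss directly. A minimal sketch that continues the example; encoding the target text with processor.tokenizer is an assumption about how the transcription is turned into label ids here:
labels = processor.tokenizer(ds[0]["text"], return_tensors="pt").input_ids
loss = model(input_features=input_features, labels=labels).loss  # language modeling loss over the transcription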
TFSpeech2TextModel
class transformers.TFSpeech2TextModel
(
*args
**kwargs
)
Parameters
config (Speech2TextConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare Speech2Text Model outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_features: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
decoder_input_ids: np.ndarray | tf.Tensor | None = None
decoder_attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
decoder_head_mask: np.ndarray | tf.Tensor | None = None
cross_attn_head_mask: np.ndarray | tf.Tensor | None = None
encoder_outputs: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
decoder_inputs_embeds: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
**kwargs
)
→
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
Parameters
input_features (tf.Tensor of shape (batch_size, sequence_length, feature_size)) —
Float values of fbank features extracted from the raw speech waveform. Raw speech waveform can be obtained
by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g.
via the soundfile library (pip install soundfile). To prepare the array into input_features, the
AutoFeatureExtractor should be used for extracting the fbank features, padding and conversion into a
tensor of floats. See __call__() for details.
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using Speech2TextTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
SpeechToText uses the eos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
A mask that ignores pad tokens in decoder_input_ids will be created by default; a causal mask is also applied. It is not recommended to set this for most use cases.
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
decoder_inputs_embeds (tf.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (Speech2TextConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFSpeech2TextModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import Speech2TextProcessor, TFSpeech2TextModel
from datasets import load_dataset
import tensorflow as tf
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
model = TFSpeech2TextModel.from_pretrained("facebook/s2t-small-librispeech-asr", from_pt=True)  # convert the PyTorch weights
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(
...     ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="tf"
... )
decoder_input_ids = tf.constant([[model.config.decoder_start_token_id]])
outputs = model(input_features=inputs.input_features, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
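The same forward pass can also be written with the dictionary format described above; this brief sketch reuses model, inputs and decoder_input_ids from the example:
outputs = model({"input_features": inputs.input_features, "decoder_input_ids": decoder_input_ids})
last_hidden_states = outputs.last_hidden_state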
TFSpeech2TextForConditionalGeneration
class transformers.TFSpeech2TextForConditionalGeneration
(
*args
**kwargs
)
Parameters
config (Speech2TextConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The Speech2Text Model with a language modeling head. Can be used for summarization.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_features: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
decoder_input_ids: np.ndarray | tf.Tensor | None = None
decoder_attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
decoder_head_mask: np.ndarray | tf.Tensor | None = None
cross_attn_head_mask: np.ndarray | tf.Tensor | None = None
encoder_outputs: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
decoder_inputs_embeds: np.ndarray | tf.Tensor | None = None
labels: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
**kwargs
)
→
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
Parameters
input_features (tf.Tensor of shape (batch_size, sequence_length, feature_size)) —
Float values of fbank features extracted from the raw speech waveform. Raw speech waveform can be obtained
by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g.
via the soundfile library (pip install soundfile). To prepare the array into input_features, the
AutoFeatureExtractor should be used for extracting the fbank features, padding and conversion into a
tensor of floats. See __call__() for details.
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using Speech2TextTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
SpeechToText uses the eos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
A mask that ignores pad tokens in decoder_input_ids will be created by default; a causal mask is also applied. It is not recommended to set this for most use cases.
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
decoder_inputs_embeds (tf.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (Speech2TextConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFSpeech2TextForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import tensorflow as tf
from transformers import Speech2TextProcessor, TFSpeech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = TFSpeech2TextForConditionalGeneration.from_pretrained(
... "facebook/s2t-small-librispeech-asr", from_pt=True
... )
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
def map_to_array(batch):
... speech, _ = sf.read(batch["file"])
... batch["speech"] = speech
... return batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
ds.set_format(type="tf")
input_features = processor(
... ds["speech"][0], sampling_rate=16000, return_tensors="tf"
... ).input_features # Batch size 1
generated_ids = model.generate(input_features)
transcription = processor.batch_decode(generated_ids)
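As in the PyTorch example earlier, special tokens can be stripped while decoding; a small follow-up that reuses generated_ids from above:
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]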
SEW
Overview
SEW (Squeezed and Efficient Wav2Vec) was proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training
for Speech Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q.
Weinberger, Yoav Artzi.
The abstract from the paper is the following:
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition
(ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance
and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a
pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a
variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x
inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference
time, SEW reduces word error rate by 25-50% across different model sizes.
Tips:
SEW is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
SEWForCTC is fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded using
Wav2Vec2CTCTokenizer, as sketched below.
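A minimal decoding sketch for this tip follows; the logits tensor is a random placeholder standing in for real SEWForCTC output, and the checkpoint is the one used in the examples further down:
import torch
from transformers import Wav2Vec2CTCTokenizer
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("asapp/sew-tiny-100k-ft-ls100h")
logits = torch.randn(1, 100, tokenizer.vocab_size)  # placeholder for SEWForCTC logits
predicted_ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
transcription = tokenizer.batch_decode(predicted_ids)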
This model was contributed by anton-l.
Documentation resources
Audio classification task guide
Automatic speech recognition task guide
SEWConfig
class transformers.SEWConfig
(
vocab_size = 32
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
squeeze_factor = 2
hidden_act = 'gelu'
hidden_dropout = 0.1
activation_dropout = 0.1
attention_dropout = 0.1
feat_proj_dropout = 0.0
final_dropout = 0.1
layerdrop = 0.1
initializer_range = 0.02
layer_norm_eps = 1e-05
feat_extract_norm = 'group'
feat_extract_activation = 'gelu'
conv_dim = (64, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512)
conv_stride = (5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1)
conv_kernel = (10, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1)
conv_bias = False
num_conv_pos_embeddings = 128
num_conv_pos_embedding_groups = 16
apply_spec_augment = True
mask_time_prob = 0.05
mask_time_length = 10
mask_time_min_masks = 2
mask_feature_prob = 0.0
mask_feature_length = 10
mask_feature_min_masks = 0
ctc_loss_reduction = 'mean'
ctc_zero_infinity = False
use_weighted_layer_sum = False
classifier_proj_size = 256
pad_token_id = 0
bos_token_id = 1
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 32) —
Vocabulary size of the SEW model. Defines the number of different tokens that can be represented by the
input_ids passed when calling SEW.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
squeeze_factor (int, optional, defaults to 2) —
Sequence length downsampling factor after the encoder and upsampling factor after the transformer.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
final_dropout (float, optional, defaults to 0.1) —
The dropout probability for the final projection layer of SEWForCTC.
layerdrop (float, optional, defaults to 0.1) —
The LayerDrop probability. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more
details.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
feat_extract_norm (str, optional, defaults to "group") —
The norm to be applied to 1D convolutional layers in feature encoder. One of "group" for group
normalization of only the first 1D convolutional layer or "layer" for layer normalization of all 1D
convolutional layers.
feat_proj_dropout (float, optional, defaults to 0.0) —
The dropout probability for output of the feature encoder.
feat_extract_activation (str, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
conv_dim (Tuple[int] or List[int], optional, defaults to (64, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512)) —
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
feature encoder. The length of conv_dim defines the number of 1D convolutional layers.
conv_stride (Tuple[int] or List[int], optional, defaults to (5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1)) —
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.
conv_kernel (Tuple[int] or List[int], optional, defaults to (10, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1)) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
length of conv_kernel defines the number of convolutional layers and has to match the length of
conv_dim.
conv_bias (bool, optional, defaults to False) —
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (int, optional, defaults to 128) —
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.
num_conv_pos_embedding_groups (int, optional, defaults to 16) —
Number of groups of 1D convolutional positional embeddings layer.
apply_spec_augment (bool, optional, defaults to True) —
Whether to apply SpecAugment data augmentation to the outputs of the feature encoder. For reference see
SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition.
mask_time_prob (float, optional, defaults to 0.05) —
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
procedure generates mask_time_prob * len(time_axis) / mask_time_length independent masks over the axis. If
reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
masked, mask_time_prob should be prob_vector_start * mask_time_length. Note that overlap may decrease the
actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_time_length (int, optional, defaults to 10) —
Length of vector span along the time axis.
mask_time_min_masks (int, optional, defaults to 2) —
The minimum number of masks of length mask_time_length generated along the time axis, each time step,
irrespectively of mask_time_prob. Only relevant if mask_time_prob * len(time_axis) / mask_time_length <
mask_time_min_masks.
mask_feature_prob (float, optional, defaults to 0.0) —
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
masking procedure generates mask_feature_prob * len(feature_axis) / mask_feature_length independent masks over
the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector
span to be masked, mask_feature_prob should be prob_vector_start * mask_feature_length. Note that overlap
may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_feature_length (int, optional, defaults to 10) —
Length of vector span along the feature axis.
mask_feature_min_masks (int, optional, defaults to 0) —
The minimum number of masks of length mask_feature_length generated along the feature axis, each time
step, irrespectively of mask_feature_prob. Only relevant if
mask_feature_prob * len(feature_axis) / mask_feature_length < mask_feature_min_masks.
ctc_loss_reduction (str, optional, defaults to "mean") —
Specifies the reduction to apply to the output of torch.nn.CTCLoss. Only relevant when training an
instance of SEWForCTC.
ctc_zero_infinity (bool, optional, defaults to False) —
Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of SEWForCTC.
use_weighted_layer_sum (bool, optional, defaults to False) —
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of Wav2Vec2ForSequenceClassification.
classifier_proj_size (int, optional, defaults to 256) —
Dimensionality of the projection before token mean-pooling for classification.
This is the configuration class to store the configuration of a SEWModel. It is used to instantiate a SEW model
according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the SEW
asapp/sew-tiny-100k architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import SEWConfig, SEWModel
# Initializing a SEW asapp/sew-tiny-100k style configuration
configuration = SEWConfig()
# Initializing a model (with random weights) from the asapp/sew-tiny-100k style configuration
model = SEWModel(configuration)
# Accessing the model configuration
configuration = model.config
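Individual defaults can be overridden when instantiating the configuration; a brief sketch with purely illustrative values:
custom_config = SEWConfig(squeeze_factor=4, num_hidden_layers=6, hidden_size=512)
custom_model = SEWModel(custom_config)
custom_model.config.squeeze_factor
4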
SEWModel
class transformers.SEWModel
(
config: SEWConfig
)
Parameters
config (SEWConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare SEW Model transformer outputting raw hidden-states without any specific head on top.
SEW was proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech
Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger,
Yoav Artzi.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
mask_time_indices: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SEWConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The SEWModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, SEWModel
import torch
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("asapp/sew-tiny-100k-ft-ls100h")
model = SEWModel.from_pretrained("asapp/sew-tiny-100k-ft-ls100h")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 292, 512]
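A follow-on sketch that batches two utterances of different lengths with padding, reusing the processor, model and dataset from the example above; whether an attention_mask is returned depends on the checkpoint's feature extractor configuration:
batch = processor(
...     [dataset[0]["audio"]["array"], dataset[1]["audio"]["array"]],
...     sampling_rate=sampling_rate,
...     padding=True,
...     return_tensors="pt",
... )
with torch.no_grad():
...     batched_outputs = model(**batch)
list(batched_outputs.last_hidden_state.shape)  # [2, num_frames, 512] for this checkpoint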
SEWForCTC
class transformers.SEWForCTC
(
config
target_lang: typing.Optional[str] = None
)
Parameters
config (SEWConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
SEW Model with a language modeling head on top for Connectionist Temporal Classification (CTC).
SEW was proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech
Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger,
Yoav Artzi.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, target_length), optional) —
Labels for connectionist temporal classification. Note that target_length has to be smaller or equal to
the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1].
All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SEWConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The SEWForCTC forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, SEWForCTC
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("asapp/sew-tiny-100k-ft-ls100h")
model = SEWForCTC.from_pretrained("asapp/sew-tiny-100k-ft-ls100h")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe speech
transcription = processor.batch_decode(predicted_ids)
transcription[0]
'MISTER QUILTER IS THE APPOSTILE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPOLLE'
inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids
# compute loss
loss = model(**inputs).loss
round(loss.item(), 2)
0.42
SEWForSequenceClassification
class transformers.SEWForSequenceClassification
(
config
)
Parameters
config (SEWConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
SEW Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB
Keyword Spotting.
SEW was proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech
Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger,
Yoav Artzi.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SEWConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The SEWForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, SEWForSequenceClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("anton-l/sew-mid-100k-ft-keyword-spotting")
model = SEWForSequenceClassification.from_pretrained("anton-l/sew-mid-100k-ft-keyword-spotting")
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.argmax(logits, dim=-1).item()
predicted_label = model.config.id2label[predicted_class_ids]
predicted_label
'_unknown_'
# compute loss - target_label is e.g. "down"
target_label = model.config.id2label[0]
inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
loss = model(**inputs).loss
round(loss.item(), 2)
9.52
FLAN-UL2
Overview
Flan-UL2 is an encoder-decoder model based on the T5 architecture. It uses the same configuration as the UL2 model released earlier last year.
It was fine-tuned using the “Flan” prompt tuning and dataset collection. Similar to Flan-T5, one can directly use FLAN-UL2 weights without fine-tuning the model.
According to the original blog, here are the notable improvements:
The original UL2 model was only trained with a receptive field of 512, which made it non-ideal for N-shot prompting where N is large.
The Flan-UL2 checkpoint uses a receptive field of 2048, which makes it more usable for few-shot in-context learning.
The original UL2 model also had mode switch tokens that were rather mandatory to get good performance. However, they were a little cumbersome, as they often required some changes during inference or finetuning. In this update/change, we continue training UL2 20B for an additional 100k steps (with small batch) to forget “mode tokens” before applying Flan instruction tuning. This Flan-UL2 checkpoint does not require mode tokens anymore.
Google has released the following variants:
One can refer to T5’s documentation page for all tips, code examples and notebooks, as well as the FLAN-T5 model card for more details regarding training and evaluation of the model.
The original checkpoints can be found here.
Running on low resource devices
The model is pretty heavy (~40GB in half precision), so if you just want to run it, make sure you load the model in 8-bit and use device_map="auto" to avoid out-of-memory (OOM) issues!
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-ul2", load_in_8bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['In a large skillet, brown the ground beef and onion over medium heat. Add the garlic']
Inference
The inference protocol is exactly the same as for any T5 model; please have a look at T5’s documentation page for more details.
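Alternatively, if you have enough GPU memory, a minimal sketch of running the model in half precision instead of 8-bit (reusing the prompt from the example above) could look like this:
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# load the checkpoint in float16 and let Accelerate spread it over the available devices
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-ul2", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")

inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))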
LED
Overview
The LED model was proposed in Longformer: The Long-Document Transformer by Iz
Beltagy, Matthew E. Peters, Arman Cohan.
The abstract from the paper is the following:
Transformer-based models are unable to process long sequences due to their self-attention operation, which scales
quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention
mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or
longer. Longformer’s attention mechanism is a drop-in replacement for the standard self-attention and combines a local
windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we
evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In
contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our
pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on
WikiHop and TriviaQA. We finally introduce the Longformer-Encoder-Decoder (LED), a Longformer variant for supporting
long document generative sequence-to-sequence tasks, and demonstrate its effectiveness on the arXiv summarization
dataset.
Tips:
LEDForConditionalGeneration is an extension of
BartForConditionalGeneration exchanging the traditional self-attention layer with
Longformer’s chunked self-attention layer. LEDTokenizer is an alias of
BartTokenizer.
LED works very well on long-range sequence-to-sequence tasks where the input_ids largely exceed a length of
1024 tokens.
LED pads the input_ids to be a multiple of config.attention_window if required. Therefore, a small speed-up is
gained when LEDTokenizer is used with the pad_to_multiple_of argument.
LED makes use of global attention by means of the global_attention_mask (see
LongformerModel). For summarization, it is advised to put global attention only on the first
<s> token. For question answering, it is advised to put global attention on all tokens of the question.
To fine-tune LED on the full 16384-token input length, gradient checkpointing can be enabled in case training leads to out-of-memory (OOM)
errors. This can be done by executing model.gradient_checkpointing_enable(). Moreover, the use_cache=False
flag can be used to disable the caching mechanism to save memory (see the sketch after these tips).
A notebook showing how to evaluate LED can be accessed here.
A notebook showing how to fine-tune LED can be accessed here.
LED is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than
the left.
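Putting these tips together, here is a minimal sketch of long-document summarization with global attention on the first token (the article text is a placeholder, the generation settings are arbitrary, and allenai/led-base-16384 is not fine-tuned for summarization, so the output is only illustrative):
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

# a long document; LED handles inputs far beyond 1024 tokens
article = "..."  # placeholder text
inputs = tokenizer(article, max_length=16384, truncation=True, return_tensors="pt")

# global attention only on the first <s> token, as advised for summarization
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

# for fine-tuning on long inputs, memory can be saved with:
# model.gradient_checkpointing_enable()
# model.config.use_cache = False

summary_ids = model.generate(
    **inputs,
    global_attention_mask=global_attention_mask,
    max_length=256,
    num_beams=2,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))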
This model was contributed by patrickvonplaten.
Documentation resources
Text classification task guide
Question answering task guide
Translation task guide
Summarization task guide
LEDConfig
class transformers.LEDConfig
<
source
>
(
vocab_size = 50265
max_encoder_position_embeddings = 16384
max_decoder_position_embeddings = 1024
encoder_layers = 12
encoder_ffn_dim = 4096
encoder_attention_heads = 16
decoder_layers = 12
decoder_ffn_dim = 4096
decoder_attention_heads = 16
encoder_layerdrop = 0.0
decoder_layerdrop = 0.0
use_cache = True
is_encoder_decoder = True
activation_function = 'gelu'
d_model = 1024
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
decoder_start_token_id = 2
classifier_dropout = 0.0
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
attention_window: typing.Union[typing.List[int], int] = 512
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50265) —
Vocabulary size of the LED model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling LEDModel or TFLEDModel.
d_model (int, optional, defaults to 1024) —
Dimensionality of the layers and the pooler layer.
encoder_layers (int, optional, defaults to 12) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 12) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in decoder.
encoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in encoder.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
classifier_dropout (float, optional, defaults to 0.0) —
The dropout ratio for classifier.
max_encoder_position_embeddings (int, optional, defaults to 16384) —
The maximum sequence length that the encoder might ever be used with.
max_decoder_position_embeddings (int, optional, defaults to 1024) —
The maximum sequence length that the decoder might ever be used with.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a LEDModel. It is used to instantiate an LED
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the LED
allenai/led-base-16384 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import LEDModel, LEDConfig
# Initializing a LED allenai/led-base-16384 style configuration
configuration = LEDConfig()
# Initializing a model from the allenai/led-base-16384 style configuration
model = LEDModel(configuration)
# Accessing the model configuration
configuration = model.config
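As a further sketch (the per-layer values below are arbitrary), attention_window can also be passed as a list with one local window size per encoder layer:
from transformers import LEDConfig, LEDModel

# one attention window per encoder layer (12 layers in the default configuration)
configuration = LEDConfig(attention_window=[256] * 12)
model = LEDModel(configuration)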
LEDTokenizer
class transformers.LEDTokenizer
<
source
>
(
vocab_file
merges_file
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows treating the leading word just like any
other word (the BART tokenizer detects the beginning of words by the preceding space).
Constructs a LED tokenizer, which is similar to the RoBERTa tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently whether it is at the beginning of the sentence (without space) or not:
from transformers import LEDTokenizer
tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
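As a small sketch of the workaround mentioned above, instantiating the tokenizer with add_prefix_space=True makes "Hello world" encode like " Hello world" in the example above:
from transformers import LEDTokenizer

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384", add_prefix_space=True)
tokenizer("Hello world")["input_ids"]
# -> [0, 20920, 232, 2], i.e. the same IDs as " Hello world" without the option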
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A LED sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
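For illustration, a minimal sketch (the inner token IDs are arbitrary; in the allenai/led-base-16384 vocabulary <s> is ID 0 and </s> is ID 2, as in the tokenizer examples above):
from transformers import LEDTokenizer

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")

# single sequence: <s> X </s>
tokenizer.build_inputs_with_special_tokens([713, 16, 10])
# -> [0, 713, 16, 10, 2]

# pair of sequences: <s> A </s></s> B </s>
tokenizer.build_inputs_with_special_tokens([713, 16, 10], [100, 657])
# -> [0, 713, 16, 10, 2, 2, 100, 657, 2]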
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. LED does not
make use of token type ids, therefore a list of zeros is returned.
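For illustration, a minimal sketch (the token IDs are arbitrary):
from transformers import LEDTokenizer

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")

# every position, including the added special tokens, gets token type ID 0
tokenizer.create_token_type_ids_from_sequences([713, 16, 10], [100, 657])
# -> [0, 0, 0, 0, 0, 0, 0, 0, 0]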
save_vocabulary
<
source
>
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
LEDTokenizerFast
class transformers.LEDTokenizerFast
<
source
>
(
vocab_file = None
merges_file = None
tokenizer_file = None
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = False
trim_offsets = True
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows treating the leading word just like any
other word (the LED tokenizer detects the beginning of words by the preceding space).
trim_offsets (bool, optional, defaults to True) —
Whether the post processing step should trim offsets to avoid including whitespaces.
Construct a “fast” LED tokenizer (backed by HuggingFace’s tokenizers library), derived from the GPT-2 tokenizer,
using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently whether it is at the beginning of the sentence (without space) or not:
from transformers import LEDTokenizerFast
tokenizer = LEDTokenizerFast.from_pretrained("allenai/led-base-16384")
tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. LED does not
make use of token type ids, therefore a list of zeros is returned.
LED specific outputs
class transformers.models.led.modeling_led.LEDEncoderBaseModelOutput
<
source
>
(
last_hidden_state: FloatTensor
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1 values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is located at index x + attention_window / 2, and the attention_window / 2 preceding (succeeding) values are the attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the first x attention weights. If a token has global attention, the attention weights to all other tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for LEDEncoder’s outputs, with potential hidden states, local and global attentions.
class transformers.models.led.modeling_led.LEDSeq2SeqModelOutput
<
source
>
(
last_hidden_state: FloatTensor = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) —
List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
encoder_global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for the model’s encoder-decoder outputs that also contains pre-computed hidden states that can speed up sequential
decoding.
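For illustration, here is a minimal sketch (the input sentence is arbitrary) of how the encoder attention fields described above can be inspected from a forward pass of LEDModel:
import torch
from transformers import AutoTokenizer, LEDModel

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDModel.from_pretrained("allenai/led-base-16384")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # global attention on the first <s> token

outputs = model(**inputs, global_attention_mask=global_attention_mask, output_attentions=True)

# local encoder attentions: (batch_size, num_heads, padded_sequence_length, x + attention_window + 1)
print(outputs.encoder_attentions[0].shape)
# global encoder attentions: (batch_size, num_heads, padded_sequence_length, x), here x == 1
print(outputs.encoder_global_attentions[0].shape)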
class transformers.models.led.modeling_led.LEDSeq2SeqLMOutput
<
source
>
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) —
List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
encoder_global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for sequence-to-sequence language models outputs.
class transformers.models.led.modeling_led.LEDSeq2SeqSequenceClassifierOutput
<
source
>
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) —
Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) —
List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
encoder_global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for outputs of sequence-to-sequence sentence classification models.
class transformers.models.led.modeling_led.LEDSeq2SeqQuestionAnsweringModelOutput
<
source
>
(
loss: typing.Optional[torch.FloatTensor] = None
start_logits: FloatTensor = None
end_logits: FloatTensor = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_global_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Total span extraction loss; the sum of a cross-entropy loss for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) —
Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) —
Span-end scores (before SoftMax).
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) —
List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
encoder_global_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x),
where x is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for outputs of sequence-to-sequence question answering models.
class transformers.models.led.modeling_tf_led.TFLEDEncoderBaseModelOutput
<
source
>
(
last_hidden_state: tf.Tensor = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
global_attentions: Tuple[tf.Tensor] | None = None
)
Parameters
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x + attention_window + 1), where x is the number of tokens with global attention mask.
Local attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first x values) and to every token in the attention window (remaining attention_window + 1 values). Note that the first x values refer to tokens with fixed positions in the text, but the remaining attention_window + 1 values refer to tokens with relative positions: the attention weight of a token to itself is located at index x + attention_window / 2, and the attention_window / 2 preceding (succeeding) values are the attention weights to the attention_window / 2 preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the first x attention weights. If a token has global attention, the attention weights to all other tokens in attentions are set to 0; the values should be accessed from global_attentions.
global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for LEDEncoder’s outputs, with potential hidden states, local and global attentions.
class transformers.models.led.modeling_tf_led.TFLEDSeq2SeqModelOutput
<
source
>
(
last_hidden_state: tf.Tensor = None
past_key_values: List[tf.Tensor] | None = None
decoder_hidden_states: Tuple[tf.Tensor] | None = None
decoder_attentions: Tuple[tf.Tensor] | None = None
cross_attentions: Tuple[tf.Tensor] | None = None
encoder_last_hidden_state: tf.Tensor | None = None
encoder_hidden_states: Tuple[tf.Tensor] | None = None
encoder_attentions: Tuple[tf.Tensor] | None = None
encoder_global_attentions: Tuple[tf.Tensor] | None = None
)
Parameters
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) —
List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
encoder_global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for the model’s encoder-decoder outputs that also contains pre-computed hidden states that can speed up sequential
decoding.
class transformers.models.led.modeling_tf_led.TFLEDSeq2SeqLMOutput
<
source
>
(
loss: tf.Tensor | None = None
logits: tf.Tensor = None
past_key_values: List[tf.Tensor] | None = None
decoder_hidden_states: Tuple[tf.Tensor] | None = None
decoder_attentions: Tuple[tf.Tensor] | None = None
cross_attentions: Tuple[tf.Tensor] | None = None
encoder_last_hidden_state: tf.Tensor | None = None
encoder_hidden_states: Tuple[tf.Tensor] | None = None
encoder_attentions: Tuple[tf.Tensor] | None = None
encoder_global_attentions: Tuple[tf.Tensor] | None = None
)
Parameters
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) —
Language modeling loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) —
List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
encoder_global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
Base class for sequence-to-sequence language models outputs.
LEDModel
class transformers.LEDModel
<
source
>
(
config: LEDConfig
)
Parameters
config (LEDConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare LED Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. See the superclass documentation for the generic methods the library
implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for general usage and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
global_attention_mask: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using LEDTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
LED uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read modeling_led._prepare_decoder_inputs and modify
to your needs. See diagram 1 in the paper for more information on the
default strategy.
global_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to decide the attention given on each token, local attention or global attention for the encoder.
Tokens with global attention attend to all other tokens, and all other tokens attend to them. This is
important for task-specific finetuning because it makes the model more flexible at representing the task.
For example, for classification, the <s> token should be given global attention. For QA, all question
tokens should also have global attention. Please refer to the Longformer
paper for more details. Mask values selected in [0, 1]:
0 for local attention (a sliding window attention),
1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LEDConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The LEDModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing
steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, LEDModel
import torch
tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDModel.from_pretrained("allenai/led-base-16384")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
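The global_attention_mask described above is passed like any other argument. A minimal sketch (same checkpoint as the example above, giving only the first <s> token global attention, which is an illustrative choice rather than a requirement):
import torch
from transformers import AutoTokenizer, LEDModel

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDModel.from_pretrained("allenai/led-base-16384")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

# 0 = local (sliding window) attention, 1 = global attention
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # global attention on the first (<s>) token

outputs = model(**inputs, global_attention_mask=global_attention_mask)
last_hidden_states = outputs.last_hidden_state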
LEDForConditionalGeneration
class transformers.LEDForConditionalGeneration
< source >
( config: LEDConfig )
Parameters
config (LEDConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The LED Model with a language modeling head. Can be used for summarization.
This model inherits from PreTrainedModel. See the superclass documentation for the generic methods the library
implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for general usage and behavior.
forward
< source >
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
global_attention_mask: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using LEDTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
LED uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read modeling_led._prepare_decoder_inputs and modify
to your needs. See diagram 1 in the paper for more information on the
default strategy.
global_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to decide the attention given to each token, local attention or global attention, for the encoder.
Tokens with global attention attend to all other tokens, and all other tokens attend to them. This is
important for task-specific finetuning because it makes the model more flexible at representing the task.
For example, for classification, the <s> token should be given global attention. For QA, all question
tokens should also have global attention. Please refer to the Longformer
paper for more details. Mask values selected in [0, 1]:
0 for local attention (a sliding window attention),
1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LEDConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The LEDForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing
steps while the latter silently ignores them.
Conditional generation example:
from transformers import AutoTokenizer, LEDForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
TXT = "My friends are <mask> but they eat too many carbs."
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")
input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"]
prediction = model.generate(input_ids)[0]
print(tokenizer.decode(prediction, skip_special_tokens=True))
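The past_key_values and use_cache arguments documented above can also be driven by hand. The following is a minimal greedy-decoding sketch, not the implementation used by generate() (which handles caching automatically); it assumes the same base checkpoint and simply runs five decoding steps:
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

inputs = tokenizer("My friends are cool but they eat too many carbs.", return_tensors="pt")

# Run the encoder once and reuse its output at every decoding step.
encoder_outputs = model.get_encoder()(**inputs)

decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
past_key_values = None
generated = []
for _ in range(5):
    out = model(
        attention_mask=inputs["attention_mask"],
        encoder_outputs=encoder_outputs,
        decoder_input_ids=decoder_input_ids,
        past_key_values=past_key_values,
        use_cache=True,
    )
    next_token = out.logits[:, -1].argmax(dim=-1, keepdim=True)
    generated.append(next_token)
    past_key_values = out.past_key_values
    # With a cache, only the newly generated token is fed back in.
    decoder_input_ids = next_token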
Summarization example:
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration
model = LEDForConditionalGeneration.from_pretrained("allenai/led-large-16384-arxiv")
tokenizer = AutoTokenizer.from_pretrained("allenai/led-large-16384-arxiv")
ARTICLE_TO_SUMMARIZE = '''Transformers (Vaswani et al., 2017) have achieved state-of-the-art
results in a wide range of natural language tasks including generative language modeling
(Dai et al., 2019; Radford et al., 2019) and discriminative ... language understanding (Devlin et al., 2019).
This success is partly due to the self-attention component which enables the network to capture contextual
information from the entire sequence. While powerful, the memory and computational requirements of
self-attention grow quadratically with sequence length, making it infeasible (or very expensive) to
process long sequences. To address this limitation, we present Longformer, a modified Transformer
architecture with a self-attention operation that scales linearly with the sequence length, making it
versatile for processing long documents (Fig 1). This is an advantage for natural language tasks such as
long document classification, question answering (QA), and coreference resolution, where existing approaches
partition or shorten the long context into smaller sequences that fall within the typical 512 token limit
of BERT-style pretrained models. Such partitioning could potentially result in loss of important
cross-partition information, and to mitigate this problem, existing methods often rely on complex
architectures to address such interactions. On the other hand, our proposed Longformer is able to build
contextual representations of the entire context using multiple layers of attention, reducing the need for
task-specific architectures.'''
inputs = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors="pt")
# Global attention on the first token (cf. Beltagy et al. 2020)
global_attention_mask = torch.zeros_like(inputs)
global_attention_mask[:, 0] = 1
# Generate Summary
summary_ids = model.generate(inputs, global_attention_mask=global_attention_mask, num_beams=3, max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
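The labels argument described above turns the same forward pass into a training step: when decoder_input_ids are not passed they are derived from the labels, and a cross-entropy loss is returned. A minimal sketch with a toy document/summary pair (a real setup would add padding, a dataset and an optimizer):
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

inputs = tokenizer("A long report about long-document transformers.", return_tensors="pt")
labels = tokenizer("Longformer scales linearly.", return_tensors="pt")["input_ids"]

# Label tokens set to -100 are ignored by the loss; mask out padding if there is any.
labels[labels == tokenizer.pad_token_id] = -100

outputs = model(**inputs, labels=labels)
outputs.loss.backward()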
LEDForSequenceClassification
class transformers.LEDForSequenceClassification
< source >
( config: LEDConfig, **kwargs )
Parameters
config (LEDConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
LED Model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for GLUE
tasks.
This model inherits from PreTrainedModel. See the superclass documentation for the generic methods the library
implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for general usage and behavior.
forward
< source >
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
global_attention_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using LEDTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
LED uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read modeling_led._prepare_decoder_inputs and modify
to your needs. See diagram 1 in the paper for more information on the
default strategy.
global_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to decide the attention given to each token, local attention or global attention, for the encoder.
Tokens with global attention attend to all other tokens, and all other tokens attend to them. This is
important for task-specific finetuning because it makes the model more flexible at representing the task.
For example, for classification, the <s> token should be given global attention. For QA, all question
tokens should also have global attention. Please refer to the Longformer
paper for more details. Mask values selected in [0, 1]:
0 for local attention (a sliding window attention),
1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LEDConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The LEDForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing
steps while the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, LEDForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForSequenceClassification.from_pretrained("allenai/led-base-16384")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = LEDForSequenceClassification.from_pretrained("allenai/led-base-16384", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, LEDForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForSequenceClassification.from_pretrained("allenai/led-base-16384", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = LEDForSequenceClassification.from_pretrained(
    "allenai/led-base-16384", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
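As noted in the global_attention_mask description above, classification typically benefits from giving the <s> token global attention. A minimal sketch on top of the single-label example (the short input is only for illustration):
import torch
from transformers import AutoTokenizer, LEDForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForSequenceClassification.from_pretrained("allenai/led-base-16384")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # global attention on the <s> token only

with torch.no_grad():
    logits = model(**inputs, global_attention_mask=global_attention_mask).logits
predicted_class_id = logits.argmax().item()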
LEDForQuestionAnswering
class transformers.LEDForQuestionAnswering
< source >
( config )
Parameters
config (LEDConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
LED Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer
on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. See the superclass documentation for the generic methods the library
implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for general usage and behavior.
forward
< source >
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
global_attention_mask: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using LEDTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
LED uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read modeling_led._prepare_decoder_inputs and modify
to your needs. See diagram 1 in the paper for more information on the
default strategy.
global_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to decide the attention given to each token, local attention or global attention, for the encoder.
Tokens with global attention attend to all other tokens, and all other tokens attend to them. This is
important for task-specific finetuning because it makes the model more flexible at representing the task.
For example, for classification, the <s> token should be given global attention. For QA, all question
tokens should also have global attention. Please refer to the Longformer
paper for more details. Mask values selected in [0, 1]:
0 for local attention (a sliding window attention),
1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them).
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LEDConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The LEDForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing
steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, LEDForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForQuestionAnswering.from_pretrained("allenai/led-base-16384")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
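Per the global_attention_mask description above, all question tokens should usually receive global attention for QA. A minimal sketch that marks everything up to the first </s> separator as the question (this boundary detection is a simplification for illustration):
import torch
from transformers import AutoTokenizer, LEDForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForQuestionAnswering.from_pretrained("allenai/led-base-16384")

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
input_ids = inputs["input_ids"]

# Give global attention to the question tokens (everything up to the first separator).
sep_index = (input_ids[0] == tokenizer.sep_token_id).nonzero()[0].item()
global_attention_mask = torch.zeros_like(input_ids)
global_attention_mask[0, : sep_index + 1] = 1

with torch.no_grad():
    outputs = model(**inputs, global_attention_mask=global_attention_mask)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()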
TFLEDModel
class transformers.TFLEDModel
< source >
( *args, **kwargs )
Parameters
config (LEDConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare LED Model outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
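A short sketch of the three equivalent input formats listed above, assuming the same checkpoint as the example further down:
from transformers import AutoTokenizer, TFLEDModel

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = TFLEDModel.from_pretrained("allenai/led-base-16384")
enc = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. keyword arguments (PyTorch-style)
out1 = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])
# 2. a list with the tensors in the order given in the docstring
out2 = model([enc["input_ids"], enc["attention_mask"]])
# 3. a dictionary keyed by the input names
out3 = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})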
call
< source >
(
input_ids = None
attention_mask = None
decoder_input_ids = None
decoder_attention_mask = None
head_mask = None
decoder_head_mask = None
encoder_outputs: Optional[Union[Tuple, TFLEDEncoderBaseModelOutput]] = None
global_attention_mask = None
past_key_values = None
inputs_embeds = None
decoder_inputs_embeds = None
use_cache = None
output_attentions = None
output_hidden_states = None
return_dict = None
training = False
**kwargs
) → transformers.models.led.modeling_tf_led.TFLEDSeq2SeqModelOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using LEDTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
LED uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
A default mask that ignores pad tokens will be made. It is not recommended to set this for most use cases.
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
A sequence of hidden states at the output of the last layer of the encoder. Used in the cross-attention of
the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training, True during generation
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.led.modeling_tf_led.TFLEDSeq2SeqModelOutput or tuple(tf.Tensor)
A transformers.models.led.modeling_tf_led.TFLEDSeq2SeqModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LEDConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
encoder_global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
The TFLEDModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFLEDModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = TFLEDModel.from_pretrained("allenai/led-base-16384")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFLEDForConditionalGeneration
class transformers.TFLEDForConditionalGeneration
(
*args
**kwargs
)
Parameters
config (LEDConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The LED Model with a language modeling head. Can be used for summarization.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
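As a quick illustration of the three formats described above, here is a minimal sketch (the LED checkpoint and the example sentence are placeholders only):
from transformers import AutoTokenizer, TFLEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = TFLEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")
enc = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. keyword arguments, as with PyTorch models
out = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])
# 2. a single dict in the first positional argument
out = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})
# 3. a list/tuple, in the order given in the docstring
out = model([enc["input_ids"], enc["attention_mask"]])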
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
decoder_input_ids: np.ndarray | tf.Tensor | None = None
decoder_attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
decoder_head_mask: np.ndarray | tf.Tensor | None = None
encoder_outputs: Optional[TFLEDEncoderBaseModelOutput] = None
global_attention_mask: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
decoder_inputs_embeds: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: tf.Tensor | None = None
training: bool = False
)
→
transformers.models.led.modeling_tf_led.TFLEDSeq2SeqLMOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using LEDTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
LED uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
If not provided, a default mask that ignores pad tokens will be made. It is not recommended to set this for most use cases.
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) —
A sequence of hidden-states at the output of the last layer of the encoder, of shape
(batch_size, sequence_length, hidden_size). Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training, True during generation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.led.modeling_tf_led.TFLEDSeq2SeqLMOutput or tuple(tf.Tensor)
A transformers.models.led.modeling_tf_led.TFLEDSeq2SeqLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LEDConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
encoder_global_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, x), where x
is the number of tokens with global attention mask.
Global attentions weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
The TFLEDForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TFLEDForConditionalGeneration
import tensorflow as tf
mname = "allenai/led-base-16384"
tokenizer = AutoTokenizer.from_pretrained(mname)
TXT = "My friends are <mask> but they eat too many carbs."
model = TFLEDForConditionalGeneration.from_pretrained(mname)
batch = tokenizer([TXT], return_tensors="tf")
logits = model(**batch).logits
probs = tf.nn.softmax(logits[0])
# probs[5] is associated with the mask token
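Since this head is meant for summarization, here is a hedged sketch of generating a summary with generate(); the article string is a placeholder, global attention is placed on the first token as is usually recommended for LED, and a summarization-finetuned checkpoint would normally give better results than the base checkpoint used here:
import tensorflow as tf
from transformers import AutoTokenizer, TFLEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = TFLEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

article = "Replace this with the long document you want to summarize."
inputs = tokenizer(article, return_tensors="tf")

# global attention on the first (<s>) token only
global_attention_mask = tf.concat(
    [tf.ones_like(inputs["input_ids"][:, :1]), tf.zeros_like(inputs["input_ids"][:, 1:])],
    axis=-1,
)

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_length=64,
    num_beams=2,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))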
FLAN-T5
Overview
FLAN-T5 was released in the paper Scaling Instruction-Finetuned Language Models - it is an enhanced version of T5 that has been fine-tuned on a mixture of tasks.
One can directly use FLAN-T5 weights without finetuning the model:
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Pour a cup of bolognese into a large bowl and add the pasta']
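The same checkpoints can also be driven through the text2text-generation pipeline; a minimal sketch (the prompt is arbitrary):
from transformers import pipeline

pipe = pipeline("text2text-generation", model="google/flan-t5-small")
print(pipe("Translate to German: How old are you?"))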
FLAN-T5 includes the same improvements as T5 version 1.1 (see here for the full details of the model’s improvements).
Google has released the following variants:
google/flan-t5-small
google/flan-t5-base
google/flan-t5-large
google/flan-t5-xl
google/flan-t5-xxl.
One can refer to T5’s documentation page for all tips, code examples and notebooks, as well as the FLAN-T5 model card for more details regarding the training and evaluation of the model.
The original checkpoints can be found here.
MarianMT
Bugs: If you see something strange, file a GitHub Issue
and assign @patrickvonplaten.
Translations should be similar, but not identical to output in the test set linked to in each model card.
Tips:
A framework for translation models, using the same models as BART.
Implementation Notes
Each model is about 298 MB on disk; there are more than 1,000 models.
The list of supported language pairs can be found here.
Models were originally trained by Jörg Tiedemann using the Marian C++ library, which supports fast training and translation.
All models are transformer encoder-decoders with 6 layers in each component. Each model’s performance is documented
in a model card.
The 80 opus models that require BPE preprocessing are not supported.
The modeling code is the same as BartForConditionalGeneration with a few minor modifications:
static (sinusoid) positional embeddings (MarianConfig.static_position_embeddings=True)
no layernorm_embedding (MarianConfig.normalize_embedding=False)
the model starts generating with pad_token_id (which has 0 as a token_embedding) as the prefix (Bart uses
<s/>); see the configuration sketch after these notes.
Code to bulk convert models can be found in convert_marian_to_pytorch.py.
This model was contributed by sshleifer.
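A small sketch illustrating the generation-prefix note above; it assumes the en-de checkpoint follows the usual Marian convention of reusing the pad token as the decoder start token:
from transformers import MarianConfig

config = MarianConfig.from_pretrained("Helsinki-NLP/opus-mt-en-de")
# Marian starts decoding from the pad token rather than a dedicated <s> token
print(config.decoder_start_token_id, config.pad_token_id, config.eos_token_id)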
Naming
All model names use the following format: Helsinki-NLP/opus-mt-{src}-{tgt}
The language codes used to name models are inconsistent. Two-digit codes can usually be found here; three-digit codes require googling “language code {code}”.
Codes formatted like es_AR are usually code_{region}. That one is Spanish from Argentina.
The models were converted in two stages. The first 1000 models use ISO-639-2 codes to identify languages; the second
group uses a combination of ISO-639-5 and ISO-639-2 codes.
Examples
Since Marian models are smaller than many other translation models available in the library, they can be useful for
fine-tuning experiments and integration tests.
Fine-tune on GPU
Multilingual Models
All model names use the following format: Helsinki-NLP/opus-mt-{src}-{tgt}:
If a model can output multiple languages, you should specify a language code by prepending the desired output
language to the src_text.
You can see a model’s supported language codes in its model card, under target constituents, like in opus-mt-en-roa.
Note that if a model is only multilingual on the source side, like Helsinki-NLP/opus-mt-roa-en, no language
codes are required.
New multi-lingual models from the Tatoeba-Challenge repo
require 3 character language codes:
from transformers import MarianMTModel, MarianTokenizer
src_text = [
... ">>fra<< this is a sentence in english that we want to translate to french",
... ">>por<< This should go to portuguese",
... ">>esp<< And this to Spanish",
... ]
model_name = "Helsinki-NLP/opus-mt-en-roa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
print(tokenizer.supported_language_codes)
['>>zlm_Latn<<', '>>mfe<<', '>>hat<<', '>>pap<<', '>>ast<<', '>>cat<<', '>>ind<<', '>>glg<<', '>>wln<<', '>>spa<<', '>>fra<<', '>>ron<<', '>>por<<', '>>ita<<', '>>oci<<', '>>arg<<', '>>min<<']
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
[tokenizer.decode(t, skip_special_tokens=True) for t in translated]
["c'est une phrase en anglais que nous voulons traduire en français",
'Isto deve ir para o português.',
'Y esto al español']
Here is the code to see all available pretrained models on the hub:
from huggingface_hub import list_models
model_list = list_models()
org = "Helsinki-NLP"
model_ids = [x.modelId for x in model_list if x.modelId.startswith(org)]
suffix = [x.split("/")[1] for x in model_ids]
old_style_multi_models = [f"{org}/{s}" for s in suffix if s != s.lower()]
Old Style Multi-Lingual Models
These are the old-style multilingual models ported from the OPUS-MT-Train repo, along with the members of each language
group:
['Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU',
'Helsinki-NLP/opus-mt-ROMANCE-en',
'Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA',
'Helsinki-NLP/opus-mt-de-ZH',
'Helsinki-NLP/opus-mt-en-CELTIC',
'Helsinki-NLP/opus-mt-en-ROMANCE',
'Helsinki-NLP/opus-mt-es-NORWAY',
'Helsinki-NLP/opus-mt-fi-NORWAY',
'Helsinki-NLP/opus-mt-fi-ZH',
'Helsinki-NLP/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI',
'Helsinki-NLP/opus-mt-sv-NORWAY',
'Helsinki-NLP/opus-mt-sv-ZH']
GROUP_MEMBERS = {
'ZH': ['cmn', 'cn', 'yue', 'ze_zh', 'zh_cn', 'zh_CN', 'zh_HK', 'zh_tw', 'zh_TW', 'zh_yue', 'zhs', 'zht', 'zh'],
'ROMANCE': ['fr', 'fr_BE', 'fr_CA', 'fr_FR', 'wa', 'frp', 'oc', 'ca', 'rm', 'lld', 'fur', 'lij', 'lmo', 'es', 'es_AR', 'es_CL', 'es_CO', 'es_CR', 'es_DO', 'es_EC', 'es_ES', 'es_GT', 'es_HN', 'es_MX', 'es_NI', 'es_PA', 'es_PE', 'es_PR', 'es_SV', 'es_UY', 'es_VE', 'pt', 'pt_br', 'pt_BR', 'pt_PT', 'gl', 'lad', 'an', 'mwl', 'it', 'it_IT', 'co', 'nap', 'scn', 'vec', 'sc', 'ro', 'la'],
'NORTH_EU': ['de', 'nl', 'fy', 'af', 'da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],
'SCANDINAVIA': ['da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],
'SAMI': ['se', 'sma', 'smj', 'smn', 'sms'],
'NORWAY': ['nb_NO', 'nb', 'nn_NO', 'nn', 'nog', 'no_nb', 'no'],
'CELTIC': ['ga', 'cy', 'br', 'gd', 'kw', 'gv']
}
Example of translating English to many Romance languages, using old-style 2-character language codes:
from transformers import MarianMTModel, MarianTokenizer
src_text = [
... ">>fr<< this is a sentence in english that we want to translate to french",
... ">>pt<< This should go to portuguese",
... ">>es<< And this to Spanish",
... ]
model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
["c'est une phrase en anglais que nous voulons traduire en français",
'Isto deve ir para o português.',
'Y esto al español']
Documentation resources
Translation task guide
Summarization task guide
Causal language modeling task guide
MarianConfig
class transformers.MarianConfig
(
vocab_size = 58101
decoder_vocab_size = None
max_position_embeddings = 1024
encoder_layers = 12
encoder_ffn_dim = 4096
encoder_attention_heads = 16
decoder_layers = 12
decoder_ffn_dim = 4096
decoder_attention_heads = 16
encoder_layerdrop = 0.0
decoder_layerdrop = 0.0
use_cache = True
is_encoder_decoder = True
activation_function = 'gelu'
d_model = 1024
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
decoder_start_token_id = 58100
scale_embedding = False
pad_token_id = 58100
eos_token_id = 0
forced_eos_token_id = 0
share_encoder_decoder_embeddings = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 58101) —
Vocabulary size of the Marian model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling MarianModel or TFMarianModel.
d_model (int, optional, defaults to 1024) —
Dimensionality of the layers and the pooler layer.
encoder_layers (int, optional, defaults to 12) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 12) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the decoder.
encoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the encoder.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
max_position_embeddings (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
scale_embedding (bool, optional, defaults to False) —
Scale embeddings by dividing by sqrt(d_model).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
forced_eos_token_id (int, optional, defaults to 0) —
The id of the token to force as the last generated token when max_length is reached. Usually set to
eos_token_id.
This is the configuration class to store the configuration of a MarianModel. It is used to instantiate a
Marian model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Marian
Helsinki-NLP/opus-mt-en-de architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import MarianModel, MarianConfig
# Initializing a Marian Helsinki-NLP/opus-mt-en-de style configuration
configuration = MarianConfig()
# Initializing a model from the Helsinki-NLP/opus-mt-en-de style configuration
model = MarianModel(configuration)
# Accessing the model configuration
configuration = model.config
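For quick experiments or unit tests, a deliberately small, randomly initialized model can be built from a custom configuration; the sizes below are arbitrary:
from transformers import MarianConfig, MarianModel

tiny_config = MarianConfig(
    vocab_size=1000,
    d_model=64,
    encoder_layers=2,
    decoder_layers=2,
    encoder_attention_heads=2,
    decoder_attention_heads=2,
    encoder_ffn_dim=128,
    decoder_ffn_dim=128,
    pad_token_id=999,
    decoder_start_token_id=999,
)
tiny_model = MarianModel(tiny_config)
print(sum(p.numel() for p in tiny_model.parameters()))  # parameter count of the tiny model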
MarianTokenizer
class transformers.MarianTokenizer
(
source_spm
target_spm
vocab
target_vocab_file = None
source_lang = None
target_lang = None
unk_token = '<unk>'
eos_token = '</s>'
pad_token = '<pad>'
model_max_length = 512
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
separate_vocabs = False
**kwargs
)
Parameters
source_spm (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary for the source language.
target_spm (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary for the target language.
source_lang (str, optional) —
A string representing the source language.
target_lang (str, optional) —
A string representing the target language.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
model_max_length (int, optional, defaults to 512) —
The maximum sentence length the model accepts.
additional_special_tokens (List[str], optional, defaults to ["<eop>", "<eod>"]) —
Additional special tokens used by the tokenizer.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Construct a Marian tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Examples:
from transformers import MarianForCausalLM, MarianTokenizer
model = MarianForCausalLM.from_pretrained("Helsinki-NLP/opus-mt-en-de")
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
src_texts = ["I am a small frog.", "Tom asked his teacher for advice."]
tgt_texts = ["Ich bin ein kleiner Frosch.", "Tom bat seinen Lehrer um Rat."] # optional
inputs = tokenizer(src_texts, text_target=tgt_texts, return_tensors="pt", padding=True)
outputs = model(**inputs) # should work
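The sp_model_kwargs argument described above can, for example, enable SentencePiece subword regularization at tokenization time; a hedged sketch (the nbest_size and alpha values are illustrative only):
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained(
    "Helsinki-NLP/opus-mt-en-de",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)
# with sampling enabled, repeated calls may produce different segmentations
print(tokenizer.tokenize("I am a small frog."))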
build_inputs_with_special_tokens
(
token_ids_0
token_ids_1 = None
)
Build model inputs from a sequence by appending eos_token_id.
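A short sketch of what this method does in practice; it appends the eos token id to the sequence (or to the concatenation of both sequences when token_ids_1 is given):
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("I am a small frog."))
print(tokenizer.build_inputs_with_special_tokens(ids))  # the same ids followed by eos_token_id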
MarianModel
class transformers.MarianModel
(
config: MarianConfig
)
Parameters
config (MarianConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare Marian Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Union[typing.Tuple[torch.Tensor], transformers.modeling_outputs.BaseModelOutput, NoneType] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Marian uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MarianConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The MarianModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MarianModel
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = MarianModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")
inputs = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt")
decoder_inputs = tokenizer(
... "<pad> Studien haben gezeigt dass es hilfreich ist einen Hund zu besitzen",
... return_tensors="pt",
... add_special_tokens=False,
... )
outputs = model(input_ids=inputs.input_ids, decoder_input_ids=decoder_inputs.input_ids)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 26, 512]
MarianMTModel
class transformers.MarianMTModel
(
config: MarianConfig
)
Parameters
config (MarianConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The Marian Model with a language modeling head. Can be used for translation.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Union[typing.Tuple[torch.Tensor], transformers.modeling_outputs.BaseModelOutput, NoneType] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Marian uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MarianConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The MarianMTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
PyTorch version of marian-nmt’s transformer.h (C++). Designed for the OPUS-NMT translation checkpoints. Available
models are listed here.
Examples:
from transformers import AutoTokenizer, MarianMTModel
src = "fr" # source language
trg = "en" # target language
model_name = f"Helsinki-NLP/opus-mt-{src}-{trg}"
model = MarianMTModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
sample_text = "où est l'arrêt de bus ?"
batch = tokenizer([sample_text], return_tensors="pt")
generated_ids = model.generate(**batch)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
"Where's the bus stop?"
MarianForCausalLM
class transformers.MarianForCausalLM
(
config
)
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of
shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of
shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional
tensors are only required when the model is used as a decoder in a Sequence to Sequence model.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those
that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of
all decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding
(see past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MarianConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
from transformers import AutoTokenizer, MarianForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-fr-en")
model = MarianForCausalLM.from_pretrained("Helsinki-NLP/opus-mt-fr-en", add_cross_attention=False)
assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
expected_shape = [1, inputs.input_ids.shape[-1], model.config.vocab_size]
assert list(logits.shape) == expected_shape  # one score per vocabulary token for every input position
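As described for past_key_values above, cached key/value states can be reused so the attention over the prefix is not recomputed when decoding token by token. The following is a minimal sketch of such incremental decoding with MarianForCausalLM (greedy next-token choice, same checkpoint as the example above), not a prescribed recipe:
import torch
from transformers import AutoTokenizer, MarianForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-fr-en")
model = MarianForCausalLM.from_pretrained("Helsinki-NLP/opus-mt-fr-en", add_cross_attention=False)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

with torch.no_grad():
    # First pass over the full prefix; ask for the cache.
    outputs = model(input_ids=inputs.input_ids, use_cache=True)
    past_key_values = outputs.past_key_values

    # Later passes only need the newly chosen token plus the cache.
    next_token = outputs.logits[:, -1:].argmax(dim=-1)  # shape (batch_size, 1)
    outputs = model(input_ids=next_token, past_key_values=past_key_values, use_cache=True)
    past_key_values = outputs.past_key_values  # extended by one step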
TFMarianModel
class transformers.TFMarianModel
(
*args
**kwargs
)
Parameters
config (MarianConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Marian Model outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
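As a quick illustration of the three formats listed above, the sketch below feeds the same tensors to TFMarianModel as keyword arguments, as a list, and as a dictionary; the checkpoint is only an example and all three calls should produce the same outputs:
from transformers import AutoTokenizer, TFMarianModel

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = TFMarianModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")
enc = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. keyword arguments (PyTorch-style)
out_kwargs = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])
# 2. a list in the first positional argument, in the order given in the docstring
out_list = model([enc["input_ids"], enc["attention_mask"]])
# 3. a dictionary mapping input names to tensors
out_dict = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})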
call
(
input_ids: tf.Tensor | None = None
attention_mask: tf.Tensor | None = None
decoder_input_ids: tf.Tensor | None = None
decoder_attention_mask: tf.Tensor | None = None
decoder_position_ids: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
decoder_head_mask: tf.Tensor | None = None
cross_attn_head_mask: tf.Tensor | None = None
encoder_outputs: tf.Tensor | None = None
past_key_values: Tuple[Tuple[tf.Tensor]] | None = None
inputs_embeds: tf.Tensor | None = None
decoder_inputs_embeds: tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
**kwargs
)
→
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Marian uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
A default mask that ignores pad tokens will be created. It is not recommended to set this for most use cases.
decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) —
A sequence of hidden-states at the output of the last layer of the encoder, of shape (batch_size, sequence_length, hidden_size). Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MarianConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFMarianModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFMarianModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = TFMarianModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
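As noted in the decoder_input_ids description above, Marian starts decoding from pad_token_id. The sketch below builds the decoder inputs explicitly rather than relying on the defaults; it reuses the checkpoint from the example above and is only meant to illustrate the argument:
import tensorflow as tf
from transformers import AutoTokenizer, TFMarianModel

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = TFMarianModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")

inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
batch_size = inputs["input_ids"].shape[0]

# Marian expects decoding to start from the pad token.
decoder_input_ids = tf.fill((batch_size, 1), model.config.pad_token_id)

outputs = model(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    decoder_input_ids=decoder_input_ids,
)
print(outputs.last_hidden_state.shape)  # (batch_size, 1, hidden_size)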
TFMarianMTModel
class transformers.TFMarianMTModel
(
*args
**kwargs
)
Parameters
config (MarianConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The Marian Model with a language modeling head. Can be used for translation.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: tf.Tensor | None = None
attention_mask: tf.Tensor | None = None
decoder_input_ids: tf.Tensor | None = None
decoder_attention_mask: tf.Tensor | None = None
decoder_position_ids: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
decoder_head_mask: tf.Tensor | None = None
cross_attn_head_mask: tf.Tensor | None = None
encoder_outputs: Optional[TFBaseModelOutput] = None
past_key_values: Tuple[Tuple[tf.Tensor]] | None = None
inputs_embeds: tf.Tensor | None = None
decoder_inputs_embeds: tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: tf.Tensor | None = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Marian uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
A default mask that ignores pad tokens will be created. It is not recommended to set this for most use cases.
decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) —
A sequence of hidden-states at the output of the last layer of the encoder, of shape (batch_size, sequence_length, hidden_size). Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MarianConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFMarianMTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
TF version of marian-nmt's transformer.h (C++). Designed for the OPUS-NMT translation checkpoints. Available
models are listed here.
Examples:
from transformers import AutoTokenizer, TFMarianMTModel
src = "fr" # source language
trg = "en" # target language
sample_text = "où est l'arrêt de bus ?"
model_name = f"Helsinki-NLP/opus-mt-{src}-{trg}"
model = TFMarianMTModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
batch = tokenizer([sample_text], return_tensors="tf")
gen = model.generate(**batch)
tokenizer.batch_decode(gen, skip_special_tokens=True)  # -> ["Where is the bus stop ?"]
FlaxMarianModel
class transformers.FlaxMarianModel
(
config: MarianConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (MarianConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified, all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The bare Marian Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MarianConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxMarianPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxMarianModel
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = FlaxMarianModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
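The JIT compilation mentioned above can be applied to the forward pass by passing the parameters explicitly, so the call is traced as a pure function of its arguments. A minimal sketch, reusing the checkpoint from the example above:
import jax
from transformers import AutoTokenizer, FlaxMarianModel

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = FlaxMarianModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")

@jax.jit
def encode_decode(params, input_ids, attention_mask):
    # `params` is threaded through explicitly so jax.jit traces a pure function.
    return model(input_ids=input_ids, attention_mask=attention_mask, params=params).last_hidden_state

last_hidden_states = encode_decode(model.params, inputs["input_ids"], inputs["attention_mask"])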
FlaxMarianMTModel
class transformers.FlaxMarianMTModel
(
config: MarianConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (MarianConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified, all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The Marian Model with a language modeling head. Can be used for translation.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MarianConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxMarianPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxMarianMTModel
model = FlaxMarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
text = "My friends are cool but they eat too many carbs."
input_ids = tokenizer(text, max_length=64, return_tensors="jax").input_ids
sequences = model.generate(input_ids, max_length=64, num_beams=2).sequences
outputs = tokenizer.batch_decode(sequences, skip_special_tokens=True)
# should give *Meine Freunde sind cool, aber sie essen zu viele Kohlenhydrate.*
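As described under the dtype parameter above, the computation can be run in half precision. A minimal sketch of loading the model with bfloat16 computation; note that the stored parameters keep their original dtype unless converted with to_bf16():
import jax.numpy as jnp
from transformers import FlaxMarianMTModel

# Run the computation in bfloat16; the parameters stay in float32
# unless converted explicitly, e.g. model.params = model.to_bf16(model.params).
model = FlaxMarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de", dtype=jnp.bfloat16)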