VisionTextDualEncoder
Overview
The [VisionTextDualEncoderModel] can be used to initialize a vision-text dual encoder model with
any pretrained vision autoencoding model as the vision encoder (e.g. ViT, BEiT, DeiT) and any pretrained text autoencoding model as the text encoder (e.g. RoBERTa, BERT). Two projection layers are added on top of both the vision and text encoder to project the output embeddings
to a shared latent space. The projection layers are randomly initialized, so the model should be fine-tuned on a
downstream task. This model can be used to align the vision-text embeddings using CLIP-like contrastive image-text
training and can then be used for zero-shot vision tasks such as image classification or retrieval.
In LiT: Zero-Shot Transfer with Locked-image Text Tuning it is shown how
leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on
new zero-shot vision tasks such as image classification or retrieval.
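Here is a minimal sketch of wiring up such a dual encoder from existing checkpoints; the ViT and BERT checkpoints below are illustrative choices, not the only supported backbones:

```python
from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

# Initialize a dual encoder from a pretrained vision and a pretrained text backbone.
# The two projection layers on top are randomly initialized, so the model still needs fine-tuning.
model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "google/vit-base-patch16-224", "google-bert/bert-base-uncased"
)

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)

# Save both so they can be reloaded later for CLIP-like contrastive training.
model.save_pretrained("vit-bert")
processor.save_pretrained("vit-bert")
```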
VisionTextDualEncoderConfig
[[autodoc]] VisionTextDualEncoderConfig
VisionTextDualEncoderProcessor
[[autodoc]] VisionTextDualEncoderProcessor
VisionTextDualEncoderModel
[[autodoc]] VisionTextDualEncoderModel
- forward
FlaxVisionTextDualEncoderModel
[[autodoc]] FlaxVisionTextDualEncoderModel
- call
TFVisionTextDualEncoderModel
[[autodoc]] TFVisionTextDualEncoderModel
- call
NLLB-MOE
Overview
The NLLB model was presented in No Language Left Behind: Scaling Human-Centered Machine Translation by Marta R. Costa-jussà, James Cross, Onur Çelebi,
Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula,
Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews,
Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers,
Safiyyah Saleem, Holger Schwenk, and Jeff Wang.
The abstract of the paper is the following:
Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today.
However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the
200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by
first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed
at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of
Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training
improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using
a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety.
Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system.
This model was contributed by Arthur Zucker.
The original code can be found here.
Usage tips
M2M100ForConditionalGeneration is the base model for both NLLB and NLLB-MoE.
NLLB-MoE is very similar to the NLLB model, but its feed-forward layer is based on the implementation of SwitchTransformers.
The tokenizer is the same as for the NLLB models.
Implementation differences with SwitchTransformers
The biggest difference is the way tokens are routed. NLLB-MoE uses a top-2 gate, which means that for each input token, only the two experts with the
highest predicted probabilities from the gating network are selected, and the remaining experts are ignored. In SwitchTransformers, only the top-1 probabilities are computed,
which means that tokens are less likely to be forwarded to an expert. Moreover, if a token is not routed to any expert, SwitchTransformers still adds its unmodified hidden
states (kind of like a residual connection), while they are masked in NLLB's top-2 routing mechanism.
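To make the routing difference concrete, below is a minimal, illustrative sketch of a top-2 gate in plain PyTorch. It is not the actual [NllbMoeTop2Router] implementation (which also handles expert capacity and auxiliary losses), just the core idea of keeping the two highest-probability experts per token:

```python
import torch

def top2_route(hidden_states, router_weights):
    # hidden_states: (num_tokens, hidden_dim); router_weights: (hidden_dim, num_experts)
    router_logits = hidden_states @ router_weights            # (num_tokens, num_experts)
    router_probs = torch.softmax(router_logits, dim=-1)
    top2_probs, top2_experts = router_probs.topk(2, dim=-1)   # keep the two best experts per token
    # Build a dispatch mask: each token is sent only to its top-2 experts,
    # all other experts are ignored (dropped tokens are masked, not passed through unchanged).
    dispatch_mask = torch.zeros_like(router_probs).scatter(-1, top2_experts, 1.0)
    return dispatch_mask, top2_probs, top2_experts

hidden = torch.randn(4, 16)   # 4 tokens, hidden size 16
weights = torch.randn(16, 8)  # 8 experts
mask, probs, experts = top2_route(hidden, weights)
print(experts)  # indices of the two selected experts per token
```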
Generating with NLLB-MoE
The available checkpoints require around 350GB of storage. Make sure to use accelerate if you do not have enough RAM on your machine.
While generating the target text, set forced_bos_token_id to the target language id. The following
example shows how to translate English to French using the facebook/nllb-moe-54b model.
Note that we're using the BCP-47 code for French, fra_Latn. See here
for the list of all BCP-47 codes in the Flores 200 dataset.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-moe-54b")
article = "Previously, Ring's CEO, Jamie Siminoff, remarked the company started when his doorbell wasn't audible from his shop in his garage."
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"], max_length=50
)
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
"Auparavant, le PDG de Ring, Jamie Siminoff, a fait remarquer que la sociรฉtรฉ avait commencรฉ lorsque sa sonnette n'รฉtait pas audible depuis son magasin dans son garage."
Generating from any other language than English
English (eng_Latn) is set as the default language from which to translate. In order to specify that you'd like to translate from a different language,
you should specify the BCP-47 code in the src_lang keyword argument of the tokenizer initialization.
See the example below for a translation from Romanian to German:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b", src_lang="ron_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-moe-54b")
article = "Şeful ONU spune că nu există o soluţie militară în Siria"
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["deu_Latn"], max_length=30
)
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
```
Resources
Translation task guide
Summarization task guide
NllbMoeConfig
[[autodoc]] NllbMoeConfig
NllbMoeTop2Router
[[autodoc]] NllbMoeTop2Router
- route_tokens
- forward
NllbMoeSparseMLP
[[autodoc]] NllbMoeSparseMLP
- forward
NllbMoeModel
[[autodoc]] NllbMoeModel
- forward
NllbMoeForConditionalGeneration
[[autodoc]] NllbMoeForConditionalGeneration
- forward
GPTSAN-japanese
Overview
The GPTSAN-japanese model was released in the repository by Toshiyuki Sakamoto (tanreinama).
GPTSAN is a Japanese language model using Switch Transformer. It has the same structure as the model introduced as Prefix LM
in the T5 paper, and supports both Text Generation and Masked Language Modeling tasks. These basic tasks can similarly be
fine-tuned for translation or summarization.
Usage example
The generate() method can be used to generate text using the GPTSAN-japanese model.
```python
from transformers import AutoModel, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").cuda()
x_tok = tokenizer("は、", prefix_text="織田信長", return_tensors="pt")
torch.manual_seed(0)
gen_tok = model.generate(x_tok.input_ids.cuda(), token_type_ids=x_tok.token_type_ids.cuda(), max_new_tokens=20)
tokenizer.decode(gen_tok[0])
'織田信長は、2004年に「戦国BASARA」のために、豊臣秀吉'
```
GPTSAN Features
GPTSAN has some unique features. It has a model structure of Prefix-LM. It works as a shifted Masked Language Model for Prefix Input tokens. Un-prefixed inputs behave like normal generative models.
The Spout vector is a GPTSAN specific input. Spout is pre-trained with random inputs, but you can specify a class of text or an arbitrary vector during fine-tuning. This allows you to indicate the tendency of the generated text.
GPTSAN has a sparse Feed Forward based on Switch-Transformer. You can also add other layers and train them partially. See the original GPTSAN repository for details.
Prefix-LM Model
GPTSAN has the structure of the model named Prefix-LM in the T5 paper. (The original GPTSAN repository calls it hybrid)
In GPTSAN, the Prefix part of Prefix-LM, that is, the input positions that can be referenced by both the tokens before and after them, can be specified with any length.
Arbitrary lengths can also be specified differently for each batch.
This length applies to the text entered in prefix_text for the tokenizer.
The tokenizer returns the mask of the Prefix part of Prefix-LM as token_type_ids.
The model treats the part where token_type_ids is 1 as the Prefix part, that is, positions whose input can refer to tokens both before and after them.
Usage tips
Specifying the Prefix part is done with a mask passed to self-attention.
When token_type_ids=None or all zero, it is equivalent to a regular causal mask.
For example:
```
x_token = tokenizer("ｱｲｳｴ")
input_ids:      | SOT | SEG | ｱ | ｲ | ｳ | ｴ |
token_type_ids: | 1   | 0   | 0 | 0 | 0 | 0 |
prefix_lm_mask:
SOT | 1 0 0 0 0 0 |
SEG | 1 1 0 0 0 0 |
ｱ   | 1 1 1 0 0 0 |
ｲ   | 1 1 1 1 0 0 |
ｳ   | 1 1 1 1 1 0 |
ｴ   | 1 1 1 1 1 1 |

x_token = tokenizer("", prefix_text="ｱｲｳｴ")
input_ids:      | SOT | ｱ | ｲ | ｳ | ｴ | SEG |
token_type_ids: | 1   | 1 | 1 | 1 | 1 | 0   |
prefix_lm_mask:
SOT | 1 1 1 1 1 0 |
ｱ   | 1 1 1 1 1 0 |
ｲ   | 1 1 1 1 1 0 |
ｳ   | 1 1 1 1 1 0 |
ｴ   | 1 1 1 1 1 0 |
SEG | 1 1 1 1 1 1 |

x_token = tokenizer("ｳｴ", prefix_text="ｱｲ")
input_ids:      | SOT | ｱ | ｲ | SEG | ｳ | ｴ |
token_type_ids: | 1   | 1 | 1 | 0   | 0 | 0 |
prefix_lm_mask:
SOT | 1 1 1 0 0 0 |
ｱ   | 1 1 1 0 0 0 |
ｲ   | 1 1 1 0 0 0 |
SEG | 1 1 1 1 0 0 |
ｳ   | 1 1 1 1 1 0 |
ｴ   | 1 1 1 1 1 1 |
```
Spout Vector
A Spout Vector is a special vector for controlling text generation.
This vector is treated as the first embedding in self-attention to apply external attention to the generated tokens.
In the pre-trained model published as Tanrei/GPTSAN-japanese, the Spout Vector is a 128-dimensional vector that passes through 8 fully connected layers in the model and is projected into the space acting as external attention.
The Spout Vector projected by the fully connected layers is split and passed to all self-attention layers.
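As a sketch of supplying a Spout vector at generation time, assuming the checkpoint's generate/forward accepts a spout keyword of shape (batch_size, config.d_spout) as described above:

```python
import torch
from transformers import AutoTokenizer, GPTSanJapaneseForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
model = GPTSanJapaneseForConditionalGeneration.from_pretrained("Tanrei/GPTSAN-japanese")

x_tok = tokenizer("は、", prefix_text="織田信長", return_tensors="pt")
# An arbitrary vector of size config.d_spout (128 for the published checkpoint); during
# fine-tuning this could encode a text class or any other conditioning signal.
spout = torch.rand(1, model.config.d_spout)
gen_tok = model.generate(
    x_tok.input_ids,
    token_type_ids=x_tok.token_type_ids,
    spout=spout,
    max_new_tokens=20,
)
print(tokenizer.decode(gen_tok[0]))
```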
GPTSanJapaneseConfig
[[autodoc]] GPTSanJapaneseConfig
GPTSanJapaneseTokenizer
[[autodoc]] GPTSanJapaneseTokenizer
GPTSanJapaneseModel
[[autodoc]] GPTSanJapaneseModel
GPTSanJapaneseForConditionalGeneration
[[autodoc]] GPTSanJapaneseForConditionalGeneration
- forward
Neighborhood Attention Transformer
Overview
NAT was proposed in Neighborhood Attention Transformer
by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi.
It is a hierarchical vision transformer based on Neighborhood Attention, a sliding-window self attention pattern.
The abstract from the paper is the following:
*We present Neighborhood Attention (NA), the first efficient and scalable sliding-window attention mechanism for vision.
NA is a pixel-wise operation, localizing self attention (SA) to the nearest neighboring pixels, and therefore enjoys a
linear time and space complexity compared to the quadratic complexity of SA. The sliding-window pattern allows NA's
receptive field to grow without needing extra pixel shifts, and preserves translational equivariance, unlike
Swin Transformer's Window Self Attention (WSA). We develop NATTEN (Neighborhood Attention Extension), a Python package
with efficient C++ and CUDA kernels, which allows NA to run up to 40% faster than Swin's WSA while using up to 25% less
memory. We further present Neighborhood Attention Transformer (NAT), a new hierarchical transformer design based on NA
that boosts image classification and downstream vision performance. Experimental results on NAT are competitive;
NAT-Tiny reaches 83.2% top-1 accuracy on ImageNet, 51.4% mAP on MS-COCO and 48.4% mIoU on ADE20K, which is 1.9%
ImageNet accuracy, 1.0% COCO mAP, and 2.6% ADE20K mIoU improvement over a Swin model with similar size.*
Neighborhood Attention compared to other attention patterns.
Taken from the original paper.
This model was contributed by Ali Hassani.
The original code can be found here.
Usage tips
One can use the [AutoImageProcessor] API to prepare images for the model.
NAT can be used as a backbone. When output_hidden_states = True,
it will output both hidden_states and reshaped_hidden_states.
The reshaped_hidden_states have a shape of (batch_size, num_channels, height, width) rather than
(batch_size, height, width, num_channels).
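The snippet below sketches these tips end to end: prepare an image with [AutoImageProcessor] and classify it with [NatForImageClassification]. The shi-labs/nat-mini-in1k-224 checkpoint is an illustrative choice, and NATTEN must be installed for the model to run:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, NatForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("shi-labs/nat-mini-in1k-224")
model = NatForImageClassification.from_pretrained("shi-labs/nat-mini-in1k-224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```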
Notes:
- NAT depends on NATTEN's implementation of Neighborhood Attention.
You can install it with pre-built wheels for Linux by referring to shi-labs.com/natten,
or build on your system by running pip install natten.
Note that the latter will likely take time to compile. NATTEN does not support Windows devices yet.
- Only a patch size of 4 is supported at the moment.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with NAT.
[NatForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
NatConfig
[[autodoc]] NatConfig
NatModel
[[autodoc]] NatModel
- forward
NatForImageClassification
[[autodoc]] NatForImageClassification
- forward
ALBERT
Overview
The ALBERT model was proposed in ALBERT: A Lite BERT for Self-supervised Learning of Language Representations by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma,
Radu Soricut. It presents two parameter-reduction techniques to lower memory consumption and increase the training
speed of BERT:
Splitting the embedding matrix into two smaller matrices.
Using repeating layers split among groups.
The abstract from the paper is the following:
Increasing model size when pretraining natural language representations often results in improved performance on
downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations,
longer training times, and unexpected model degradation. To address these problems, we present two parameter-reduction
techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows
that our proposed methods lead to models that scale much better compared to the original BERT. We also use a
self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks
with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and
SQuAD benchmarks while having fewer parameters compared to BERT-large.
This model was contributed by lysandre. This model jax version was contributed by
kamalkraj. The original code can be found here.
Usage tips
ALBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather
than the left.
ALBERT uses repeating layers which results in a small memory footprint; however, the computational cost remains
similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same
number of (repeating) layers.
Embedding size E is different from hidden size H, which is justified because the embeddings are context independent (one embedding vector represents one token), whereas hidden states are context dependent (one hidden state represents a sequence of tokens), so it is more logical to have H >> E. Also, the embedding matrix is large since it is V x E (V being the vocab size). If E < H, it has fewer parameters.
Layers are split in groups that share parameters (to save memory).
Next sentence prediction is replaced by a sentence ordering prediction: in the inputs, we have two sentences A and B (that are consecutive) and we either feed A followed by B or B followed by A. The model must predict if they have been swapped or not.
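As a minimal sketch of how the two parameter-reduction techniques surface in [AlbertConfig] (the numbers below are illustrative, not a released checkpoint):

```python
from transformers import AlbertConfig, AlbertModel

# Factorized embeddings: embedding_size (E) is much smaller than hidden_size (H),
# and the repeated layers share parameters across groups (num_hidden_groups).
config = AlbertConfig(
    vocab_size=30000,
    embedding_size=128,    # E
    hidden_size=768,       # H, with H >> E
    num_hidden_layers=12,  # layers iterated through at runtime
    num_hidden_groups=1,   # all 12 layers share one set of parameters
)
model = AlbertModel(config)
print(sum(p.numel() for p in model.parameters()))  # far fewer parameters than a comparable BERT
```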
Resources
The resources provided in the following sections consist of a list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ALBERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
[AlbertForSequenceClassification] is supported by this example script.
[TFAlbertForSequenceClassification] is supported by this example script.
[FlaxAlbertForSequenceClassification] is supported by this example script and notebook.
Check the Text classification task guide on how to use the model.
[AlbertForTokenClassification] is supported by this example script.
[TFAlbertForTokenClassification] is supported by this example script and notebook.
[FlaxAlbertForTokenClassification] is supported by this example script.
Token classification chapter of the 🤗 Hugging Face Course.
Check the Token classification task guide on how to use the model.
[AlbertForMaskedLM] is supported by this example script and notebook.
[TFAlbertForMaskedLM] is supported by this example script and notebook.
[FlaxAlbertForMaskedLM] is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Check the Masked language modeling task guide on how to use the model.
[AlbertForQuestionAnswering] is supported by this example script and notebook.
[TFAlbertForQuestionAnswering] is supported by this example script and notebook.
[FlaxAlbertForQuestionAnswering] is supported by this example script.
Question answering chapter of the 🤗 Hugging Face Course.
Check the Question answering task guide on how to use the model.
Multiple choice
[AlbertForMultipleChoice] is supported by this example script and notebook.
[TFAlbertForMultipleChoice] is supported by this example script and notebook.
Check the Multiple choice task guide on how to use the model.
AlbertConfig
[[autodoc]] AlbertConfig
AlbertTokenizer
[[autodoc]] AlbertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
AlbertTokenizerFast
[[autodoc]] AlbertTokenizerFast
Albert specific outputs
[[autodoc]] models.albert.modeling_albert.AlbertForPreTrainingOutput
[[autodoc]] models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput
AlbertModel
[[autodoc]] AlbertModel
- forward
AlbertForPreTraining
[[autodoc]] AlbertForPreTraining
- forward
AlbertForMaskedLM
[[autodoc]] AlbertForMaskedLM
- forward
AlbertForSequenceClassification
[[autodoc]] AlbertForSequenceClassification
- forward
AlbertForMultipleChoice
[[autodoc]] AlbertForMultipleChoice
AlbertForTokenClassification
[[autodoc]] AlbertForTokenClassification
- forward
AlbertForQuestionAnswering
[[autodoc]] AlbertForQuestionAnswering
- forward
TFAlbertModel
[[autodoc]] TFAlbertModel
- call
TFAlbertForPreTraining
[[autodoc]] TFAlbertForPreTraining
- call
TFAlbertForMaskedLM
[[autodoc]] TFAlbertForMaskedLM
- call
TFAlbertForSequenceClassification
[[autodoc]] TFAlbertForSequenceClassification
- call
TFAlbertForMultipleChoice
[[autodoc]] TFAlbertForMultipleChoice
- call
TFAlbertForTokenClassification
[[autodoc]] TFAlbertForTokenClassification
- call
TFAlbertForQuestionAnswering
[[autodoc]] TFAlbertForQuestionAnswering
- call
FlaxAlbertModel
[[autodoc]] FlaxAlbertModel
- call
FlaxAlbertForPreTraining
[[autodoc]] FlaxAlbertForPreTraining
- call
FlaxAlbertForMaskedLM
[[autodoc]] FlaxAlbertForMaskedLM
- call
FlaxAlbertForSequenceClassification
[[autodoc]] FlaxAlbertForSequenceClassification
- call
FlaxAlbertForMultipleChoice
[[autodoc]] FlaxAlbertForMultipleChoice
- call
FlaxAlbertForTokenClassification
[[autodoc]] FlaxAlbertForTokenClassification
- call
FlaxAlbertForQuestionAnswering
[[autodoc]] FlaxAlbertForQuestionAnswering
- call
ViTDet
Overview
The ViTDet model was proposed in Exploring Plain Vision Transformer Backbones for Object Detection by Yanghao Li, Hanzi Mao, Ross Girshick, Kaiming He.
VitDet leverages the plain Vision Transformer for the task of object detection.
The abstract from the paper is the following:
We explore the plain, non-hierarchical Vision Transformer (ViT) as a backbone network for object detection. This design enables the original ViT architecture to be fine-tuned for object detection without needing to redesign a hierarchical backbone for pre-training. With minimal adaptations for fine-tuning, our plain-backbone detector can achieve competitive results. Surprisingly, we observe: (i) it is sufficient to build a simple feature pyramid from a single-scale feature map (without the common FPN design) and (ii) it is sufficient to use window attention (without shifting) aided with very few cross-window propagation blocks. With plain ViT backbones pre-trained as Masked Autoencoders (MAE), our detector, named ViTDet, can compete with the previous leading methods that were all based on hierarchical backbones, reaching up to 61.3 AP_box on the COCO dataset using only ImageNet-1K pre-training. We hope our study will draw attention to research on plain-backbone detectors.
This model was contributed by nielsr.
The original code can be found here.
Tips:
At the moment, only the backbone is available.
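Since only the backbone is exposed, a minimal way to experiment is to instantiate a randomly initialized backbone from a config and run a dummy image through it (the sizes below are illustrative, not a released checkpoint):

```python
import torch
from transformers import VitDetConfig, VitDetModel

config = VitDetConfig(image_size=224, patch_size=16, hidden_size=768, num_hidden_layers=12)
model = VitDetModel(config)  # randomly initialized backbone

pixel_values = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    outputs = model(pixel_values)
print(outputs.last_hidden_state.shape)  # feature map produced by the plain ViT backbone
```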
VitDetConfig
[[autodoc]] VitDetConfig
VitDetModel
[[autodoc]] VitDetModel
- forward
Speech2Text
Overview
The Speech2Text model was proposed in fairseq S2T: Fast Speech-to-Text Modeling with fairseq by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. It's a
transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively. Speech2Text has been fine-tuned on several datasets for ASR and ST:
LibriSpeech, CoVoST 2, MuST-C.
This model was contributed by valhalla. The original code can be found here.
Inference
Speech2Text is a speech model that accepts a float tensor of log-mel filter-bank features extracted from the speech
signal. It's a transformer-based seq2seq model, so the transcripts/translations are generated autoregressively. The
generate() method can be used for inference.
The [Speech2TextFeatureExtractor] class is responsible for extracting the log-mel filter-bank
features. The [Speech2TextProcessor] wraps [Speech2TextFeatureExtractor] and
[Speech2TextTokenizer] into a single instance to both extract the input features and decode the
predicted token ids.
The feature extractor depends on torchaudio and the tokenizer depends on sentencepiece, so be sure to
install those packages before running the examples. You can either install those as extra speech dependencies with
pip install "transformers[speech,sentencepiece]" or install the packages separately with pip install torchaudio sentencepiece. torchaudio also requires the development version of the libsndfile package, which can be installed via a system package manager. On Ubuntu it can
be installed as follows: apt install libsndfile1-dev
ASR and Speech Translation
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
transcription
['mister quilter is the apostle of the middle classes and we are glad to welcome his gospel']
```
Multilingual speech translation
For multilingual speech translation models, eos_token_id is used as the decoder_start_token_id and
the target language id is forced as the first generated token. To force the target language id as the first
generated token, pass the forced_bos_token_id parameter to the generate() method. The following
example shows how to translate English speech to French text using the facebook/s2t-medium-mustc-multilingual-st
checkpoint.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-mustc-multilingual-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-mustc-multilingual-st")
ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(
inputs["input_features"],
attention_mask=inputs["attention_mask"],
forced_bos_token_id=processor.tokenizer.lang_code_to_id["fr"],
)
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
translation
["(Vidรฉo) Si M. Kilder est l'apossible des classes moyennes, et nous sommes heureux d'รชtre accueillis dans son รฉvangile."]
See the model hub to look for Speech2Text checkpoints.
Speech2TextConfig
[[autodoc]] Speech2TextConfig
Speech2TextTokenizer
[[autodoc]] Speech2TextTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
Speech2TextFeatureExtractor
[[autodoc]] Speech2TextFeatureExtractor
- call
Speech2TextProcessor
[[autodoc]] Speech2TextProcessor
- call
- from_pretrained
- save_pretrained
- batch_decode
- decode
Speech2TextModel
[[autodoc]] Speech2TextModel
- forward
Speech2TextForConditionalGeneration
[[autodoc]] Speech2TextForConditionalGeneration
- forward
TFSpeech2TextModel
[[autodoc]] TFSpeech2TextModel
- call
TFSpeech2TextForConditionalGeneration
[[autodoc]] TFSpeech2TextForConditionalGeneration
- call
Autoformer
Overview
The Autoformer model was proposed in Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long.
This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.
The abstract from the paper is the following:
Extending the forecasting time is a critical demand for real applications, such as extreme weather early warning and long-term energy consumption planning. This paper studies the long-term forecasting problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover the long-range dependencies. However, intricate temporal patterns of the long-term future prohibit the model from finding reliable dependencies. Also, Transformers have to adopt the sparse versions of point-wise self-attentions for long series efficiency, resulting in the information utilization bottleneck. Going beyond Transformers, we design Autoformer as a novel decomposition architecture with an Auto-Correlation mechanism. We break with the pre-processing convention of series decomposition and renovate it as a basic inner block of deep models. This design empowers Autoformer with progressive decomposition capacities for complex time series. Further, inspired by the stochastic process theory, we design the Auto-Correlation mechanism based on the series periodicity, which conducts the dependencies discovery and representation aggregation at the sub-series level. Auto-Correlation outperforms self-attention in both efficiency and accuracy. In long-term forecasting, Autoformer yields state-of-the-art accuracy, with a 38% relative improvement on six benchmarks, covering five practical applications: energy, traffic, economics, weather and disease.
This model was contributed by elisim and kashif.
The original code can be found here.
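As a minimal sketch of how the model is configured and instantiated (the values are illustrative; see the blog post under Resources below for an end-to-end forecasting example with real data):

```python
from transformers import AutoformerConfig, AutoformerForPrediction

# Illustrative values: forecast 24 future time steps from a context window of 48 past steps.
config = AutoformerConfig(prediction_length=24, context_length=48)
model = AutoformerForPrediction(config)

print(config.prediction_length, config.context_length)
```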
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Check out the Autoformer blog post on the HuggingFace blog: Yes, Transformers are Effective for Time Series Forecasting (+ Autoformer)
AutoformerConfig
[[autodoc]] AutoformerConfig
AutoformerModel
[[autodoc]] AutoformerModel
- forward
AutoformerForPrediction
[[autodoc]] AutoformerForPrediction
- forward
CLIPSeg
Overview
The CLIPSeg model was proposed in Image Segmentation Using Text and Image Prompts by Timo Lüddecke
and Alexander Ecker. CLIPSeg adds a minimal decoder on top of a frozen CLIP model for zero- and one-shot image segmentation.
The abstract from the paper is the following:
Image segmentation is usually addressed by training a
model for a fixed set of object classes. Incorporating additional classes or more complex queries later is expensive
as it requires re-training the model on a dataset that encompasses these expressions. Here we propose a system
that can generate image segmentations based on arbitrary
prompts at test time. A prompt can be either a text or an
image. This approach enables us to create a unified model
(trained once) for three common segmentation tasks, which
come with distinct challenges: referring expression segmentation, zero-shot segmentation and one-shot segmentation.
We build upon the CLIP model as a backbone which we extend with a transformer-based decoder that enables dense
prediction. After training on an extended version of the
PhraseCut dataset, our system generates a binary segmentation map for an image based on a free-text prompt or on
an additional image expressing the query. We analyze different variants of the latter image-based prompts in detail.
This novel hybrid input allows for dynamic adaptation not
only to the three segmentation tasks mentioned above, but
to any binary segmentation task where a text or image query
can be formulated. Finally, we find our system to adapt well
to generalized queries involving affordances or properties.
CLIPSeg overview. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Usage tips
[CLIPSegForImageSegmentation] adds a decoder on top of [CLIPSegModel]. The latter is identical to [CLIPModel].
[CLIPSegForImageSegmentation] can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text
(provided to the model as input_ids) or an image (provided to the model as conditional_pixel_values). One can also provide custom
conditional embeddings (provided to the model as conditional_embeddings).
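The snippet below sketches text-prompted segmentation following the points above; the CIDAS/clipseg-rd64-refined checkpoint and the prompts are illustrative choices:

```python
import torch
import requests
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompts = ["a cat", "a remote control"]

# One text prompt per copy of the image; the text is passed to the model as input_ids under the hood.
inputs = processor(text=prompts, images=[image] * len(prompts), padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.shape)  # one low-resolution segmentation logit map per prompt
```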
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIPSeg. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A notebook that illustrates zero-shot image segmentation with CLIPSeg.
CLIPSegConfig
[[autodoc]] CLIPSegConfig
- from_text_vision_configs
CLIPSegTextConfig
[[autodoc]] CLIPSegTextConfig
CLIPSegVisionConfig
[[autodoc]] CLIPSegVisionConfig
CLIPSegProcessor
[[autodoc]] CLIPSegProcessor
CLIPSegModel
[[autodoc]] CLIPSegModel
- forward
- get_text_features
- get_image_features
CLIPSegTextModel
[[autodoc]] CLIPSegTextModel
- forward
CLIPSegVisionModel
[[autodoc]] CLIPSegVisionModel
- forward
CLIPSegForImageSegmentation
[[autodoc]] CLIPSegForImageSegmentation
- forward
Conditional DETR
Overview
The Conditional DETR model was proposed in Conditional DETR for Fast Training Convergence by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. Conditional DETR presents a conditional cross-attention mechanism for fast DETR training. Conditional DETR converges 6.7× to 10× faster than DETR.
The abstract from the paper is the following:
The recently-developed DETR approach applies the transformer encoder and decoder architecture to object detection and achieves promising performance. In this paper, we handle the critical issue, slow training convergence, and present a conditional cross-attention mechanism for fast DETR training. Our approach is motivated by that the cross-attention in DETR relies highly on the content embeddings for localizing the four extremities and predicting the box, which increases the need for high-quality content embeddings and thus the training difficulty. Our approach, named conditional DETR, learns a conditional spatial query from the decoder embedding for decoder multi-head cross-attention. The benefit is that through the conditional spatial query, each cross-attention head is able to attend to a band containing a distinct region, e.g., one object extremity or a region inside the object box. This narrows down the spatial range for localizing the distinct regions for object classification and box regression, thus relaxing the dependence on the content embeddings and easing the training. Empirical results show that conditional DETR converges 6.7× faster for the backbones R50 and R101 and 10× faster for stronger backbones DC5-R50 and DC5-R101. Code is available at https://github.com/Atten4Vis/ConditionalDETR.
Conditional DETR shows much faster convergence compared to the original DETR. Taken from the original paper.
This model was contributed by DepuMeng. The original code can be found here.
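The snippet below sketches inference with [ConditionalDetrForObjectDetection]; it follows the usual DETR-style post-processing, and the microsoft/conditional-detr-resnet-50 checkpoint is an illustrative choice:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, ConditionalDetrForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")
model = ConditionalDetrForObjectDetection.from_pretrained("microsoft/conditional-detr-resnet-50")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs into boxes, labels and scores in image coordinates.
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```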
Resources
Object detection task guide
ConditionalDetrConfig
[[autodoc]] ConditionalDetrConfig
ConditionalDetrImageProcessor
[[autodoc]] ConditionalDetrImageProcessor
- preprocess
- post_process_object_detection
- post_process_instance_segmentation
- post_process_semantic_segmentation
- post_process_panoptic_segmentation
ConditionalDetrFeatureExtractor
[[autodoc]] ConditionalDetrFeatureExtractor
- call
- post_process_object_detection
- post_process_instance_segmentation
- post_process_semantic_segmentation
- post_process_panoptic_segmentation
ConditionalDetrModel
[[autodoc]] ConditionalDetrModel
- forward
ConditionalDetrForObjectDetection
[[autodoc]] ConditionalDetrForObjectDetection
- forward
ConditionalDetrForSegmentation
[[autodoc]] ConditionalDetrForSegmentation
- forward
VisualBERT
Overview
The VisualBERT model was proposed in VisualBERT: A Simple and Performant Baseline for Vision and Language by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
VisualBERT is a neural network trained on a variety of (image, text) pairs.
The abstract from the paper is the following:
We propose VisualBERT, a simple and flexible framework for modeling a broad range of vision-and-language tasks.
VisualBERT consists of a stack of Transformer layers that implicitly align elements of an input text and regions in an
associated input image with self-attention. We further propose two visually-grounded language model objectives for
pre-training VisualBERT on image caption data. Experiments on four vision-and-language tasks including VQA, VCR, NLVR2,
and Flickr30K show that VisualBERT outperforms or rivals with state-of-the-art models while being significantly
simpler. Further analysis demonstrates that VisualBERT can ground elements of language to image regions without any
explicit supervision and is even sensitive to syntactic relationships, tracking, for example, associations between
verbs and image regions corresponding to their arguments.
This model was contributed by gchhablani. The original code can be found here.
Usage tips
Most of the checkpoints provided work with the [VisualBertForPreTraining] configuration. Other
checkpoints provided are the fine-tuned checkpoints for down-stream tasks - VQA ('visualbert-vqa'), VCR
('visualbert-vcr'), NLVR2 ('visualbert-nlvr2'). Hence, if you are not working on these downstream tasks, it is
recommended that you use the pretrained checkpoints.
For the VCR task, the authors use a fine-tuned detector for generating visual embeddings, for all the checkpoints.
We do not provide the detector and its weights as a part of the package, but it will be available in the research
projects, and the states can be loaded directly into the detector provided.
VisualBERT is a multi-modal vision and language model. It can be used for visual question answering, multiple choice,
visual reasoning and region-to-phrase correspondence tasks. VisualBERT uses a BERT-like transformer to prepare
embeddings for image-text pairs. Both the text and visual features are then projected to a latent space with identical
dimension.
To feed images to the model, each image is passed through a pre-trained object detector and the regions and the
bounding boxes are extracted. The authors use the features generated after passing these regions through a pre-trained
CNN like ResNet as visual embeddings. They also add absolute position embeddings, and feed the resulting sequence of
vectors to a standard BERT model. The text input is concatenated in front of the visual embeddings in the embedding
layer, and is expected to be bounded by a [CLS] and a [SEP] token, as in BERT. The segment IDs must also be set
appropriately for the textual and visual parts.
The [BertTokenizer] is used to encode the text. A custom detector/image processor must be used
to get the visual embeddings. The following example notebooks show how to use VisualBERT with Detectron-like models:
VisualBERT VQA demo notebook : This notebook
contains an example on VisualBERT VQA.
Generate Embeddings for VisualBERT (Colab Notebook) : This notebook contains
an example on how to generate visual embeddings.
The following example shows how to get the last hidden state using [VisualBertModel]:
```python
import torch
from transformers import BertTokenizer, VisualBertModel
model = VisualBertModel.from_pretrained("uclanlp/visualbert-vqa-coco-pre")
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
inputs = tokenizer("What is the man eating?", return_tensors="pt")
# this is a custom function that returns the visual embeddings given the image path
visual_embeds = get_visual_embeddings(image_path)
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)
inputs.update(
{
"visual_embeds": visual_embeds,
"visual_token_type_ids": visual_token_type_ids,
"visual_attention_mask": visual_attention_mask,
}
)
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
```
VisualBertConfig
[[autodoc]] VisualBertConfig
VisualBertModel
[[autodoc]] VisualBertModel
- forward
VisualBertForPreTraining
[[autodoc]] VisualBertForPreTraining
- forward
VisualBertForQuestionAnswering
[[autodoc]] VisualBertForQuestionAnswering
- forward
VisualBertForMultipleChoice
[[autodoc]] VisualBertForMultipleChoice
- forward
VisualBertForVisualReasoning
[[autodoc]] VisualBertForVisualReasoning
- forward
VisualBertForRegionToPhraseAlignment
[[autodoc]] VisualBertForRegionToPhraseAlignment
- forward
BigBirdPegasus
Overview
The BigBird model was proposed in Big Bird: Transformers for Longer Sequences by
Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon,
Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird is a sparse-attention
based transformer which extends Transformer-based models, such as BERT, to much longer sequences. In addition to sparse
attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it
has been shown that applying sparse, global, and random attention approximates full attention, while being
computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context,
BigBird has shown improved performance on various long document NLP tasks, such as question answering and
summarization, compared to BERT or RoBERTa.
The abstract from the paper is the following:
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP.
Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence
length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that
reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and
is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our
theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire
sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to
8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context,
BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also
propose novel applications to genomics data.
The original code can be found here.
Usage tips
For an in-detail explanation on how BigBird's attention works, see this blog post.
BigBird comes with 2 implementations: original_full & block_sparse. For sequence lengths shorter than 1024, using
original_full is advised as there is no benefit in using block_sparse attention (see the sketch below).
The code currently uses a window size of 3 blocks and 2 global blocks.
Sequence length must be divisible by the block size.
The current implementation supports only ITC.
The current implementation doesn't support num_random_blocks = 0.
BigBirdPegasus uses the PegasusTokenizer.
BigBird is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
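A minimal sketch of switching between the two attention implementations via from_pretrained keyword arguments (the checkpoint is one of the released BigBirdPegasus models; the block_size and num_random_blocks values are illustrative):

```python
from transformers import BigBirdPegasusForConditionalGeneration

# Default sparse attention, suited to long inputs.
model = BigBirdPegasusForConditionalGeneration.from_pretrained(
    "google/bigbird-pegasus-large-arxiv",
    attention_type="block_sparse",
    block_size=64,
    num_random_blocks=3,
)

# For sequences shorter than 1024 tokens, full attention is just as good.
model = BigBirdPegasusForConditionalGeneration.from_pretrained(
    "google/bigbird-pegasus-large-arxiv",
    attention_type="original_full",
)
```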
Resources
Text classification task guide
Question answering task guide
Causal language modeling task guide
Translation task guide
Summarization task guide
BigBirdPegasusConfig
[[autodoc]] BigBirdPegasusConfig
- all
BigBirdPegasusModel
[[autodoc]] BigBirdPegasusModel
- forward
BigBirdPegasusForConditionalGeneration
[[autodoc]] BigBirdPegasusForConditionalGeneration
- forward
BigBirdPegasusForSequenceClassification
[[autodoc]] BigBirdPegasusForSequenceClassification
- forward
BigBirdPegasusForQuestionAnswering
[[autodoc]] BigBirdPegasusForQuestionAnswering
- forward
BigBirdPegasusForCausalLM
[[autodoc]] BigBirdPegasusForCausalLM
- forward
EfficientNet
Overview
The EfficientNet model was proposed in EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
by Mingxing Tan and Quoc V. Le. EfficientNets are a family of image classification models that achieve state-of-the-art accuracy while being an order of magnitude smaller and faster than previous models.
The abstract from the paper is the following:
Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet.
To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters.
This model was contributed by adirik.
The original code can be found here.
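The snippet below sketches image classification with EfficientNet; the google/efficientnet-b7 checkpoint is an illustrative choice among the released variants:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, EfficientNetForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("google/efficientnet-b7")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b7")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```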
EfficientNetConfig
[[autodoc]] EfficientNetConfig
EfficientNetImageProcessor
[[autodoc]] EfficientNetImageProcessor
- preprocess
EfficientNetModel
[[autodoc]] EfficientNetModel
- forward
EfficientNetForImageClassification
[[autodoc]] EfficientNetForImageClassification
- forward
FLAN-UL2
Overview
Flan-UL2 is an encoder-decoder model based on the T5 architecture. It uses the same configuration as the UL2 model released earlier last year.
It was fine-tuned using the "Flan" prompt tuning and dataset collection. Similar to Flan-T5, one can directly use FLAN-UL2 weights without finetuning the model.
According to the original blog, here are the notable improvements:
The original UL2 model was only trained with a receptive field of 512, which made it non-ideal for N-shot prompting where N is large.
The Flan-UL2 checkpoint uses a receptive field of 2048, which makes it more usable for few-shot in-context learning.
The original UL2 model also had mode switch tokens that were rather mandatory to get good performance. However, they were a little cumbersome as this often requires some changes during inference or finetuning. In this update/change, we continue training UL2 20B for an additional 100k steps (with small batch) to forget "mode tokens" before applying Flan instruction tuning. This Flan-UL2 checkpoint does not require mode tokens anymore.
Google has released the following variants:
The original checkpoints can be found here.
Running on low resource devices
The model is pretty heavy (~40GB in half precision) so if you just want to run the model, make sure you load your model in 8bit, and use device_map="auto" to make sure you don't have any OOM issue!
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-ul2", load_in_8bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['In a large skillet, brown the ground beef and onion over medium heat. Add the garlic']
```
Refer to T5's documentation page for API reference, tips, code examples and notebooks.
Nougat
Overview
The Nougat model was proposed in Nougat: Neural Optical Understanding for Academic Documents by
Lukas Blecher, Guillem Cucurull, Thomas Scialom, Robert Stojnic. Nougat uses the same architecture as Donut, meaning an image Transformer
encoder and an autoregressive text Transformer decoder to translate scientific PDFs to markdown, enabling easier access to them.
The abstract from the paper is the following:
Scientific knowledge is predominantly stored in books and scientific journals, often in the form of PDFs. However, the PDF format leads to a loss of semantic information, particularly for mathematical expressions. We propose Nougat (Neural Optical Understanding for Academic Documents), a Visual Transformer model that performs an Optical Character Recognition (OCR) task for processing scientific documents into a markup language, and demonstrate the effectiveness of our model on a new dataset of scientific documents. The proposed approach offers a promising solution to enhance the accessibility of scientific knowledge in the digital age, by bridging the gap between human-readable documents and machine-readable text. We release the models and code to accelerate future work on scientific text recognition.
Nougat high-level overview. Taken from the original paper.
This model was contributed by nielsr. The original code can be found
here.
Usage tips
The quickest way to get started with Nougat is by checking the tutorial
notebooks, which show how to use the model
at inference time as well as fine-tuning on custom data.
Nougat is always used within the VisionEncoderDecoder framework. The model is identical to Donut in terms of architecture.
Inference
Nougat's [VisionEncoderDecoder] model accepts images as input and makes use of
[~generation.GenerationMixin.generate] to autoregressively generate text given the input image.
The [NougatImageProcessor] class is responsible for preprocessing the input image and
[NougatTokenizerFast] decodes the generated target tokens to the target string. The
[NougatProcessor] wraps [NougatImageProcessor] and [NougatTokenizerFast] classes
into a single instance to both extract the input features and decode the predicted token ids.
Step-by-step PDF transcription
```python
from huggingface_hub import hf_hub_download
import re
from PIL import Image
from transformers import NougatProcessor, VisionEncoderDecoderModel
from datasets import load_dataset
import torch

processor = NougatProcessor.from_pretrained("facebook/nougat-base")
model = VisionEncoderDecoderModel.from_pretrained("facebook/nougat-base")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)  # doctest: +IGNORE_RESULT

# prepare PDF image for the model
filepath = hf_hub_download(repo_id="hf-internal-testing/fixtures_docvqa", filename="nougat_paper.png", repo_type="dataset")
image = Image.open(filepath)
pixel_values = processor(image, return_tensors="pt").pixel_values

# generate transcription (here we only generate 30 tokens)
outputs = model.generate(
    pixel_values.to(device),
    min_length=1,
    max_new_tokens=30,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
)

sequence = processor.batch_decode(outputs, skip_special_tokens=True)[0]
sequence = processor.post_process_generation(sequence, fix_markdown=False)
# note: we're using repr here just for the sake of printing the \n characters; feel free to just print the sequence
print(repr(sequence))
'\n\n# Nougat: Neural Optical Understanding for Academic Documents\n\n Lukas Blecher\n\nCorrespondence to: lblecher@'
```
See the model hub to look for Nougat checkpoints.
NougatImageProcessor
[[autodoc]] NougatImageProcessor
- preprocess
NougatTokenizerFast
[[autodoc]] NougatTokenizerFast
NougatProcessor
[[autodoc]] NougatProcessor
- call
- from_pretrained
- save_pretrained
- batch_decode
- decode
- post_process_generation
LLaVa
Overview
LLaVa is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture. In other words, it is a multi-modal version of LLMs fine-tuned for chat / instructions.
The LLaVa model was proposed in Visual Instruction Tuning and improved in Improved Baselines with Visual Instruction Tuning by Haotian Liu, Chunyuan Li, Yuheng Li and Yong Jae Lee.
The abstract from the paper is the following:
Large multimodal models (LMM) have recently shown encouraging progress with visual instruction tuning. In this note, we show that the fully-connected vision-language cross-modal connector in LLaVA is surprisingly powerful and data-efficient. With simple modifications to LLaVA, namely, using CLIP-ViT-L-336px with an MLP projection and adding academic-task-oriented VQA data with simple response formatting prompts, we establish stronger baselines that achieve state-of-the-art across 11 benchmarks. Our final 13B checkpoint uses merely 1.2M publicly available data, and finishes full training in ∼1 day on a single 8-A100 node. We hope this can make state-of-the-art LMM research more accessible. Code and model will be publicly available.
LLaVa architecture. Taken from the original paper.
This model was contributed by ArthurZ and ybelkada.
The original code can be found here.
Usage tips
We advise users to use padding_side="left" when computing batched generation as it leads to more accurate results. Simply make sure to call processor.tokenizer.padding_side = "left" before generating.
Note that the model has not been explicitly trained to process multiple images in the same prompt; although this is technically possible, you may experience inaccurate results.
For better results, we recommend users to prompt the model with the correct prompt format:
"USER: <image>\n<prompt>ASSISTANT:"
For multiple turns conversation:
"USER: <image>\n<prompt1>ASSISTANT: <answer1>USER: <prompt2>ASSISTANT: <answer2>USER: <prompt3>ASSISTANT:"
Using Flash Attention 2
Flash Attention 2 is an even faster, optimized version of the previous optimization, please refer to the Flash Attention 2 section of performance docs.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LLaVa.
A Google Colab demo on how to run Llava on a free-tier Google colab instance leveraging 4-bit inference.
A similar notebook showcasing batched inference. 🌎
LlavaConfig
[[autodoc]] LlavaConfig
LlavaProcessor
[[autodoc]] LlavaProcessor
LlavaForConditionalGeneration
[[autodoc]] LlavaForConditionalGeneration
- forward
MegatronGPT2
Overview
The MegatronGPT2 model was proposed in Megatron-LM: Training Multi-Billion Parameter Language Models Using Model
Parallelism by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley,
Jared Casper and Bryan Catanzaro.
The abstract from the paper is the following:
Recent work in language modeling demonstrates that training large transformer models advances the state of the art in
Natural Language Processing applications. However, very large models can be quite difficult to train due to memory
constraints. In this work, we present our techniques for training very large transformer models and implement a simple,
efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our
approach does not require a new compiler or library changes, is orthogonal and complimentary to pipeline model
parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We
illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain
15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline
that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance
the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9
billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in
BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we
achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA
accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy
of 89.4%).
This model was contributed by jdemouth. The original code can be found here.
That repository contains a multi-GPU and multi-node implementation of the Megatron Language models. In particular, it
contains a hybrid model parallel approach using "tensor parallel" and "pipeline parallel" techniques.
Usage tips
We have provided pretrained GPT2-345M checkpoints
for use in evaluating or fine-tuning downstream tasks.
To access these checkpoints, first sign up for and setup the NVIDIA GPU Cloud (NGC)
Registry CLI. Further documentation for downloading models can be found in the NGC documentation.
Alternatively, you can directly download the checkpoints using:
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O megatron_gpt2_345m_v0_0.zip
Once you have obtained the checkpoint from NVIDIA GPU Cloud (NGC), you have to convert it to a format that can easily
be loaded by the Hugging Face Transformers GPT2 implementation.
The following command allows you to do the conversion. We assume that the folder models/megatron_gpt2 contains
megatron_gpt2_345m_v0_0.zip and that the command is run from that folder:
python3 $PATH_TO_TRANSFORMERS/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py megatron_gpt2_345m_v0_0.zip
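After the conversion, the checkpoint can be loaded like any other GPT-2 checkpoint. Below is a minimal sketch assuming the converter wrote a standard Transformers checkpoint into the models/megatron_gpt2 folder; the exact output location depends on the conversion script and its arguments.
thon
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Assumed path: point this at the folder produced by the conversion script
model = GPT2LMHeadModel.from_pretrained("models/megatron_gpt2")
# The Megatron GPT2-345M checkpoint uses the standard GPT-2 BPE vocabulary
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

inputs = tokenizer("Megatron-LM makes it possible to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])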
The MegatronGPT2 architecture is the same as OpenAI GPT-2. Refer to the GPT-2 documentation for information on
configuration classes and their parameters.
|
OPT
Overview
The OPT model was proposed in Open Pre-trained Transformer Language Models by Meta AI.
OPT is a series of open-sourced large causal language models which perform similarly to GPT-3.
The abstract from the paper is the following:
Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.
This model was contributed by Arthur Zucker, Younes Belkada, and Patrick Von Platen.
The original code can be found here.
Tips:
- OPT has the same architecture as [BartDecoder].
- Contrary to GPT2, OPT adds the EOS token </s> to the beginning of every prompt.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OPT. If you're
interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
A notebook on fine-tuning OPT with PEFT, bitsandbytes, and Transformers. 🌎
A blog post on decoding strategies with OPT.
Causal language modeling chapter of the 🤗 Hugging Face Course.
[OPTForCausalLM] is supported by this causal language modeling example script and notebook.
[TFOPTForCausalLM] is supported by this causal language modeling example script and notebook.
[FlaxOPTForCausalLM] is supported by this causal language modeling example script.
Text classification task guide
[OPTForSequenceClassification] is supported by this example script and notebook.
[OPTForQuestionAnswering] is supported by this question answering example script and notebook.
Question answering chapter of the 🤗 Hugging Face Course.
⚡️ Inference
A blog post on How 🤗 Accelerate runs very large models thanks to PyTorch with OPT.
Combining OPT and Flash Attention 2
First, make sure to install the latest version of Flash Attention 2.
pip install -U flash-attn --no-build-isolation
Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g. torch.float16).
To load and run a model using Flash Attention 2, refer to the snippet below:
thon
import torch
from transformers import OPTForCausalLM, GPT2Tokenizer
device = "cuda" # the device to load the model onto
model = OPTForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16, attn_implementation="flash_attention_2")
tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-350m")
prompt = ("A chat between a curious human and the Statue of Liberty.\n\nHuman: What is your name?\nStatue: I am the "
"Statue of Liberty.\nHuman: Where do you live?\nStatue: New York City.\nHuman: How long have you lived "
"there?")
model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
model.to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
tokenizer.batch_decode(generated_ids)[0]
'A chat between a curious human and the Statue of Liberty.\n\nHuman: What is your name?\nStatue: I am the Statue of Liberty.\nHuman: Where do you live?\nStatue: New York City.\nHuman: How long have you lived there?\nStatue: I have lived here for about a year.\nHuman: What is your favorite place to eat?\nStatue: I love'
Expected speedups
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using facebook/opt-2.7b checkpoint and the Flash Attention 2 version of the model using two different sequence lengths.
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using facebook/opt-350m checkpoint and the Flash Attention 2 version of the model using two different sequence lengths.
OPTConfig
[[autodoc]] OPTConfig
OPTModel
[[autodoc]] OPTModel
- forward
OPTForCausalLM
[[autodoc]] OPTForCausalLM
- forward
OPTForSequenceClassification
[[autodoc]] OPTForSequenceClassification
- forward
OPTForQuestionAnswering
[[autodoc]] OPTForQuestionAnswering
- forward
TFOPTModel
[[autodoc]] TFOPTModel
- call
TFOPTForCausalLM
[[autodoc]] TFOPTForCausalLM
- call
FlaxOPTModel
[[autodoc]] FlaxOPTModel
- call
FlaxOPTForCausalLM
[[autodoc]] FlaxOPTForCausalLM
- call
|
T5v1.1
Overview
T5v1.1 was released in the google-research/text-to-text-transfer-transformer
repository by Colin Raffel et al. It's an improved version of the original T5 model.
This model was contributed by patrickvonplaten. The original code can be
found here.
Usage tips
One can directly plug in the weights of T5v1.1 into a T5 model, like so:
thon
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")
T5 Version 1.1 includes the following improvements compared to the original T5 model:
GEGLU activation in the feed-forward hidden layer, rather than ReLU. See this paper.
Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
Pre-trained on C4 only without mixing in the downstream tasks.
No parameter sharing between the embedding and classifier layer.
"xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger d_model and smaller
num_heads and d_ff.
Note: T5 Version 1.1 was only pre-trained on C4 excluding any supervised
training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5
model. Since T5v1.1 was pre-trained in an unsupervised fashion, there's no real advantage to using a task prefix during single-task
fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.
Google has released the following variants:
google/t5-v1_1-small
google/t5-v1_1-base
google/t5-v1_1-large
google/t5-v1_1-xl
google/t5-v1_1-xxl.
Refer to T5's documentation page for all API reference, tips, code examples and notebooks.
|
ViTMAE
Overview
The ViTMAE model was proposed in Masked Autoencoders Are Scalable Vision Learners by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li,
Piotr Dollár, Ross Girshick. The paper shows that, by pre-training a Vision Transformer (ViT) to reconstruct pixel values for masked patches, one can get results after
fine-tuning that outperform supervised pre-training.
The abstract from the paper is the following:
This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the
input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates
only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask
tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs
enables us to train large models efficiently and effectively: we accelerate training (by 3x or more) and improve accuracy. Our scalable approach allows for learning high-capacity
models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream
tasks outperforms supervised pre-training and shows promising scaling behavior.
MAE architecture. Taken from the original paper.
This model was contributed by nielsr. TensorFlow version of the model was contributed by sayakpaul and
ariG23498 (equal contribution). The original code can be found here.
Usage tips
MAE (masked auto encoding) is a method for self-supervised pre-training of Vision Transformers (ViTs). The pre-training objective is relatively simple:
by masking a large portion (75%) of the image patches, the model must reconstruct raw pixel values. One can use [ViTMAEForPreTraining] for this purpose.
After pre-training, one "throws away" the decoder used to reconstruct pixels, and one uses the encoder for fine-tuning/linear probing. This means that after
fine-tuning, one can directly plug in the weights into a [ViTForImageClassification].
One can use [ViTImageProcessor] to prepare images for the model. See the code examples, and the minimal sketch after these tips, for more info.
Note that the encoder of MAE is only used to encode the visual patches. The encoded patches are then concatenated with mask tokens, which the decoder (which also
consists of Transformer blocks) takes as input. Each mask token is a shared, learned vector that indicates the presence of a missing patch to be predicted. Fixed
sin/cos position embeddings are added both to the input of the encoder and the decoder.
For a visual understanding of how MAEs work you can check out this post.
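Below is a minimal sketch of a pre-training forward pass with [ViTMAEForPreTraining]; the facebook/vit-mae-base checkpoint and the COCO image URL are illustrative choices.
thon
import requests
from PIL import Image
from transformers import AutoImageProcessor, ViTMAEForPreTraining

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base")
model = ViTMAEForPreTraining.from_pretrained("facebook/vit-mae-base")

inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
loss = outputs.loss  # pixel reconstruction loss on the masked patches
mask = outputs.mask  # binary mask indicating which patches were masked (1) and kept (0)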
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViTMAE.
[ViTMAEForPreTraining] is supported by this example script, allowing you to pre-train the model from scratch/further pre-train the model on custom data.
A notebook that illustrates how to visualize reconstructed pixel values with [ViTMAEForPreTraining] can be found here.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ViTMAEConfig
[[autodoc]] ViTMAEConfig
ViTMAEModel
[[autodoc]] ViTMAEModel
- forward
ViTMAEForPreTraining
[[autodoc]] transformers.ViTMAEForPreTraining
- forward
TFViTMAEModel
[[autodoc]] TFViTMAEModel
- call
TFViTMAEForPreTraining
[[autodoc]] transformers.TFViTMAEForPreTraining
- call
|
I-BERT
Overview
The I-BERT model was proposed in I-BERT: Integer-only BERT Quantization by
Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney and Kurt Keutzer. It's a quantized version of RoBERTa running
inference up to four times faster.
The abstract from the paper is the following:
Transformer based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language
Processing tasks. However, their memory footprint, inference latency, and power consumption are prohibitive for
efficient inference at the edge, and even at the data center. While quantization can be a viable solution for this,
previous work on quantizing Transformer based models use floating-point arithmetic during inference, which cannot
efficiently utilize integer-only logical units such as the recent Turing Tensor Cores, or traditional integer-only ARM
processors. In this work, we propose I-BERT, a novel quantization scheme for Transformer based models that quantizes
the entire inference with integer-only arithmetic. Based on lightweight integer-only approximation methods for
nonlinear operations, e.g., GELU, Softmax, and Layer Normalization, I-BERT performs an end-to-end integer-only BERT
inference without any floating point calculation. We evaluate our approach on GLUE downstream tasks using
RoBERTa-Base/Large. We show that for both cases, I-BERT achieves similar (and slightly higher) accuracy as compared to
the full-precision baseline. Furthermore, our preliminary implementation of I-BERT shows a speedup of 2.4 - 4.0x for
INT8 inference on a T4 GPU system as compared to FP32 inference. The framework has been developed in PyTorch and has
been open-sourced.
This model was contributed by kssteven. The original code can be found here.
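Since I-BERT shares RoBERTa's interface, basic usage looks like any other RoBERTa-style encoder. A minimal sketch, assuming the kssteven/ibert-roberta-base checkpoint on the Hub:
thon
from transformers import AutoTokenizer, IBertModel

tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base")
model = IBertModel.from_pretrained("kssteven/ibert-roberta-base")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state  # contextual embeddings, shape (batch, seq_len, hidden_size)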
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
IBertConfig
[[autodoc]] IBertConfig
IBertModel
[[autodoc]] IBertModel
- forward
IBertForMaskedLM
[[autodoc]] IBertForMaskedLM
- forward
IBertForSequenceClassification
[[autodoc]] IBertForSequenceClassification
- forward
IBertForMultipleChoice
[[autodoc]] IBertForMultipleChoice
- forward
IBertForTokenClassification
[[autodoc]] IBertForTokenClassification
- forward
IBertForQuestionAnswering
[[autodoc]] IBertForQuestionAnswering
- forward |
Decision Transformer
Overview
The Decision Transformer model was proposed in Decision Transformer: Reinforcement Learning via Sequence Modeling
by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
The abstract from the paper is the following:
We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem.
This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances
in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that
casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or
compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked
Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our
Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity,
Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on
Atari, OpenAI Gym, and Key-to-Door tasks.
This version of the model is for tasks where the state is a vector.
This model was contributed by edbeeching. The original code can be found here.
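To make the expected inputs concrete, below is a minimal sketch of a forward pass on random data with a freshly initialized model; the state and action dimensions are arbitrary illustrative choices.
thon
import torch
from transformers import DecisionTransformerConfig, DecisionTransformerModel

# Illustrative dimensions: an 11-dimensional state vector and a 3-dimensional action space
config = DecisionTransformerConfig(state_dim=11, act_dim=3)
model = DecisionTransformerModel(config)

batch_size, seq_length = 1, 20
states = torch.randn(batch_size, seq_length, config.state_dim)
actions = torch.randn(batch_size, seq_length, config.act_dim)
rewards = torch.randn(batch_size, seq_length, 1)
returns_to_go = torch.randn(batch_size, seq_length, 1)
timesteps = torch.arange(seq_length).unsqueeze(0)
attention_mask = torch.ones(batch_size, seq_length, dtype=torch.long)

outputs = model(
    states=states,
    actions=actions,
    rewards=rewards,
    returns_to_go=returns_to_go,
    timesteps=timesteps,
    attention_mask=attention_mask,
)
action_preds = outputs.action_preds  # predicted actions, shape (batch, seq_length, act_dim)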
DecisionTransformerConfig
[[autodoc]] DecisionTransformerConfig
DecisionTransformerGPT2Model
[[autodoc]] DecisionTransformerGPT2Model
- forward
DecisionTransformerModel
[[autodoc]] DecisionTransformerModel
- forward |
Pegasus
Overview
The Pegasus model was proposed in PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019.
According to the abstract,
Pegasus' pretraining task is intentionally similar to summarization: important sentences are removed/masked from an
input document and are generated together as one output sequence from the remaining sentences, similar to an
extractive summary.
Pegasus achieves SOTA summarization performance on all 12 downstream tasks, as measured by ROUGE and human eval.
This model was contributed by sshleifer. The Authors' code can be found here.
Usage tips
Sequence-to-sequence model with the same encoder-decoder model architecture as BART. Pegasus is pre-trained jointly on two self-supervised objective functions: Masked Language Modeling (MLM) and a novel summarization specific pretraining objective, called Gap Sentence Generation (GSG).
MLM: encoder input tokens are randomly replaced by a mask token and have to be predicted by the encoder (like in BERT)
GSG: whole encoder input sentences are replaced by a second mask token and fed to the decoder, which has a causal mask to hide future words like a regular auto-regressive transformer decoder.
FP16 is not supported (help/ideas on this appreciated!).
The adafactor optimizer is recommended for pegasus fine-tuning.
Checkpoints
All the checkpoints are fine-tuned for summarization, besides
pegasus-large, from which the other checkpoints are fine-tuned:
Each checkpoint is 2.2 GB on disk and 568M parameters.
FP16 is not supported (help/ideas on this appreciated!).
Summarizing xsum in fp32 takes about 400ms/sample, with default parameters on a v100 GPU.
Full replication results and correctly pre-processed data can be found in this Issue.
Distilled checkpoints are described in this paper.
Implementation Notes
All models are transformer encoder-decoders with 16 layers in each component.
The implementation is completely inherited from [BartForConditionalGeneration]
Some key configuration differences:
static, sinusoidal position embeddings
the model starts generating with pad_token_id (which has 0 token_embedding) as the prefix.
more beams are used (num_beams=8)
All pretrained pegasus checkpoints are the same besides three attributes: tokenizer.model_max_length (maximum
input size), max_length (the maximum number of tokens to generate) and length_penalty.
The code to convert checkpoints trained in the author's repo can be
found in convert_pegasus_tf_to_pytorch.py.
Usage Example
thon
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
import torch
src_text = [
""" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""
]
model_name = "google/pegasus-xsum"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device)
batch = tokenizer(src_text, truncation=True, padding="longest", return_tensors="pt").to(device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
assert (
tgt_text[0]
== "California's largest electricity provider has turned off power to hundreds of thousands of customers."
)
Resources
Script to fine-tune pegasus
on the XSUM dataset. Data download instructions at examples/pytorch/summarization/.
Causal language modeling task guide
Translation task guide
Summarization task guide
PegasusConfig
[[autodoc]] PegasusConfig
PegasusTokenizer
warning: add_tokens does not work at the moment.
[[autodoc]] PegasusTokenizer
PegasusTokenizerFast
[[autodoc]] PegasusTokenizerFast
PegasusModel
[[autodoc]] PegasusModel
- forward
PegasusForConditionalGeneration
[[autodoc]] PegasusForConditionalGeneration
- forward
PegasusForCausalLM
[[autodoc]] PegasusForCausalLM
- forward
TFPegasusModel
[[autodoc]] TFPegasusModel
- call
TFPegasusForConditionalGeneration
[[autodoc]] TFPegasusForConditionalGeneration
- call
FlaxPegasusModel
[[autodoc]] FlaxPegasusModel
- call
- encode
- decode
FlaxPegasusForConditionalGeneration
[[autodoc]] FlaxPegasusForConditionalGeneration
- call
- encode
- decode
|
PoolFormer
Overview
The PoolFormer model was proposed in MetaFormer is Actually What You Need for Vision by Sea AI Labs. Instead of designing a complicated token mixer to achieve SOTA performance, the goal of this work is to demonstrate that the competence of transformer models largely stems from the general architecture, MetaFormer.
The abstract from the paper is the following:
Transformers have shown great potential in computer vision tasks. A common belief is their attention-based token mixer module contributes most to their competence. However, recent works show the attention-based module in transformers can be replaced by spatial MLPs and the resulted models still perform quite well. Based on this observation, we hypothesize that the general architecture of the transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed as PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design.
The figure below illustrates the architecture of PoolFormer. Taken from the original paper.
This model was contributed by heytanay. The original code can be found here.
Usage tips
PoolFormer has a hierarchical architecture, where instead of Attention, a simple Average Pooling layer is present. All checkpoints of the model can be found on the hub.
One can use [PoolFormerImageProcessor] to prepare images for the model (see the example sketch after the table below).
Like most models, PoolFormer comes in different sizes, the details of which can be found in the table below.
| Model variant | Depths | Hidden sizes | Params (M) | ImageNet-1k Top 1 |
| :---------------: | ------------- | ------------------- | :------------: | :-------------------: |
| s12 | [2, 2, 6, 2] | [64, 128, 320, 512] | 12 | 77.2 |
| s24 | [4, 4, 12, 4] | [64, 128, 320, 512] | 21 | 80.3 |
| s36 | [6, 6, 18, 6] | [64, 128, 320, 512] | 31 | 81.4 |
| m36 | [6, 6, 18, 6] | [96, 192, 384, 768] | 56 | 82.1 |
| m48 | [8, 8, 24, 8] | [96, 192, 384, 768] | 73 | 82.5 |
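Building on the tips above, here is a minimal image-classification sketch; the sail/poolformer_s12 checkpoint and the example image URL are illustrative choices.
thon
import requests
from PIL import Image
from transformers import AutoImageProcessor, PoolFormerForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("sail/poolformer_s12")
model = PoolFormerForImageClassification.from_pretrained("sail/poolformer_s12")

inputs = image_processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])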
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with PoolFormer.
[PoolFormerForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
PoolFormerConfig
[[autodoc]] PoolFormerConfig
PoolFormerFeatureExtractor
[[autodoc]] PoolFormerFeatureExtractor
- call
PoolFormerImageProcessor
[[autodoc]] PoolFormerImageProcessor
- preprocess
PoolFormerModel
[[autodoc]] PoolFormerModel
- forward
PoolFormerForImageClassification
[[autodoc]] PoolFormerForImageClassification
- forward |
YOSO
Overview
The YOSO model was proposed in You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling
by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. YOSO approximates standard softmax self-attention
via a Bernoulli sampling scheme based on Locality Sensitive Hashing (LSH). In principle, all the Bernoulli random variables can be sampled with
a single hash.
The abstract from the paper is the following:
Transformer-based models are widely used in natural language processing (NLP). Central to the transformer model is
the self-attention mechanism, which captures the interactions of token pairs in the input sequences and depends quadratically
on the sequence length. Training such models on longer sequences is expensive. In this paper, we show that a Bernoulli sampling
attention mechanism based on Locality Sensitive Hashing (LSH), decreases the quadratic complexity of such models to linear.
We bypass the quadratic cost by considering self-attention as a sum of individual tokens associated with Bernoulli random
variables that can, in principle, be sampled at once by a single hash (although in practice, this number may be a small constant).
This leads to an efficient sampling scheme to estimate self-attention which relies on specific modifications of
LSH (to enable deployment on GPU architectures). We evaluate our algorithm on the GLUE benchmark with standard 512 sequence
length where we see favorable performance relative to a standard pretrained Transformer. On the Long Range Arena (LRA) benchmark,
for evaluating performance on long sequences, our method achieves results consistent with softmax self-attention but with sizable
speed-ups and memory savings and often outperforms other efficient self-attention methods. Our code is available at this https URL
This model was contributed by novice03. The original code can be found here.
Usage tips
The YOSO attention algorithm is implemented through custom CUDA kernels, functions written in CUDA C++ that can be executed multiple times
in parallel on a GPU.
The kernels provide a fast_hash function, which approximates the random projections of the queries and keys using the Fast Hadamard Transform. Using these
hash codes, the lsh_cumulation function approximates self-attention via LSH-based Bernoulli sampling.
To use the custom kernels, the user should set config.use_expectation = False. To ensure that the kernels are compiled successfully,
the user must install the correct version of PyTorch and cudatoolkit. By default, config.use_expectation = True, which uses YOSO-E and
does not require compiling CUDA kernels.
YOSO Attention Algorithm. Taken from the original paper.
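A minimal usage sketch with the default YOSO-E setting described above (no custom CUDA kernels required); the uw-madison/yoso-4096 checkpoint is an illustrative choice.
thon
from transformers import AutoTokenizer, YosoModel

tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
# config.use_expectation defaults to True, so no CUDA kernel compilation is needed
model = YosoModel.from_pretrained("uw-madison/yoso-4096")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state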
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
YosoConfig
[[autodoc]] YosoConfig
YosoModel
[[autodoc]] YosoModel
- forward
YosoForMaskedLM
[[autodoc]] YosoForMaskedLM
- forward
YosoForSequenceClassification
[[autodoc]] YosoForSequenceClassification
- forward
YosoForMultipleChoice
[[autodoc]] YosoForMultipleChoice
- forward
YosoForTokenClassification
[[autodoc]] YosoForTokenClassification
- forward
YosoForQuestionAnswering
[[autodoc]] YosoForQuestionAnswering
- forward |
Trajectory Transformer
This model is in maintenance mode only, so we won't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0.
Overview
The Trajectory Transformer model was proposed in Offline Reinforcement Learning as One Big Sequence Modeling Problem by Michael Janner, Qiyang Li, Sergey Levine.
The abstract from the paper is the following:
Reinforcement learning (RL) is typically concerned with estimating stationary policies or single-step models,
leveraging the Markov property to factorize problems in time. However, we can also view RL as a generic sequence
modeling problem, with the goal being to produce a sequence of actions that leads to a sequence of high rewards.
Viewed in this way, it is tempting to consider whether high-capacity sequence prediction models that work well
in other domains, such as natural-language processing, can also provide effective solutions to the RL problem.
To this end, we explore how RL can be tackled with the tools of sequence modeling, using a Transformer architecture
to model distributions over trajectories and repurposing beam search as a planning algorithm. Framing RL as sequence
modeling problem simplifies a range of design decisions, allowing us to dispense with many of the components common
in offline RL algorithms. We demonstrate the flexibility of this approach across long-horizon dynamics prediction,
imitation learning, goal-conditioned RL, and offline RL. Further, we show that this approach can be combined with
existing model-free algorithms to yield a state-of-the-art planner in sparse-reward, long-horizon tasks.
This model was contributed by CarlCochet. The original code can be found here.
Usage tips
This Transformer is used for deep reinforcement learning. To use it, you need to create sequences from
actions, states and rewards from all previous timesteps. This model will treat all these elements together
as one big sequence (a trajectory).
TrajectoryTransformerConfig
[[autodoc]] TrajectoryTransformerConfig
TrajectoryTransformerModel
[[autodoc]] TrajectoryTransformerModel
- forward |
StableLM
Overview
StableLM 3B 4E1T was proposed in StableLM 3B 4E1T: Technical Report by Stability AI and is the first model in a series of multi-epoch pre-trained language models.
Model Details
StableLM 3B 4E1T is a decoder-only base language model pre-trained on 1 trillion tokens of diverse English and code datasets for four epochs.
The model architecture is transformer-based with partial Rotary Position Embeddings, SwiGLU activation, LayerNorm, etc.
We also provide StableLM Zephyr 3B, an instruction fine-tuned version of the model that can be used for chat-based applications.
Usage Tips
The architecture is similar to LLaMA but with RoPE applied to 25% of head embedding dimensions, LayerNorm instead of RMSNorm, and optional QKV bias terms.
StableLM 3B 4E1T-based models use the same tokenizer as [GPTNeoXTokenizerFast].
StableLM 3B 4E1T and StableLM Zephyr 3B can be found on the Hugging Face Hub.
The following code snippet demonstrates how to use StableLM 3B 4E1T for inference:
thon
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-3b-4e1t")
model = AutoModelForCausalLM.from_pretrained("stabilityai/stablelm-3b-4e1t")
model.to(device)
model_inputs = tokenizer("The weather is always wonderful in", return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_length=32, do_sample=True)
responses = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
responses
['The weather is always wonderful in Santa Barbara and, for visitors hoping to make the move to our beautiful seaside city, this town offers plenty of great places to']
Combining StableLM and Flash Attention 2
First, make sure to install the latest version of Flash Attention v2.
pip install -U flash-attn --no-build-isolation
Also make sure that your hardware is compatible with Flash-Attention 2. Read more about it in the official documentation of the flash-attn repository. Note: you must load your model in half-precision (e.g. torch.bfloat16).
Now, to run the model with Flash Attention 2, refer to the snippet below:
thon
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-3b-4e1t")
model = AutoModelForCausalLM.from_pretrained("stabilityai/stablelm-3b-4e1t", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2")
model.to(device)
model_inputs = tokenizer("The weather is always wonderful in", return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_length=32, do_sample=True)
responses = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
responses
['The weather is always wonderful in Santa Barbara and, for visitors hoping to make the move to our beautiful seaside city, this town offers plenty of great places to']
StableLmConfig
[[autodoc]] StableLmConfig
StableLmModel
[[autodoc]] StableLmModel
- forward
StableLmForCausalLM
[[autodoc]] StableLmForCausalLM
- forward
StableLmForSequenceClassification
[[autodoc]] StableLmForSequenceClassification
- forward |
BERTweet
Overview
The BERTweet model was proposed in BERTweet: A pre-trained language model for English Tweets by Dat Quoc Nguyen, Thanh Vu, Anh Tuan Nguyen.
The abstract from the paper is the following:
We present BERTweet, the first public large-scale pre-trained language model for English Tweets. Our BERTweet, having
the same architecture as BERT-base (Devlin et al., 2019), is trained using the RoBERTa pre-training procedure (Liu et
al., 2019). Experiments show that BERTweet outperforms strong baselines RoBERTa-base and XLM-R-base (Conneau et al.,
2020), producing better performance results than the previous state-of-the-art models on three Tweet NLP tasks:
Part-of-speech tagging, Named-entity recognition and text classification.
This model was contributed by dqnguyen. The original code can be found here.
Usage example
thon
import torch
from transformers import AutoModel, AutoTokenizer
bertweet = AutoModel.from_pretrained("vinai/bertweet-base")
For transformers v4.x+:
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False)
For transformers v3.x:
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
INPUT TWEET IS ALREADY NORMALIZED!
line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"
input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
features = bertweet(input_ids) # Models outputs are now tuples
With TensorFlow 2.0+:
from transformers import TFAutoModel
bertweet = TFAutoModel.from_pretrained("vinai/bertweet-base")
This implementation is the same as BERT, except for the tokenization method. Refer to the BERT documentation for
API reference information.
BertweetTokenizer
[[autodoc]] BertweetTokenizer |
BridgeTower
Overview
The BridgeTower model was proposed in BridgeTower: Building Bridges Between Encoders in Vision-Language Representative Learning by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. The goal of this model is to build a
bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder, thus achieving remarkable performance on various downstream tasks with almost negligible additional parameters and computational costs.
This paper has been accepted to the AAAI'23 conference.
The abstract from the paper is the following:
Vision-Language (VL) models with the TWO-TOWER architecture have dominated visual-language representation learning in recent years.
Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder.
Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BRIDGETOWER, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the crossmodal encoder.
This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BRIDGETOWER achieves state-of-the-art performance on various downstream vision-language tasks.
In particular, on the VQAv2 test-std set, BRIDGETOWER achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs.
Notably, when further scaling the model, BRIDGETOWER achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.
BridgeTower architecture. Taken from the original paper.
This model was contributed by Anahita Bhiwandiwalla, Tiep Le and Shaoyen Tseng. The original code can be found here.
Usage tips and examples
BridgeTower consists of a visual encoder, a textual encoder and cross-modal encoder with multiple lightweight bridge layers.
The goal of this approach was to build a bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder.
In principle, one can apply any visual, textual or cross-modal encoder in the proposed architecture.
The [BridgeTowerProcessor] wraps [RobertaTokenizer] and [BridgeTowerImageProcessor] into a single instance to both
encode the text and prepare the images respectively.
The following example shows how to run contrastive learning using [BridgeTowerProcessor] and [BridgeTowerForContrastiveLearning].
thon
from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
model = BridgeTowerForContrastiveLearning.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
forward pass
scores = dict()
for text in texts:
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
scores[text] = outputs
The following example shows how to run image-text retrieval using [BridgeTowerProcessor] and [BridgeTowerForImageAndTextRetrieval].
thon
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
forward pass
scores = dict()
for text in texts:
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
scores[text] = outputs.logits[0, 1].item()
The following example shows how to run masked language modeling using [BridgeTowerProcessor] and [BridgeTowerForMaskedLM].
thon
from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000360943.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
text = "a looking out of the window"
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
prepare inputs
encoding = processor(image, text, return_tensors="pt")
forward pass
outputs = model(**encoding)
results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist())
print(results)
.a cat looking out of the window.
Tips:
This implementation of BridgeTower uses [RobertaTokenizer] to generate text embeddings and OpenAI's CLIP/ViT model to compute visual embeddings.
Checkpoints for the pre-trained BridgeTower base model and for BridgeTower fine-tuned on masked language modeling and image-text matching are released.
Please refer to Table 5 of the paper for BridgeTower's performance on image retrieval and other downstream tasks.
The PyTorch version of this model is only available in torch 1.10 and higher.
BridgeTowerConfig
[[autodoc]] BridgeTowerConfig
BridgeTowerTextConfig
[[autodoc]] BridgeTowerTextConfig
BridgeTowerVisionConfig
[[autodoc]] BridgeTowerVisionConfig
BridgeTowerImageProcessor
[[autodoc]] BridgeTowerImageProcessor
- preprocess
BridgeTowerProcessor
[[autodoc]] BridgeTowerProcessor
- call
BridgeTowerModel
[[autodoc]] BridgeTowerModel
- forward
BridgeTowerForContrastiveLearning
[[autodoc]] BridgeTowerForContrastiveLearning
- forward
BridgeTowerForMaskedLM
[[autodoc]] BridgeTowerForMaskedLM
- forward
BridgeTowerForImageAndTextRetrieval
[[autodoc]] BridgeTowerForImageAndTextRetrieval
- forward |
BART
Overview
The Bart model was proposed in BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation,
Translation, and Comprehension by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan
Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019.
According to the abstract,
Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a
left-to-right decoder (like GPT).
The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme,
where spans of text are replaced with a single mask token.
BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It
matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new
state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains
of up to 6 ROUGE.
This model was contributed by sshleifer. The authors' code can be found here.
Usage tips:
BART is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
Sequence-to-sequence model with an encoder and a decoder. The encoder is fed a corrupted version of the tokens, the decoder is fed the original tokens (but has a mask to hide the future words like a regular transformers decoder). A composition of the following transformations is applied on the pretraining tasks for the encoder:
mask random tokens (like in BERT)
delete random tokens
mask a span of k tokens with a single mask token (a span of 0 tokens is an insertion of a mask token)
permute sentences
rotate the document to make it start at a specific token
Implementation Notes
Bart doesn't use token_type_ids for sequence classification. Use [BartTokenizer] or
[~BartTokenizer.encode] to get the proper splitting.
The forward pass of [BartModel] will create the decoder_input_ids if they are not passed.
This is different than some other modeling APIs. A typical use case of this feature is mask filling.
Model predictions are intended to be identical to the original implementation when
forced_bos_token_id=0. This only works, however, if the string you pass to
[fairseq.encode] starts with a space.
[~generation.GenerationMixin.generate] should be used for conditional generation tasks like
summarization; see the example in its docstring.
Models that load the facebook/bart-large-cnn weights will not have a mask_token_id, or be able to perform
mask-filling tasks.
Mask Filling
The facebook/bart-base and facebook/bart-large checkpoints can be used to fill multi-token masks.
thon
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example_english_phrase = "UN Chief Says There Is No in Syria"
batch = tok(example_english_phrase, return_tensors="pt")
generated_ids = model.generate(batch["input_ids"])
assert tok.batch_decode(generated_ids, skip_special_tokens=True) == [
"UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria"
]
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BART. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A blog post on Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker.
A notebook on how to finetune BART for summarization with fastai using blurr. 🌎
A notebook on how to finetune BART for summarization in two languages with Trainer class. 🌎
[BartForConditionalGeneration] is supported by this example script and notebook.
[TFBartForConditionalGeneration] is supported by this example script and notebook.
[FlaxBartForConditionalGeneration] is supported by this example script.
An example of how to train [BartForConditionalGeneration] with a Hugging Face datasets object can be found in this forum discussion
Summarization chapter of the 🤗 Hugging Face Course.
Summarization task guide
[BartForConditionalGeneration] is supported by this example script and notebook.
[TFBartForConditionalGeneration] is supported by this example script and notebook.
[FlaxBartForConditionalGeneration] is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling task guide
A notebook on how to finetune mBART using Seq2SeqTrainer for Hindi to English translation. 🌎
[BartForConditionalGeneration] is supported by this example script and notebook.
[TFBartForConditionalGeneration] is supported by this example script and notebook.
Translation task guide
See also:
- Text classification task guide
- Question answering task guide
- Causal language modeling task guide
- Distilled checkpoints are described in this paper.
BartConfig
[[autodoc]] BartConfig
- all
BartTokenizer
[[autodoc]] BartTokenizer
- all
BartTokenizerFast
[[autodoc]] BartTokenizerFast
- all
BartModel
[[autodoc]] BartModel
- forward
BartForConditionalGeneration
[[autodoc]] BartForConditionalGeneration
- forward
BartForSequenceClassification
[[autodoc]] BartForSequenceClassification
- forward
BartForQuestionAnswering
[[autodoc]] BartForQuestionAnswering
- forward
BartForCausalLM
[[autodoc]] BartForCausalLM
- forward
TFBartModel
[[autodoc]] TFBartModel
- call
TFBartForConditionalGeneration
[[autodoc]] TFBartForConditionalGeneration
- call
TFBartForSequenceClassification
[[autodoc]] TFBartForSequenceClassification
- call
FlaxBartModel
[[autodoc]] FlaxBartModel
- call
- encode
- decode
FlaxBartForConditionalGeneration
[[autodoc]] FlaxBartForConditionalGeneration
- call
- encode
- decode
FlaxBartForSequenceClassification
[[autodoc]] FlaxBartForSequenceClassification
- call
- encode
- decode
FlaxBartForQuestionAnswering
[[autodoc]] FlaxBartForQuestionAnswering
- call
- encode
- decode
FlaxBartForCausalLM
[[autodoc]] FlaxBartForCausalLM
- call
|
TAPEX
This model is in maintenance mode only, we don't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0.
Overview
The TAPEX model was proposed in TAPEX: Table Pre-training via Learning a Neural SQL Executor by Qian Liu,
Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. TAPEX pre-trains a BART model to solve synthetic SQL queries, after
which it can be fine-tuned to answer natural language questions related to tabular data, as well as performing table fact checking.
TAPEX has been fine-tuned on several datasets:
- SQA (Sequential Question Answering by Microsoft)
- WTQ (Wiki Table Questions by Stanford University)
- WikiSQL (by Salesforce)
- TabFact (by USCB NLP Lab).
The abstract from the paper is the following:
Recent progress in language model pre-training has achieved a great success via leveraging large-scale unstructured textual data. However, it is
still a challenge to apply pre-training on structured tabular data due to the absence of large-scale high-quality tabular data. In this paper, we
propose TAPEX to show that table pre-training can be achieved by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically
synthesizing executable SQL queries and their execution outputs. TAPEX addresses the data scarcity challenge via guiding the language model to mimic a SQL
executor on the diverse, large-scale and high-quality synthetic corpus. We evaluate TAPEX on four benchmark datasets. Experimental results demonstrate that
TAPEX outperforms previous table pre-training approaches by a large margin and achieves new state-of-the-art results on all of them. This includes improvements
on the weakly-supervised WikiSQL denotation accuracy to 89.5% (+2.3%), the WikiTableQuestions denotation accuracy to 57.5% (+4.8%), the SQA denotation accuracy
to 74.5% (+3.5%), and the TabFact accuracy to 84.2% (+3.2%). To our knowledge, this is the first work to exploit table pre-training via synthetic executable programs
and to achieve new state-of-the-art results on various downstream tasks.
Usage tips
TAPEX is a generative (seq2seq) model. One can directly plug in the weights of TAPEX into a BART model.
TAPEX has checkpoints on the hub that are either pre-trained only, or fine-tuned on WTQ, SQA, WikiSQL and TabFact.
Sentences + tables are presented to the model as sentence + " " + linearized table. The linearized table has the following format:
col: col1 | col2 | col 3 row 1 : val1 | val2 | val3 row 2 : .
TAPEX has its own tokenizer, which allows one to prepare all data for the model easily. One can pass Pandas DataFrames and strings to the tokenizer,
and it will automatically create the input_ids and attention_mask (as shown in the usage examples below).
Usage: inference
Below, we illustrate how to use TAPEX for table question answering. As one can see, one can directly plug in the weights of TAPEX into a BART model.
We use the Auto API, which will automatically instantiate the appropriate tokenizer ([TapexTokenizer]) and model ([BartForConditionalGeneration]) for us,
based on the configuration file of the checkpoint on the hub.
thon
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-large-finetuned-wtq")
model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/tapex-large-finetuned-wtq")
prepare table + question
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
question = "how many movies does Leonardo Di Caprio have?"
encoding = tokenizer(table, question, return_tensors="pt")
let the model generate an answer autoregressively
outputs = model.generate(**encoding)
decode back to text
predicted_answer = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(predicted_answer)
53
Note that [TapexTokenizer] also supports batched inference. Hence, one can provide a batch of different tables/questions, or a batch of a single table
and multiple questions, or a batch of a single query and multiple tables. Let's illustrate this:
thon
prepare table + question
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
questions = [
"how many movies does Leonardo Di Caprio have?",
"which actor has 69 movies?",
"what's the first name of the actor who has 87 movies?",
]
encoding = tokenizer(table, questions, padding=True, return_tensors="pt")
let the model generate an answer autoregressively
outputs = model.generate(**encoding)
decode back to text
tokenizer.batch_decode(outputs, skip_special_tokens=True)
[' 53', ' george clooney', ' brad pitt']
In case one wants to do table verification (i.e. the task of determining whether a given sentence is supported or refuted by the contents
of a table), one can instantiate a [BartForSequenceClassification] model. TAPEX has checkpoints on the hub fine-tuned on TabFact, an important
benchmark for table fact checking (it achieves 84% accuracy). The code example below again leverages the Auto API.
thon
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
prepare table + sentence
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
sentence = "George Clooney has 30 movies"
encoding = tokenizer(table, sentence, return_tensors="pt")
forward pass
outputs = model(**encoding)
print prediction
predicted_class_idx = outputs.logits[0].argmax(dim=0).item()
print(model.config.id2label[predicted_class_idx])
Refused
TAPEX architecture is the same as BART, except for tokenization. Refer to BART documentation for information on
configuration classes and their parameters. TAPEX-specific tokenizer is documented below.
TapexTokenizer
[[autodoc]] TapexTokenizer
- call
- save_vocabulary |
EfficientFormer
Overview
The EfficientFormer model was proposed in EfficientFormer: Vision Transformers at MobileNet Speed
by Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. EfficientFormer proposes a
dimension-consistent pure transformer that can be run on mobile devices for dense prediction tasks like image classification, object
detection and semantic segmentation.
The abstract from the paper is the following:
Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks.
However, due to the massive number of parameters and model design, e.g., attention mechanism, ViT-based models are generally
times slower than lightweight convolutional networks. Therefore, the deployment of ViT for real-time applications is particularly
challenging, especially on resource-constrained hardware such as mobile devices. Recent efforts try to reduce the computation
complexity of ViT through network architecture search or hybrid design with MobileNet block, yet the inference speed is still
unsatisfactory. This leads to an important question: can transformers run as fast as MobileNet while obtaining high performance?
To answer this, we first revisit the network architecture and operators used in ViT-based models and identify inefficient designs.
Then we introduce a dimension-consistent pure transformer (without MobileNet blocks) as a design paradigm.
Finally, we perform latency-driven slimming to get a series of final models dubbed EfficientFormer.
Extensive experiments show the superiority of EfficientFormer in performance and speed on mobile devices.
Our fastest model, EfficientFormer-L1, achieves 79.2% top-1 accuracy on ImageNet-1K with only 1.6 ms inference latency on
iPhone 12 (compiled with CoreML), which runs as fast as MobileNetV2×1.4 (1.6 ms, 74.7% top-1), and our largest model,
EfficientFormer-L7, obtains 83.3% accuracy with only 7.0 ms latency. Our work proves that properly designed transformers can
reach extremely low latency on mobile devices while maintaining high performance.
This model was contributed by novice03 and Bearnardd.
The original code can be found here. The TensorFlow version of this model was added by D-Roberts.
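A minimal image-classification sketch; the snap-research/efficientformer-l1-300 checkpoint and the example image URL are illustrative assumptions.
thon
import requests
from PIL import Image
from transformers import AutoImageProcessor, EfficientFormerForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("snap-research/efficientformer-l1-300")
model = EfficientFormerForImageClassification.from_pretrained("snap-research/efficientformer-l1-300")

inputs = image_processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])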
Documentation resources
Image classification task guide
EfficientFormerConfig
[[autodoc]] EfficientFormerConfig
EfficientFormerImageProcessor
[[autodoc]] EfficientFormerImageProcessor
- preprocess
EfficientFormerModel
[[autodoc]] EfficientFormerModel
- forward
EfficientFormerForImageClassification
[[autodoc]] EfficientFormerForImageClassification
- forward
EfficientFormerForImageClassificationWithTeacher
[[autodoc]] EfficientFormerForImageClassificationWithTeacher
- forward
TFEfficientFormerModel
[[autodoc]] TFEfficientFormerModel
- call
TFEfficientFormerForImageClassification
[[autodoc]] TFEfficientFormerForImageClassification
- call
TFEfficientFormerForImageClassificationWithTeacher
[[autodoc]] TFEfficientFormerForImageClassificationWithTeacher
- call
|
MADLAD-400
Overview
MADLAD-400 models were released in the paper MADLAD-400: A Multilingual And Document-Level Large Audited Dataset.
The abstract from the paper is the following:
We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss
the limitations revealed by self-auditing MADLAD-400, and the role data auditing
had in the dataset creation process. We then train and release a 10.7B-parameter
multilingual machine translation model on 250 billion tokens covering over 450
languages using publicly available data, and find that it is competitive with models
that are significantly larger, and report the results on different domains. In addition, we train an 8B-parameter language model, and assess the results on few-shot
translation. We make the baseline models available to the research community.
This model was added by Juarez Bochi. The original checkpoints can be found here.
This is a machine translation model that supports many low-resource languages, and that is competitive with models that are significantly larger.
One can directly use MADLAD-400 weights without finetuning the model:
python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("google/madlad400-3b-mt")
tokenizer = AutoTokenizer.from_pretrained("google/madlad400-3b-mt")
inputs = tokenizer("<2pt> I love pizza!", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Eu amo pizza!']
Google has released the following variants:
google/madlad400-3b-mt
google/madlad400-7b-mt
google/madlad400-7b-mt-bt
google/madlad400-10b-mt
Refer to T5's documentation page for all API references, code examples, and notebooks. For more details regarding training and evaluation of the MADLAD-400, refer to the model card.
|
Mamba
Overview
The Mamba model was proposed in Mamba: Linear-Time Sequence Modeling with Selective State Spaces by Albert Gu and Tri Dao.
This model is a new paradigm architecture based on state-space-models. You can read more about the intuition behind these here.
The abstract from the paper is the following:
Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5ร higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.
Tips:
Mamba is a new state space model architecture that rivals the classic Transformers. It is based on the line of progress on structured state space models, with an efficient hardware-aware design and implementation in the spirit of FlashAttention.
Mamba stacks mixer layers, which are the equivalent of Attention layers. The core logic of Mamba is held in the MambaMixer class.
Two implementations cohabit: one is optimized and uses fast cuda kernels, while the other one is naive but can run on any device!
The current implementation leverages the original cuda kernels: the equivalent of flash attention for Mamba are hosted in the mamba-ssm and the causal_conv1d repositories. Make sure to install them if your hardware supports them (see the install command below)!
Contributions to make the naive path faster are welcome ๐ค
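If your hardware and CUDA setup support them, the kernels mentioned above can be installed from PyPI; the package names below are the ones published by the upstream repositories (assumed to be current):
pip install mamba-ssm causal-conv1d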
This model was contributed by ArthurZ.
The original code can be found here.
Usage
A simple generation example:
python
from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("ArthurZ/mamba-130m")
tokenizer.pad_token = tokenizer.eos_token
model = MambaForCausalLM.from_pretrained("ArthurZ/mamba-130m", vocab_size=50280, num_hidden_layers=24, torch_dtype=torch.float32)
model.config.use_cache = True
input_ids = tokenizer("Hey how are you doing?", return_tensors= "pt")["input_ids"]
out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out))
PEFT finetuning
The slow version is not very stable for training, and the fast one needs float32!
python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments
model_id = "ArthurZ/mamba-2.8b"
tokenizer = AutoTokenizer.from_pretrained(model_id, pad_token ="<s>")
model = AutoModelForCausalLM.from_pretrained(model_id)
dataset = load_dataset("Abirate/english_quotes", split="train")
training_args = TrainingArguments(
output_dir="./results",
num_train_epochs=3,
per_device_train_batch_size=4,
logging_dir='./logs',
logging_steps=10,
learning_rate=2e-3
)
lora_config = LoraConfig(
r=8,
target_modules="all-linear",
task_type="CAUSAL_LM",
bias="none"
)
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
args=training_args,
peft_config=lora_config,
train_dataset=dataset,
dataset_text_field="quote",
)
trainer.train()
MambaConfig
[[autodoc]] MambaConfig
MambaModel
[[autodoc]] MambaModel
- forward
MambaForCausalLM
[[autodoc]] MambaForCausalLM
- forward |
Convolutional Vision Transformer (CvT)
Overview
The CvT model was proposed in CvT: Introducing Convolutions to Vision Transformers by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan and Lei Zhang. The Convolutional vision Transformer (CvT) improves the Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs.
The abstract from the paper is the following:
We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT)
in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through
two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer
block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs)
to the ViT architecture (i.e. shift, scale, and distortion invariance) while maintaining the merits of Transformers (i.e. dynamic attention,
global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves
state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition,
performance gains are maintained when pretrained on larger datasets (e.g. ImageNet-22k) and fine-tuned to downstream tasks. Pre-trained on
ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7% on the ImageNet-1k val set. Finally, our results show that the positional encoding,
a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks.
This model was contributed by anugunj. The original code can be found here.
Usage tips
CvT models are regular Vision Transformers, but trained with convolutions. They outperform the original model (ViT) when fine-tuned on ImageNet-1K and CIFAR-100.
You can check out demo notebooks regarding inference as well as fine-tuning on custom data here (you can just replace [ViTFeatureExtractor] by [AutoImageProcessor] and [ViTForImageClassification] by [CvtForImageClassification]). A minimal inference sketch is shown after these tips.
The available checkpoints are either (1) pre-trained on ImageNet-22k (a collection of 14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million
images and 1,000 classes).
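As suggested in the tips above, inference works just like with ViT, swapping in [AutoImageProcessor] and [CvtForImageClassification]. A minimal sketch (the microsoft/cvt-13 checkpoint is assumed to be the ImageNet-1k fine-tuned CvT-13 model):
python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, CvtForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
model = CvtForImageClassification.from_pretrained("microsoft/cvt-13")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# predicted ImageNet-1k class
print(model.config.id2label[logits.argmax(-1).item()])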
Resources
A list of official Hugging Face and community (indicated by ๐) resources to help you get started with CvT.
[CvtForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
CvtConfig
[[autodoc]] CvtConfig
CvtModel
[[autodoc]] CvtModel
- forward
CvtForImageClassification
[[autodoc]] CvtForImageClassification
- forward
TFCvtModel
[[autodoc]] TFCvtModel
- call
TFCvtForImageClassification
[[autodoc]] TFCvtForImageClassification
- call
|
DINOv2
Overview
The DINOv2 model was proposed in DINOv2: Learning Robust Visual Features without Supervision by
Maxime Oquab, Timothรฉe Darcet, Thรฉo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervรฉ Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski.
DINOv2 is an upgrade of DINO, a self-supervised method applied on Vision Transformers. This method enables all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning.
The abstract from the paper is the following:
The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021) on most of the benchmarks at image and pixel levels.
This model was contributed by nielsr.
The original code can be found here.
Usage tips
The model can be traced using torch.jit.trace, which leverages JIT compilation to optimize the model and make it faster to run. Note that this still produces some mismatched elements; the difference between the original model and the traced model is of the order of 1e-4.
python
import torch
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base')
model = AutoModel.from_pretrained('facebook/dinov2-base')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs[0]
# We have to force return_dict=False for tracing
model.config.return_dict = False
with torch.no_grad():
traced_model = torch.jit.trace(model, [inputs.pixel_values])
traced_outputs = traced_model(inputs.pixel_values)
print((last_hidden_states - traced_outputs[0]).abs().max())
Resources
A list of official Hugging Face and community (indicated by ๐) resources to help you get started with DINOv2.
Demo notebooks for DINOv2 can be found here. ๐
[Dinov2ForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Dinov2Config
[[autodoc]] Dinov2Config
Dinov2Model
[[autodoc]] Dinov2Model
- forward
Dinov2ForImageClassification
[[autodoc]] Dinov2ForImageClassification
- forward |
UnivNet
Overview
The UnivNet model was proposed in UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation by Won Jang, Dan Lim, Jaesam Yoon, Bongwan Kim, and Juntae Kim.
The UnivNet model is a generative adversarial network (GAN) trained to synthesize high fidelity speech waveforms. The UnivNet model shared in transformers is the generator, which maps a conditioning log-mel spectrogram and optional noise sequence to a speech waveform (e.g. a vocoder). Only the generator is required for inference. The discriminator used to train the generator is not implemented.
The abstract from the paper is the following:
Most neural vocoders employ band-limited mel-spectrograms to generate waveforms. If full-band spectral features are used as the input, the vocoder can be provided with as much acoustic information as possible. However, in some models employing full-band mel-spectrograms, an over-smoothing problem occurs as part of which non-sharp spectrograms are generated. To address this problem, we propose UnivNet, a neural vocoder that synthesizes high-fidelity waveforms in real time. Inspired by works in the field of voice activity detection, we added a multi-resolution spectrogram discriminator that employs multiple linear spectrogram magnitudes computed using various parameter sets. Using full-band mel-spectrograms as input, we expect to generate high-resolution signals by adding a discriminator that employs spectrograms of multiple resolutions as the input. In an evaluation on a dataset containing information on hundreds of speakers, UnivNet obtained the best objective and subjective results among competing models for both seen and unseen speakers. These results, including the best subjective score for text-to-speech, demonstrate the potential for fast adaptation to new speakers without a need for training from scratch.
Tips:
The noise_sequence argument for [UnivNetModel.forward] should be standard Gaussian noise (such as from torch.randn) of shape ([batch_size], noise_length, model.config.model_in_channels), where noise_length should match the length dimension (dimension 1) of the input_features argument. If not supplied, it will be randomly generated; a torch.Generator can be supplied to the generator argument so that the forward pass can be reproduced. (Note that [UnivNetFeatureExtractor] will return generated noise by default, so it shouldn't be necessary to generate noise_sequence manually.)
Padding added by [UnivNetFeatureExtractor] can be removed from the [UnivNetModel] output through the [UnivNetFeatureExtractor.batch_decode] method, as shown in the usage example below.
Padding the end of each waveform with silence can reduce artifacts at the end of the generated audio sample. This can be done by supplying pad_end = True to [UnivNetFeatureExtractor.__call__]. See this issue for more details.
Usage Example:
python
import torch
from scipy.io.wavfile import write
from datasets import Audio, load_dataset
from transformers import UnivNetFeatureExtractor, UnivNetModel

model_id_or_path = "dg845/univnet-dev"
model = UnivNetModel.from_pretrained(model_id_or_path)
feature_extractor = UnivNetFeatureExtractor.from_pretrained(model_id_or_path)

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
# Resample the audio to the model and feature extractor's sampling rate.
ds = ds.cast_column("audio", Audio(sampling_rate=feature_extractor.sampling_rate))
# Pad the end of the converted waveforms to reduce artifacts at the end of the output audio samples.
inputs = feature_extractor(
    ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], pad_end=True, return_tensors="pt"
)

with torch.no_grad():
    audio = model(**inputs)

# Remove the extra padding at the end of the output.
audio = feature_extractor.batch_decode(**audio)[0]
# Convert to a wav file.
write("sample_audio.wav", feature_extractor.sampling_rate, audio)
This model was contributed by dg845.
To the best of my knowledge, there is no official code release, but an unofficial implementation can be found at maum-ai/univnet with pretrained checkpoints here.
UnivNetConfig
[[autodoc]] UnivNetConfig
UnivNetFeatureExtractor
[[autodoc]] UnivNetFeatureExtractor
- call
UnivNetModel
[[autodoc]] UnivNetModel
- forward |
Jukebox
Overview
The Jukebox model was proposed in Jukebox: A generative model for music
by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford,
Ilya Sutskever. It introduces a generative music model which can produce minute-long samples that can be conditioned on
an artist, genres and lyrics.
The abstract from the paper is the following:
We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples, along with model weights and code.
As shown on the following figure, Jukebox is made of 3 priors which are decoder-only models. They follow the architecture described in Generating Long Sequences with Sparse Transformers, modified to support longer context length.
First, an autoencoder is used to encode the text lyrics. Next, the first (also called top_prior) prior attends to the last hidden states extracted from the lyrics encoder. The priors are linked to the previous priors respectively via an AudioConditioner module. The AudioConditioner upsamples the outputs of the previous prior to raw tokens at a certain audio frames-per-second resolution.
The metadata such as artist, genre and timing are passed to each prior, in the form of a start token and positional embedding for the timing data. The hidden states are mapped to the closest codebook vector from the VQVAE in order to convert them to raw audio.
This model was contributed by Arthur Zucker.
The original code can be found here.
Usage tips
This model only supports inference. This is for a few reasons, mostly because training requires a very large amount of memory. Feel free to open a PR and add what's missing to have a full integration with the Hugging Face trainer!
This model is very slow, and takes 8h to generate a minute-long audio sample using the 5b top prior on a V100 GPU. In order to automatically handle the device on which the model should execute, use accelerate.
Contrary to the paper, the order of the priors goes from 0 to 1 as it felt more intuitive: we sample starting from 0.
Primed sampling (conditioning the sampling on raw audio) requires more memory than ancestral sampling and should be used with fp16 set to True.
JukeboxConfig
[[autodoc]] JukeboxConfig
JukeboxPriorConfig
[[autodoc]] JukeboxPriorConfig
JukeboxVQVAEConfig
[[autodoc]] JukeboxVQVAEConfig
JukeboxTokenizer
[[autodoc]] JukeboxTokenizer
- save_vocabulary
JukeboxModel
[[autodoc]] JukeboxModel
- ancestral_sample
- primed_sample
- continue_sample
- upsample
- _sample
JukeboxPrior
[[autodoc]] JukeboxPrior
- sample
- forward
JukeboxVQVAE
[[autodoc]] JukeboxVQVAE
- forward
- encode
- decode |
MusicGen
Overview
The MusicGen model was proposed in the paper Simple and Controllable Music Generation
by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre Dรฉfossez.
MusicGen is a single stage auto-regressive Transformer model capable of generating high-quality music samples conditioned
on text descriptions or audio prompts. The text descriptions are passed through a frozen text encoder model to obtain a
sequence of hidden-state representations. MusicGen is then trained to predict discrete audio tokens, or audio codes,
conditioned on these hidden-states. These audio tokens are then decoded using an audio compression model, such as EnCodec,
to recover the audio waveform.
Through an efficient token interleaving pattern, MusicGen does not require a self-supervised semantic representation of
the text/audio prompts, thus eliminating the need to cascade multiple models to predict a set of codebooks (e.g.
hierarchically or upsampling). Instead, it is able to generate all the codebooks in a single forward pass.
The abstract from the paper is the following:
We tackle the task of conditional music generation. We introduce MusicGen, a single Language Model (LM) that operates
over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen is comprised
of a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for
cascading several models, e.g., hierarchically or upsampling. Following this approach, we demonstrate how MusicGen
can generate high-quality samples, while being conditioned on textual description or melodic features, allowing better
controls over the generated output. We conduct extensive empirical evaluation, considering both automatic and human
studies, showing the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark.
Through ablation studies, we shed light over the importance of each of the components comprising MusicGen.
This model was contributed by sanchit-gandhi. The original code can be found
here. The pre-trained checkpoints can be found on the
Hugging Face Hub.
Usage tips
After downloading the original checkpoints from here, you can convert them using the conversion script available at
src/transformers/models/musicgen/convert_musicgen_transformers.py with the following command:
python src/transformers/models/musicgen/convert_musicgen_transformers.py \
--checkpoint small --pytorch_dump_folder /output/path --safe_serialization
Generation
MusicGen is compatible with two generation modes: greedy and sampling. In practice, sampling leads to significantly
better results than greedy, thus we encourage sampling mode to be used where possible. Sampling is enabled by default,
and can be explicitly specified by setting do_sample=True in the call to [MusicgenForConditionalGeneration.generate],
or by overriding the model's generation config (see below).
Generation is limited by the sinusoidal positional embeddings to 30 second inputs. Meaning, MusicGen cannot generate more
than 30 seconds of audio (1503 tokens), and input audio passed via Audio-Prompted Generation also counts towards this limit. So, given an input of 20 seconds of audio, MusicGen cannot generate more than 10 seconds of additional audio.
Transformers supports both mono (1-channel) and stereo (2-channel) variants of MusicGen. The mono channel versions
generate a single set of codebooks. The stereo versions generate 2 sets of codebooks, 1 for each channel (left/right),
and each set of codebooks is decoded independently through the audio compression model. The audio streams for each
channel are combined to give the final stereo output.
Unconditional Generation
The inputs for unconditional (or 'null') generation can be obtained through the method
[MusicgenForConditionalGeneration.get_unconditional_inputs]:
python
from transformers import MusicgenForConditionalGeneration
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
unconditional_inputs = model.get_unconditional_inputs(num_samples=1)
audio_values = model.generate(**unconditional_inputs, do_sample=True, max_new_tokens=256)
The audio outputs are a three-dimensional Torch tensor of shape (batch_size, num_channels, sequence_length). To listen
to the generated audio samples, you can either play them in an ipynb notebook:
python
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
Or save them as a .wav file using a third-party library, e.g. scipy:
python
import scipy
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
Text-Conditional Generation
The model can generate an audio sample conditioned on a text prompt through use of the [MusicgenProcessor] to pre-process
the inputs:
python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
The guidance_scale is used in classifier free guidance (CFG), setting the weighting between the conditional logits
(which are predicted from the text prompts) and the unconditional logits (which are predicted from an unconditional or
'null' prompt). Higher guidance scale encourages the model to generate samples that are more closely linked to the input
prompt, usually at the expense of poorer audio quality. CFG is enabled by setting guidance_scale > 1. For best results,
use guidance_scale=3 (default).
Audio-Prompted Generation
The same [MusicgenProcessor] can be used to pre-process an audio prompt that is used for audio continuation. In the
following example, we load an audio file using the ๐ค Datasets library, which can be pip installed through the command
below:
pip install --upgrade pip
pip install datasets[audio]
python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
from datasets import load_dataset
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True)
sample = next(iter(dataset))["audio"]
# take the first half of the audio sample
sample["array"] = sample["array"][: len(sample["array"]) // 2]
inputs = processor(
audio=sample["array"],
sampling_rate=sample["sampling_rate"],
text=["80s blues track with groovy saxophone"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
For batched audio-prompted generation, the generated audio_values can be post-processed to remove padding by using the
[MusicgenProcessor] class:
python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
from datasets import load_dataset
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True)
sample = next(iter(dataset))["audio"]
# take the first quarter of the audio sample
sample_1 = sample["array"][: len(sample["array"]) // 4]
# take the first half of the audio sample
sample_2 = sample["array"][: len(sample["array"]) // 2]
inputs = processor(
audio=[sample_1, sample_2],
sampling_rate=sample["sampling_rate"],
text=["80s blues track with groovy saxophone", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
# post-process to remove padding from the batched audio
audio_values = processor.batch_decode(audio_values, padding_mask=inputs.padding_mask)
Generation Configuration
The default parameters that control the generation process, such as sampling, guidance scale and number of generated
tokens, can be found in the model's generation config, and updated as desired:
python
from transformers import MusicgenForConditionalGeneration

model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# inspect the default generation config
model.generation_config

# increase the guidance scale to 4.0
model.generation_config.guidance_scale = 4.0

# decrease the max length to 256 tokens
model.generation_config.max_length = 256
Note that any arguments passed to the generate method will supersede those in the generation config, so setting
do_sample=False in the call to generate will supersede the setting of model.generation_config.do_sample in the
generation config.
Model Structure
The MusicGen model can be de-composed into three distinct stages:
1. Text encoder: maps the text inputs to a sequence of hidden-state representations. The pre-trained MusicGen models use a frozen text encoder from either T5 or Flan-T5
2. MusicGen decoder: a language model (LM) that auto-regressively generates audio tokens (or codes) conditional on the encoder hidden-state representations
3. Audio encoder/decoder: used to encode an audio prompt to use as prompt tokens, and recover the audio waveform from the audio tokens predicted by the decoder
Thus, the MusicGen model can either be used as a standalone decoder model, corresponding to the class [MusicgenForCausalLM],
or as a composite model that includes the text encoder and audio encoder/decoder, corresponding to the class
[MusicgenForConditionalGeneration]. If only the decoder needs to be loaded from the pre-trained checkpoint, it can be loaded by first
specifying the correct config, or be accessed through the .decoder attribute of the composite model:
python
from transformers import AutoConfig, MusicgenForCausalLM, MusicgenForConditionalGeneration

# Option 1: get the decoder config and pass it to .from_pretrained
decoder_config = AutoConfig.from_pretrained("facebook/musicgen-small").decoder
decoder = MusicgenForCausalLM.from_pretrained("facebook/musicgen-small", **decoder_config)

# Option 2: load the entire composite model, but only return the decoder
decoder = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small").decoder
Since the text encoder and audio encoder/decoder models are frozen during training, the MusicGen decoder [MusicgenForCausalLM]
can be trained standalone on a dataset of encoder hidden-states and audio codes. For inference, the trained decoder can
be combined with the frozen text encoder and audio encoder/decoders to recover the composite [MusicgenForConditionalGeneration]
model.
Tips:
* MusicGen is trained on the 32kHz checkpoint of Encodec. You should ensure you use a compatible version of the Encodec model.
* Sampling mode tends to deliver better results than greedy - you can toggle sampling with the variable do_sample in the call to [MusicgenForConditionalGeneration.generate]
MusicgenDecoderConfig
[[autodoc]] MusicgenDecoderConfig
MusicgenConfig
[[autodoc]] MusicgenConfig
MusicgenProcessor
[[autodoc]] MusicgenProcessor
MusicgenModel
[[autodoc]] MusicgenModel
- forward
MusicgenForCausalLM
[[autodoc]] MusicgenForCausalLM
- forward
MusicgenForConditionalGeneration
[[autodoc]] MusicgenForConditionalGeneration
- forward |
Swin Transformer
Overview
The Swin Transformer was proposed in Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
The abstract from the paper is the following:
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone
for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains,
such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text.
To address these differences, we propose a hierarchical Transformer whose representation is computed with Shifted
windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping
local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at
various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it
compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense
prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation
(53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and
+2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones.
The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures.
Swin Transformer architecture. Taken from the original paper.
This model was contributed by novice03. The Tensorflow version of this model was contributed by amyeroberts. The original code can be found here.
Usage tips
Swin pads the inputs, so it supports any input height and width (as long as they are divisible by 32).
Swin can be used as a backbone. When output_hidden_states = True, it will output both hidden_states and reshaped_hidden_states. The reshaped_hidden_states have a shape of (batch, num_channels, height, width) rather than (batch_size, sequence_length, num_channels).
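A minimal sketch of the backbone usage described above (the microsoft/swin-tiny-patch4-window7-224 checkpoint is assumed to be available on the Hub):
python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, SwinModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model = SwinModel.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states: (batch_size, sequence_length, num_channels)
print(outputs.hidden_states[-1].shape)
# reshaped_hidden_states: (batch_size, num_channels, height, width), convenient for dense prediction heads
print(outputs.reshaped_hidden_states[-1].shape)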
Resources
A list of official Hugging Face and community (indicated by ๐) resources to help you get started with Swin Transformer.
[SwinForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
Besides that:
[SwinForMaskedImageModeling] is supported by this example script.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
SwinConfig
[[autodoc]] SwinConfig
SwinModel
[[autodoc]] SwinModel
- forward
SwinForMaskedImageModeling
[[autodoc]] SwinForMaskedImageModeling
- forward
SwinForImageClassification
[[autodoc]] transformers.SwinForImageClassification
- forward
TFSwinModel
[[autodoc]] TFSwinModel
- call
TFSwinForMaskedImageModeling
[[autodoc]] TFSwinForMaskedImageModeling
- call
TFSwinForImageClassification
[[autodoc]] transformers.TFSwinForImageClassification
- call
|
Perceiver
Overview
The Perceiver IO model was proposed in Perceiver IO: A General Architecture for Structured Inputs &
Outputs by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch,
Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hรฉnaff, Matthew M.
Botvinick, Andrew Zisserman, Oriol Vinyals, Joรฃo Carreira.
Perceiver IO is a generalization of Perceiver to handle arbitrary outputs in
addition to arbitrary inputs. The original Perceiver only produced a single classification label. In addition to
classification labels, Perceiver IO can produce (for example) language, optical flow, and multimodal videos with audio.
This is done using the same building blocks as the original Perceiver. The computational complexity of Perceiver IO is
linear in the input and output size and the bulk of the processing occurs in the latent space, allowing us to process
inputs and outputs that are much larger than can be handled by standard Transformers. This means, for example,
Perceiver IO can do BERT-style masked language modeling directly using bytes instead of tokenized inputs.
The abstract from the paper is the following:
The recently-proposed Perceiver model obtains good results on several domains (images, audio, multimodal, point
clouds) while scaling linearly in compute and memory with the input size. While the Perceiver supports many kinds of
inputs, it can only produce very simple outputs such as class scores. Perceiver IO overcomes this limitation without
sacrificing the original's appealing properties by learning to flexibly query the model's latent space to produce
outputs of arbitrary size and semantics. Perceiver IO still decouples model depth from data size and still scales
linearly with data size, but now with respect to both input and output sizes. The full Perceiver IO model achieves
strong results on tasks with highly structured output spaces, such as natural language and visual understanding,
StarCraft II, and multi-task and multi-modal domains. As highlights, Perceiver IO matches a Transformer-based BERT
baseline on the GLUE language benchmark without the need for input tokenization and achieves state-of-the-art
performance on Sintel optical flow estimation.
Here's a TLDR explaining how Perceiver works:
The main problem with the self-attention mechanism of the Transformer is that the time and memory requirements scale
quadratically with the sequence length. Hence, models like BERT and RoBERTa are limited to a max sequence length of 512
tokens. Perceiver aims to solve this issue by, instead of performing self-attention on the inputs, performing it on a set
of latent variables, and only using the inputs for cross-attention. In this way, the time and memory requirements don't
depend on the length of the inputs anymore, as one uses a fixed amount of latent variables, like 256 or 512. These are
randomly initialized, after which they are trained end-to-end using backpropagation.
Internally, [PerceiverModel] will create the latents, which is a tensor of shape (batch_size, num_latents,
d_latents). One must provide inputs (which could be text, images, audio, you name it!) to the model, which it will
use to perform cross-attention with the latents. The output of the Perceiver encoder is a tensor of the same shape. One
can then, similar to BERT, convert the last hidden states of the latents to classification logits by averaging along
the sequence dimension, and placing a linear layer on top of that to project the d_latents to num_labels.
This was the idea of the original Perceiver paper. However, it could only output classification logits. In a follow-up
work, PerceiverIO, they generalized it to let the model also produce outputs of arbitrary size. How, you might ask? The
idea is actually relatively simple: one defines outputs of an arbitrary size, and then applies cross-attention with the
last hidden states of the latents, using the outputs as queries, and the latents as keys and values.
So let's say one wants to perform masked language modeling (BERT-style) with the Perceiver. As the Perceiver's input
length will not have an impact on the computation time of the self-attention layers, one can provide raw bytes,
providing inputs of length 2048 to the model. If one now masks out certain of these 2048 tokens, one can define the
outputs as being of shape: (batch_size, 2048, 768). Next, one performs cross-attention with the final hidden states
of the latents to update the outputs tensor. After cross-attention, one still has a tensor of shape (batch_size,
2048, 768). One can then place a regular language modeling head on top, to project the last dimension to the
vocabulary size of the model, i.e. creating logits of shape (batch_size, 2048, 262) (as Perceiver uses a vocabulary
size of 262 byte IDs).
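The following is a minimal masked language modeling sketch along those lines, assuming the deepmind/language-perceiver checkpoint is available on the Hub:
python
import torch
from transformers import PerceiverTokenizer, PerceiverForMaskedLM

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForMaskedLM.from_pretrained("deepmind/language-perceiver")

text = "This is an incomplete sentence where some words are missing."
# the tokenizer operates directly on UTF-8 bytes, so no subword vocabulary is needed
inputs = tokenizer(text, padding="max_length", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# one prediction per input byte position, over the 262-entry byte vocabulary
print(logits.shape)  # e.g. torch.Size([1, 2048, 262])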
Perceiver IO architecture. Taken from the original paper
This model was contributed by nielsr. The original code can be found
here.
Perceiver does not work with torch.nn.DataParallel due to a bug in PyTorch, see issue #36035
Resources
The quickest way to get started with the Perceiver is by checking the tutorial
notebooks.
Refer to the blog post if you want to fully understand how the model works and
is implemented in the library. Note that the models available in the library only showcase some examples of what you can do
with the Perceiver. There are many more use cases, including question answering, named-entity recognition, object detection,
audio classification, video classification, etc.
Text classification task guide
Masked language modeling task guide
Image classification task guide
Perceiver specific outputs
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverModelOutput
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverDecoderOutput
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverMaskedLMOutput
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassifierOutput
PerceiverConfig
[[autodoc]] PerceiverConfig
PerceiverTokenizer
[[autodoc]] PerceiverTokenizer
- call
PerceiverFeatureExtractor
[[autodoc]] PerceiverFeatureExtractor
- call
PerceiverImageProcessor
[[autodoc]] PerceiverImageProcessor
- preprocess
PerceiverTextPreprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverTextPreprocessor
PerceiverImagePreprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverImagePreprocessor
PerceiverOneHotPreprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverOneHotPreprocessor
PerceiverAudioPreprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverAudioPreprocessor
PerceiverMultimodalPreprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor
PerceiverProjectionDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverProjectionDecoder
PerceiverBasicDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverBasicDecoder
PerceiverClassificationDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassificationDecoder
PerceiverOpticalFlowDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverOpticalFlowDecoder
PerceiverBasicVideoAutoencodingDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverBasicVideoAutoencodingDecoder
PerceiverMultimodalDecoder
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder
PerceiverProjectionPostprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverProjectionPostprocessor
PerceiverAudioPostprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverAudioPostprocessor
PerceiverClassificationPostprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassificationPostprocessor
PerceiverMultimodalPostprocessor
[[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalPostprocessor
PerceiverModel
[[autodoc]] PerceiverModel
- forward
PerceiverForMaskedLM
[[autodoc]] PerceiverForMaskedLM
- forward
PerceiverForSequenceClassification
[[autodoc]] PerceiverForSequenceClassification
- forward
PerceiverForImageClassificationLearned
[[autodoc]] PerceiverForImageClassificationLearned
- forward
PerceiverForImageClassificationFourier
[[autodoc]] PerceiverForImageClassificationFourier
- forward
PerceiverForImageClassificationConvProcessing
[[autodoc]] PerceiverForImageClassificationConvProcessing
- forward
PerceiverForOpticalFlow
[[autodoc]] PerceiverForOpticalFlow
- forward
PerceiverForMultimodalAutoencoding
[[autodoc]] PerceiverForMultimodalAutoencoding
- forward |
X-MOD
Overview
The X-MOD model was proposed in Lifting the Curse of Multilinguality by Pre-training Modular Transformers by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe.
X-MOD extends multilingual masked language models like XLM-R to include language-specific modular components (language adapters) during pre-training. For fine-tuning, the language adapters in each transformer layer are frozen.
The abstract from the paper is the following:
Multilingual pre-trained models are known to suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages. We address this issue by introducing language-specific modules, which allows us to grow the total capacity of the model, while keeping the total number of trainable parameters per language constant. In contrast with prior work that learns language-specific components post-hoc, we pre-train the modules of our Cross-lingual Modular (X-MOD) models from the start. Our experiments on natural language inference, named entity recognition and question answering show that our approach not only mitigates the negative interference between languages, but also enables positive transfer, resulting in improved monolingual and cross-lingual performance. Furthermore, our approach enables adding languages post-hoc with no measurable drop in performance, no longer limiting the model usage to the set of pre-trained languages.
This model was contributed by jvamvas.
The original code can be found here and the original documentation is found here.
Usage tips
- X-MOD is similar to XLM-R, but a difference is that the input language needs to be specified so that the correct language adapter can be activated.
- The main models โ base and large โ have adapters for 81 languages.
Adapter Usage
Input language
There are two ways to specify the input language:
1. By setting a default language before using the model:
python
from transformers import XmodModel
model = XmodModel.from_pretrained("facebook/xmod-base")
model.set_default_language("en_XX")
2. By explicitly passing the index of the language adapter for each sample:
python
import torch
input_ids = torch.tensor(
[
[0, 581, 10269, 83, 99942, 136, 60742, 23, 70, 80583, 18276, 2],
[0, 1310, 49083, 443, 269, 71, 5486, 165, 60429, 660, 23, 2],
]
)
lang_ids = torch.LongTensor(
[
0, # en_XX
8, # de_DE
]
)
output = model(input_ids, lang_ids=lang_ids)
Fine-tuning
The paper recommends that the embedding layer and the language adapters are frozen during fine-tuning. A method for doing this is provided:
python
model.freeze_embeddings_and_language_adapters()
# Fine-tune the model ...
Cross-lingual transfer
After fine-tuning, zero-shot cross-lingual transfer can be tested by activating the language adapter of the target language:
python
model.set_default_language("de_DE")
# Evaluate the model on German examples ...
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
XmodConfig
[[autodoc]] XmodConfig
XmodModel
[[autodoc]] XmodModel
- forward
XmodForCausalLM
[[autodoc]] XmodForCausalLM
- forward
XmodForMaskedLM
[[autodoc]] XmodForMaskedLM
- forward
XmodForSequenceClassification
[[autodoc]] XmodForSequenceClassification
- forward
XmodForMultipleChoice
[[autodoc]] XmodForMultipleChoice
- forward
XmodForTokenClassification
[[autodoc]] XmodForTokenClassification
- forward
XmodForQuestionAnswering
[[autodoc]] XmodForQuestionAnswering
- forward |
DistilBERT
Overview
The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, a
distilled version of BERT, and the paper DistilBERT, a
distilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a
small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than
google-bert/bert-base-uncased and runs 60% faster while preserving over 95% of BERT's performance as measured on the GLUE language
understanding benchmark.
The abstract from the paper is the following:
As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP),
operating these large models in on-the-edge and/or under constrained computational training or inference budgets
remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation
model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger
counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage
knowledge distillation during the pretraining phase and show that it is possible to reduce the size of a BERT model by
40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive
biases learned by larger models during pretraining, we introduce a triple loss combining language modeling,
distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we
demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device
study.
This model was contributed by victorsanh. The JAX version of this model was
contributed by kamalkraj. The original code can be found here.
Usage tips
DistilBERT doesn't have token_type_ids, so you don't need to indicate which token belongs to which segment. Just
separate your segments with the separation token tokenizer.sep_token (or [SEP]); see the sketch after these tips.
DistilBERT doesn't have options to select the input positions (position_ids input). This could be added if
necessary though, just let us know if you need this option.
Same as BERT but smaller. Trained by distillation of the pretrained BERT model, meaning it's been trained to predict the same probabilities as the larger model. The actual objective is a combination of:
finding the same probabilities as the teacher model
predicting the masked tokens correctly (but no next-sentence objective)
a cosine similarity between the hidden states of the student and the teacher model
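As a quick illustration of the first tip above, sentence pairs are simply joined with the separator token and no token_type_ids are returned; a minimal sketch using the distilbert/distilbert-base-uncased checkpoint:
python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")

# segments are joined with [SEP]; note the absence of token_type_ids in the output
encoded = tokenizer("How old are you?", "I'm 6 years old.")
print(encoded.keys())  # dict_keys(['input_ids', 'attention_mask'])
print(tokenizer.decode(encoded["input_ids"]))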
Resources
A list of official Hugging Face and community (indicated by ๐) resources to help you get started with DistilBERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A blog post on Getting Started with Sentiment Analysis using Python with DistilBERT.
A blog post on how to train DistilBERT with Blurr for sequence classification.
A blog post on how to use Ray to tune DistilBERT hyperparameters.
A blog post on how to train DistilBERT with Hugging Face and Amazon SageMaker.
A notebook on how to finetune DistilBERT for multi-label classification. ๐
A notebook on how to finetune DistilBERT for multiclass classification with PyTorch. ๐
A notebook on how to finetune DistilBERT for text classification in TensorFlow. ๐
[DistilBertForSequenceClassification] is supported by this example script and notebook.
[TFDistilBertForSequenceClassification] is supported by this example script and notebook.
[FlaxDistilBertForSequenceClassification] is supported by this example script and notebook.
Text classification task guide
[DistilBertForTokenClassification] is supported by this example script and notebook.
[TFDistilBertForTokenClassification] is supported by this example script and notebook.
[FlaxDistilBertForTokenClassification] is supported by this example script.
Token classification chapter of the ๐ค Hugging Face Course.
Token classification task guide
[DistilBertForMaskedLM] is supported by this example script and notebook.
[TFDistilBertForMaskedLM] is supported by this example script and notebook.
[FlaxDistilBertForMaskedLM] is supported by this example script and notebook.
Masked language modeling chapter of the ๐ค Hugging Face Course.
Masked language modeling task guide
[DistilBertForQuestionAnswering] is supported by this example script and notebook.
[TFDistilBertForQuestionAnswering] is supported by this example script and notebook.
[FlaxDistilBertForQuestionAnswering] is supported by this example script.
Question answering chapter of the ๐ค Hugging Face Course.
Question answering task guide
Multiple choice
- [DistilBertForMultipleChoice] is supported by this example script and notebook.
- [TFDistilBertForMultipleChoice] is supported by this example script and notebook.
- Multiple choice task guide
โ๏ธ Optimization
A blog post on how to quantize DistilBERT with ๐ค Optimum and Intel.
A blog post on how Optimizing Transformers for GPUs with ๐ค Optimum.
A blog post on Optimizing Transformers with Hugging Face Optimum.
โก๏ธ Inference
A blog post on how to Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia with DistilBERT.
A blog post on Serverless Inference with Hugging Face's Transformers, DistilBERT and Amazon SageMaker.
๐ Deploy
A blog post on how to deploy DistilBERT on Google Cloud.
A blog post on how to deploy DistilBERT with Amazon SageMaker.
A blog post on how to Deploy BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module.
Combining DistilBERT and Flash Attention 2
First, make sure to install the latest version of Flash Attention 2.
pip install -U flash-attn --no-build-isolation
Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g. torch.float16).
To load and run a model using Flash Attention 2, refer to the snippet below:
python
import torch
from transformers import AutoTokenizer, AutoModel
device = "cuda" # the device to load the model onto
tokenizer = AutoTokenizer.from_pretrained('distilbert/distilbert-base-uncased')
model = AutoModel.from_pretrained("distilbert/distilbert-base-uncased", torch_dtype=torch.float16, attn_implementation="flash_attention_2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt').to(device)
model.to(device)
output = model(**encoded_input)
DistilBertConfig
[[autodoc]] DistilBertConfig
DistilBertTokenizer
[[autodoc]] DistilBertTokenizer
DistilBertTokenizerFast
[[autodoc]] DistilBertTokenizerFast
DistilBertModel
[[autodoc]] DistilBertModel
- forward
DistilBertForMaskedLM
[[autodoc]] DistilBertForMaskedLM
- forward
DistilBertForSequenceClassification
[[autodoc]] DistilBertForSequenceClassification
- forward
DistilBertForMultipleChoice
[[autodoc]] DistilBertForMultipleChoice
- forward
DistilBertForTokenClassification
[[autodoc]] DistilBertForTokenClassification
- forward
DistilBertForQuestionAnswering
[[autodoc]] DistilBertForQuestionAnswering
- forward
TFDistilBertModel
[[autodoc]] TFDistilBertModel
- call
TFDistilBertForMaskedLM
[[autodoc]] TFDistilBertForMaskedLM
- call
TFDistilBertForSequenceClassification
[[autodoc]] TFDistilBertForSequenceClassification
- call
TFDistilBertForMultipleChoice
[[autodoc]] TFDistilBertForMultipleChoice
- call
TFDistilBertForTokenClassification
[[autodoc]] TFDistilBertForTokenClassification
- call
TFDistilBertForQuestionAnswering
[[autodoc]] TFDistilBertForQuestionAnswering
- call
FlaxDistilBertModel
[[autodoc]] FlaxDistilBertModel
- call
FlaxDistilBertForMaskedLM
[[autodoc]] FlaxDistilBertForMaskedLM
- call
FlaxDistilBertForSequenceClassification
[[autodoc]] FlaxDistilBertForSequenceClassification
- call
FlaxDistilBertForMultipleChoice
[[autodoc]] FlaxDistilBertForMultipleChoice
- call
FlaxDistilBertForTokenClassification
[[autodoc]] FlaxDistilBertForTokenClassification
- call
FlaxDistilBertForQuestionAnswering
[[autodoc]] FlaxDistilBertForQuestionAnswering
- call
|
OpenAI GPT
Overview
OpenAI GPT model was proposed in Improving Language Understanding by Generative Pre-Training
by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. It's a causal (unidirectional) transformer
pre-trained using language modeling on a large corpus with long range dependencies, the Toronto Book Corpus.
The abstract from the paper is the following:
Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering,
semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant,
labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to
perform adequately. We demonstrate that large gains on these tasks can be realized by generative pretraining of a
language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In
contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve
effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our
approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms
discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon
the state of the art in 9 out of the 12 tasks studied.
Write With Transformer is a webapp created and hosted by Hugging Face
showcasing the generative capabilities of several models. GPT is one of them.
This model was contributed by thomwolf. The original code can be found here.
Usage tips
GPT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
GPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next
token in a sequence. Leveraging this feature allows GPT to generate syntactically coherent text, as can be
observed in the run_generation.py example script.
Note:
If you want to reproduce the original tokenization process of the OpenAI GPT paper, you will need to install ftfy
and SpaCy:
pip install spacy ftfy==4.4.3
python -m spacy download en
If you don't install ftfy and SpaCy, the [OpenAIGPTTokenizer] will default to tokenize
using BERT's BasicTokenizer followed by Byte-Pair Encoding (which should be fine for most usage, don't worry).
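As a quick illustration of the causal generation workflow described above, here is a minimal sketch that loads a GPT checkpoint and samples a continuation. The checkpoint identifier openai-community/openai-gpt is an assumption about the Hub name; any GPT checkpoint compatible with [OpenAIGPTLMHeadModel] should work the same way.
python
from transformers import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer

# assumed Hub identifier for the original GPT checkpoint
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-community/openai-gpt")
model = OpenAIGPTLMHeadModel.from_pretrained("openai-community/openai-gpt")

# a single prompt needs no padding; for batched inputs, pad on the right as noted in the tips above
inputs = tokenizer("The book is about", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))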
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OpenAI GPT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A blog post on outperforming OpenAI GPT-3 with SetFit for text-classification.
See also: Text classification task guide
A blog on how to Finetune a non-English GPT-2 Model with Hugging Face.
A blog on How to generate text: using different decoding methods for language generation with Transformers with GPT-2.
A blog on Training CodeParrot 🦜 from Scratch, a large GPT-2 model.
A blog on Faster Text Generation with TensorFlow and XLA with GPT-2.
A blog on How to train a Language Model with Megatron-LM with a GPT-2 model.
A notebook on how to finetune GPT2 to generate lyrics in the style of your favorite artist. 🌎
A notebook on how to finetune GPT2 to generate tweets in the style of your favorite Twitter user. 🌎
Causal language modeling chapter of the 🤗 Hugging Face Course.
[OpenAIGPTLMHeadModel] is supported by this causal language modeling example script, text generation example script and notebook.
[TFOpenAIGPTLMHeadModel] is supported by this causal language modeling example script and notebook.
See also: Causal language modeling task guide
A course material on Byte-Pair Encoding tokenization.
OpenAIGPTConfig
[[autodoc]] OpenAIGPTConfig
OpenAIGPTTokenizer
[[autodoc]] OpenAIGPTTokenizer
- save_vocabulary
OpenAIGPTTokenizerFast
[[autodoc]] OpenAIGPTTokenizerFast
OpenAI specific outputs
[[autodoc]] models.openai.modeling_openai.OpenAIGPTDoubleHeadsModelOutput
[[autodoc]] models.openai.modeling_tf_openai.TFOpenAIGPTDoubleHeadsModelOutput
OpenAIGPTModel
[[autodoc]] OpenAIGPTModel
- forward
OpenAIGPTLMHeadModel
[[autodoc]] OpenAIGPTLMHeadModel
- forward
OpenAIGPTDoubleHeadsModel
[[autodoc]] OpenAIGPTDoubleHeadsModel
- forward
OpenAIGPTForSequenceClassification
[[autodoc]] OpenAIGPTForSequenceClassification
- forward
TFOpenAIGPTModel
[[autodoc]] TFOpenAIGPTModel
- call
TFOpenAIGPTLMHeadModel
[[autodoc]] TFOpenAIGPTLMHeadModel
- call
TFOpenAIGPTDoubleHeadsModel
[[autodoc]] TFOpenAIGPTDoubleHeadsModel
- call
TFOpenAIGPTForSequenceClassification
[[autodoc]] TFOpenAIGPTForSequenceClassification
- call
LeViT
Overview
The LeViT model was proposed in LeViT: Introducing Convolutions to Vision Transformers by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervรฉ Jรฉgou, Matthijs Douze. LeViT improves the Vision Transformer (ViT) in performance and efficiency by a few architectural differences such as activation maps with decreasing resolutions in Transformers and the introduction of an attention bias to integrate positional information.
The abstract from the paper is the following:
*We design a family of image classification architectures that optimize the trade-off between accuracy
and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures,
which are competitive on highly parallel processing hardware. We revisit principles from the extensive
literature on convolutional neural networks to apply them to transformers, in particular activation maps
with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information
in vision transformers. As a result, we propose LeViT: a hybrid neural network for fast inference image classification.
We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of
application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable
to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect
to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU. *
LeViT Architecture. Taken from the original paper.
This model was contributed by anugunj. The original code can be found here.
Usage tips
Compared to ViT, LeViT models use an additional distillation head to effectively learn from a teacher (which, in the LeViT paper, is a ResNet-like model). The distillation head is learned through backpropagation under supervision of the ResNet-like model. They also draw inspiration from convolutional neural networks, using activation maps with decreasing resolutions to increase efficiency.
There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top
of the final hidden state and not using the distillation head, or (2) by placing both a prediction head and distillation
head on top of the final hidden state. In that case, the prediction head is trained using regular cross-entropy between
the prediction of the head and the ground-truth label, while the distillation prediction head is trained using hard distillation
(cross-entropy between the prediction of the distillation head and the label predicted by the teacher). At inference time,
one takes the average prediction between both heads as final prediction. (2) is also called "fine-tuning with distillation",
because one relies on a teacher that has already been fine-tuned on the downstream dataset. In terms of models, (1) corresponds
to [LevitForImageClassification] and (2) corresponds to [LevitForImageClassificationWithTeacher].
All released checkpoints were pre-trained and fine-tuned on ImageNet-1k
(also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes) only. No external data was used. This is in
contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for
pre-training.
The authors of LeViT released 5 trained LeViT models, which you can directly plug into [LevitModel] or [LevitForImageClassification].
Techniques like data augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset
(while only using ImageNet-1k for pre-training). The 5 variants available are (all trained on images of size 224x224):
facebook/levit-128S, facebook/levit-128, facebook/levit-192, facebook/levit-256 and
facebook/levit-384. Note that one should use [LevitImageProcessor] in order to
prepare images for the model.
[LevitForImageClassificationWithTeacher] currently supports only inference and not training or fine-tuning.
You can check out demo notebooks regarding inference as well as fine-tuning on custom data here
(you can just replace [ViTFeatureExtractor] by [LevitImageProcessor] and [ViTForImageClassification] by [LevitForImageClassification] or [LevitForImageClassificationWithTeacher]).
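A minimal inference sketch along the lines of the tips above, assuming the facebook/levit-128S checkpoint listed earlier and a COCO sample image; swap in [LevitForImageClassificationWithTeacher] if you want the averaged student/teacher prediction at inference time.
python
import requests
import torch
from PIL import Image
from transformers import LevitForImageClassification, LevitImageProcessor

processor = LevitImageProcessor.from_pretrained("facebook/levit-128S")
model = LevitForImageClassification.from_pretrained("facebook/levit-128S")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])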
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LeViT.
[LevitForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
LevitConfig
[[autodoc]] LevitConfig
LevitFeatureExtractor
[[autodoc]] LevitFeatureExtractor
- call
LevitImageProcessor
[[autodoc]] LevitImageProcessor
- preprocess
LevitModel
[[autodoc]] LevitModel
- forward
LevitForImageClassification
[[autodoc]] LevitForImageClassification
- forward
LevitForImageClassificationWithTeacher
[[autodoc]] LevitForImageClassificationWithTeacher
- forward
MobileNet V2
Overview
The MobileNet model was proposed in MobileNetV2: Inverted Residuals and Linear Bottlenecks by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen.
The abstract from the paper is the following:
In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3.
The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters.
This model was contributed by matthijs. The original code and weights can be found here for the main model and here for DeepLabV3+.
Usage tips
The checkpoints are named mobilenet_v2_depth_size, for example mobilenet_v2_1.0_224, where 1.0 is the depth multiplier (sometimes also referred to as "alpha" or the width multiplier) and 224 is the resolution of the input images the model was trained on.
Even though the checkpoint is trained on images of specific size, the model will work on images of any size. The smallest supported image size is 32x32.
One can use [MobileNetV2ImageProcessor] to prepare images for the model.
The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). However, the model predicts 1001 classes: the 1000 classes from ImageNet plus an extra "background" class (index 0).
The segmentation model uses a DeepLabV3+ head. The available semantic segmentation checkpoints are pre-trained on PASCAL VOC.
The original TensorFlow checkpoints use different padding rules than PyTorch, requiring the model to determine the padding amount at inference time, since this depends on the input image size. To use native PyTorch padding behavior, create a [MobileNetV2Config] with tf_padding = False.
Unsupported features:
The [MobileNetV2Model] outputs a globally pooled version of the last hidden state. In the original model it is possible to use an average pooling layer with a fixed 7x7 window and stride 1 instead of global pooling. For inputs that are larger than the recommended image size, this gives a pooled output that is larger than 1x1. The Hugging Face implementation does not support this.
The original TensorFlow checkpoints include quantized models. We do not support these models as they include additional "FakeQuantization" operations to unquantize the weights.
It's common to extract the output from the expansion layers at indices 10 and 13, as well as the output from the final 1x1 convolution layer, for downstream purposes. Using output_hidden_states=True returns the output from all intermediate layers. There is currently no way to limit this to specific layers.
The DeepLabV3+ segmentation head does not use the final convolution layer from the backbone, but this layer gets computed anyway. There is currently no way to tell [MobileNetV2Model] up to which layer it should run.
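A short classification sketch following the tips above; the google/mobilenet_v2_1.0_224 checkpoint name is an assumption (pick any MobileNetV2 classification checkpoint), and the image URL is just a placeholder COCO sample. Remember that the classifier outputs 1001 logits, with index 0 reserved for the extra "background" class.
python
import requests
import torch
from PIL import Image
from transformers import MobileNetV2ForImageClassification, MobileNetV2ImageProcessor

processor = MobileNetV2ImageProcessor.from_pretrained("google/mobilenet_v2_1.0_224")
model = MobileNetV2ForImageClassification.from_pretrained("google/mobilenet_v2_1.0_224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])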
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileNetV2.
[MobileNetV2ForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
Semantic segmentation
- Semantic segmentation task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
MobileNetV2Config
[[autodoc]] MobileNetV2Config
MobileNetV2FeatureExtractor
[[autodoc]] MobileNetV2FeatureExtractor
- preprocess
- post_process_semantic_segmentation
MobileNetV2ImageProcessor
[[autodoc]] MobileNetV2ImageProcessor
- preprocess
- post_process_semantic_segmentation
MobileNetV2Model
[[autodoc]] MobileNetV2Model
- forward
MobileNetV2ForImageClassification
[[autodoc]] MobileNetV2ForImageClassification
- forward
MobileNetV2ForSemanticSegmentation
[[autodoc]] MobileNetV2ForSemanticSegmentation
- forward
GPT-J
Overview
The GPT-J model was released in the kingoflolz/mesh-transformer-jax repository by Ben Wang and Aran Komatsuzaki. It is a GPT-2-like
causal language model trained on the Pile dataset.
This model was contributed by Stella Biderman.
Usage tips
To load GPT-J in float32 one would need at least 2x the model size in
RAM: 1x for the initial weights and another 1x to load the checkpoint. So for GPT-J it would take at least 48GB of
RAM just to load the model. To reduce the RAM usage there are a few options. The torch_dtype argument can be
used to initialize the model in half-precision on a CUDA device only. There is also a fp16 branch which stores the fp16 weights,
which could be used to further minimize the RAM usage:
python
from transformers import GPTJForCausalLM
import torch
device = "cuda"
model = GPTJForCausalLM.from_pretrained(
"EleutherAI/gpt-j-6B",
revision="float16",
torch_dtype=torch.float16,
).to(device)
The model should fit on a 16GB GPU for inference. For training/fine-tuning it would take much more GPU RAM. The Adam
optimizer, for example, makes four copies of the model: model, gradients, and the average and squared average of the gradients.
So it would need at least 4x the model size in GPU memory, even with mixed precision, as gradient updates are in fp32. This
does not include the activations and data batches, which would again require some more GPU RAM. So one should explore
solutions such as DeepSpeed to train/fine-tune the model. Another option is to use the original codebase to
train/fine-tune the model on TPU and then convert the model to Transformers format for inference. Instructions for
that can be found here
Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer. These extra
tokens are added for the sake of efficiency on TPUs. To avoid the mismatch between embedding matrix size and vocab
size, the tokenizer for GPT-J contains 143 extra tokens
<|extratoken_1|> through <|extratoken_143|>, so the vocab_size of the tokenizer also becomes 50400.
Usage examples
The [~generation.GenerationMixin.generate] method can be used to generate text using GPT-J
model.
python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
prompt = (
"In a shocking finding, scientists discovered a herd of unicorns living in a remote, "
"previously unexplored valley, in the Andes Mountains. Even more surprising to the "
"researchers was the fact that the unicorns spoke perfect English."
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
input_ids,
do_sample=True,
temperature=0.9,
max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
or in float16 precision:
python
from transformers import GPTJForCausalLM, AutoTokenizer
import torch
device = "cuda"
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16).to(device)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
prompt = (
"In a shocking finding, scientists discovered a herd of unicorns living in a remote, "
"previously unexplored valley, in the Andes Mountains. Even more surprising to the "
"researchers was the fact that the unicorns spoke perfect English."
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
gen_tokens = model.generate(
input_ids,
do_sample=True,
temperature=0.9,
max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT-J. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Description of GPT-J.
A blog on how to Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker.
A blog on how to Accelerate GPT-J inference with DeepSpeed-Inference on GPUs.
A blog post introducing GPT-J-6B: 6B JAX-Based Transformer. 🌎
A notebook for GPT-J-6B Inference Demo. 🌎
Another notebook demonstrating Inference with GPT-J-6B.
Causal language modeling chapter of the 🤗 Hugging Face Course.
[GPTJForCausalLM] is supported by this causal language modeling example script, text generation example script, and notebook.
[TFGPTJForCausalLM] is supported by this causal language modeling example script and notebook.
[FlaxGPTJForCausalLM] is supported by this causal language modeling example script and notebook.
Documentation resources
- Text classification task guide
- Question answering task guide
- Causal language modeling task guide
GPTJConfig
[[autodoc]] GPTJConfig
- all
GPTJModel
[[autodoc]] GPTJModel
- forward
GPTJForCausalLM
[[autodoc]] GPTJForCausalLM
- forward
GPTJForSequenceClassification
[[autodoc]] GPTJForSequenceClassification
- forward
GPTJForQuestionAnswering
[[autodoc]] GPTJForQuestionAnswering
- forward
TFGPTJModel
[[autodoc]] TFGPTJModel
- call
TFGPTJForCausalLM
[[autodoc]] TFGPTJForCausalLM
- call
TFGPTJForSequenceClassification
[[autodoc]] TFGPTJForSequenceClassification
- call
TFGPTJForQuestionAnswering
[[autodoc]] TFGPTJForQuestionAnswering
- call
FlaxGPTJModel
[[autodoc]] FlaxGPTJModel
- call
FlaxGPTJForCausalLM
[[autodoc]] FlaxGPTJForCausalLM
- call
MobileViT
Overview
The MobileViT model was proposed in MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer by Sachin Mehta and Mohammad Rastegari. MobileViT introduces a new layer that replaces local processing in convolutions with global processing using transformers.
The abstract from the paper is the following:
Light-weight convolutional neural networks (CNNs) are the de-facto for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision trans-formers (ViTs) have been adopted. Unlike CNNs, ViTs are heavy-weight. In this paper, we ask the following question: is it possible to combine the strengths of CNNs and ViTs to build a light-weight and low latency network for mobile vision tasks? Towards this end, we introduce MobileViT, a light-weight and general-purpose vision transformer for mobile devices. MobileViT presents a different perspective for the global processing of information with transformers, i.e., transformers as convolutions. Our results show that MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeIT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of parameters.
This model was contributed by matthijs. The TensorFlow version of the model was contributed by sayakpaul. The original code and weights can be found here.
Usage tips
MobileViT is more like a CNN than a Transformer model. It does not work on sequence data but on batches of images. Unlike ViT, there are no embeddings. The backbone model outputs a feature map. You can follow this tutorial for a lightweight introduction.
One can use [MobileViTImageProcessor] to prepare images for the model. Note that if you do your own preprocessing, the pretrained checkpoints expect images to be in BGR pixel order (not RGB).
The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes).
The segmentation model uses a DeepLabV3 head. The available semantic segmentation checkpoints are pre-trained on PASCAL VOC.
As the name suggests MobileViT was designed to be performant and efficient on mobile phones. The TensorFlow versions of the MobileViT models are fully compatible with TensorFlow Lite.
You can use the following code to convert a MobileViT checkpoint (be it image classification or semantic segmentation) to generate a
TensorFlow Lite model:
from transformers import TFMobileViTForImageClassification
import tensorflow as tf
model_ckpt = "apple/mobilevit-xx-small"
model = TFMobileViTForImageClassification.from_pretrained(model_ckpt)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS,
tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()
tflite_filename = model_ckpt.split("/")[-1] + ".tflite"
with open(tflite_filename, "wb") as f:
    f.write(tflite_model)
The resulting model will be only about one MB in size, making it a good fit for mobile applications where resources and network
bandwidth can be constrained.
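For plain PyTorch inference, a minimal sketch using [MobileViTImageProcessor] and [MobileViTForImageClassification] is shown below; the apple/mobilevit-small checkpoint name is an assumption (any MobileViT classification checkpoint works), and the image processor takes care of resizing and the BGR channel order mentioned above.
python
import requests
import torch
from PIL import Image
from transformers import MobileViTForImageClassification, MobileViTImageProcessor

processor = MobileViTImageProcessor.from_pretrained("apple/mobilevit-small")
model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-small")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])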
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileViT.
[MobileViTForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
Semantic segmentation
- Semantic segmentation task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
MobileViTConfig
[[autodoc]] MobileViTConfig
MobileViTFeatureExtractor
[[autodoc]] MobileViTFeatureExtractor
- call
- post_process_semantic_segmentation
MobileViTImageProcessor
[[autodoc]] MobileViTImageProcessor
- preprocess
- post_process_semantic_segmentation
MobileViTModel
[[autodoc]] MobileViTModel
- forward
MobileViTForImageClassification
[[autodoc]] MobileViTForImageClassification
- forward
MobileViTForSemanticSegmentation
[[autodoc]] MobileViTForSemanticSegmentation
- forward
TFMobileViTModel
[[autodoc]] TFMobileViTModel
- call
TFMobileViTForImageClassification
[[autodoc]] TFMobileViTForImageClassification
- call
TFMobileViTForSemanticSegmentation
[[autodoc]] TFMobileViTForSemanticSegmentation
- call
XLM
Overview
The XLM model was proposed in Cross-lingual Language Model Pretraining by
Guillaume Lample, Alexis Conneau. It's a transformer pretrained using one of the following objectives:
a causal language modeling (CLM) objective (next token prediction),
a masked language modeling (MLM) objective (BERT-like), or
a Translation Language Modeling (TLM) objective (an extension of BERT's MLM to multiple language inputs)
The abstract from the paper is the following:
Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding.
In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We
propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual
data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain
state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our
approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we
obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised
machine translation, we obtain a new state of the art of 38.5 BLEU on WMT'16 Romanian-English, outperforming the
previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available.
This model was contributed by thomwolf. The original code can be found here.
Usage tips
XLM has many different checkpoints, which were trained using different objectives: CLM, MLM or TLM. Make sure to
select the correct objective for your task (e.g. MLM checkpoints are not suitable for generation).
XLM has multilingual checkpoints which leverage a specific lang parameter. Check out the multi-lingual page for more information.
A transformer model trained on several languages. There are three different types of training for this model and the library provides checkpoints for all of them:
Causal language modeling (CLM) which is the traditional autoregressive training (so this model could be in the previous section as well). One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens, that may span over several documents in one of those languages.
Masked language modeling (MLM) which is like RoBERTa. One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens, that may span over several documents in one of those languages, with dynamic masking of the tokens.
A combination of MLM and translation language modeling (TLM). This consists of concatenating a sentence in two different languages, with random masking. To predict one of the masked tokens, the model can use both, the surrounding context in language 1 and the context given by language 2.
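For the multilingual checkpoints mentioned above, the model additionally expects a langs tensor identifying the language of each token. The sketch below assumes the FacebookAI/xlm-clm-enfr-1024 checkpoint (an English/French CLM model) and follows the pattern from the multi-lingual page.
python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

# assumed checkpoint: an English/French CLM model
tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-clm-enfr-1024")
model = XLMWithLMHeadModel.from_pretrained("FacebookAI/xlm-clm-enfr-1024")

input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])
# one language id per token, looked up from the tokenizer's lang2id mapping
langs = torch.full_like(input_ids, tokenizer.lang2id["en"])
outputs = model(input_ids, langs=langs)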
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
XLMConfig
[[autodoc]] XLMConfig
XLMTokenizer
[[autodoc]] XLMTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
XLM specific outputs
[[autodoc]] models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput
XLMModel
[[autodoc]] XLMModel
- forward
XLMWithLMHeadModel
[[autodoc]] XLMWithLMHeadModel
- forward
XLMForSequenceClassification
[[autodoc]] XLMForSequenceClassification
- forward
XLMForMultipleChoice
[[autodoc]] XLMForMultipleChoice
- forward
XLMForTokenClassification
[[autodoc]] XLMForTokenClassification
- forward
XLMForQuestionAnsweringSimple
[[autodoc]] XLMForQuestionAnsweringSimple
- forward
XLMForQuestionAnswering
[[autodoc]] XLMForQuestionAnswering
- forward
TFXLMModel
[[autodoc]] TFXLMModel
- call
TFXLMWithLMHeadModel
[[autodoc]] TFXLMWithLMHeadModel
- call
TFXLMForSequenceClassification
[[autodoc]] TFXLMForSequenceClassification
- call
TFXLMForMultipleChoice
[[autodoc]] TFXLMForMultipleChoice
- call
TFXLMForTokenClassification
[[autodoc]] TFXLMForTokenClassification
- call
TFXLMForQuestionAnsweringSimple
[[autodoc]] TFXLMForQuestionAnsweringSimple
- call
LongT5
Overview
The LongT5 model was proposed in LongT5: Efficient Text-To-Text Transformer for Long Sequences
by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung and Yinfei Yang. It's an
encoder-decoder transformer pre-trained in a text-to-text denoising generative setting. LongT5 model is an extension of
T5 model, and it enables using one of the two different efficient attention mechanisms - (1) Local attention, or (2)
Transient-Global attention.
The abstract from the paper is the following:
Recent work has shown that either (1) increasing the input length or (2) increasing model size can improve the
performance of Transformer-based neural models. In this paper, we present a new model, called LongT5, with which we
explore the effects of scaling both the input length and model size at the same time. Specifically, we integrated
attention ideas from long-input transformers (ETC), and adopted pre-training strategies from summarization pre-training
(PEGASUS) into the scalable T5 architecture. The result is a new attention mechanism we call {\em Transient Global}
(TGlobal), which mimics ETC's local/global attention mechanism, but without requiring additional side-inputs. We are
able to achieve state-of-the-art results on several summarization tasks and outperform the original T5 models on
question answering tasks.
This model was contributed by stancld.
The original code can be found here.
Usage tips
[LongT5ForConditionalGeneration] is an extension of [T5ForConditionalGeneration] exchanging the traditional
encoder self-attention layer with efficient either local attention or transient-global (tglobal) attention.
Unlike the T5 model, LongT5 does not use a task prefix. Furthermore, it uses a different pre-training objective
inspired by the pre-training of [PegasusForConditionalGeneration].
LongT5 model is designed to work efficiently and very well on long-range sequence-to-sequence tasks where the
input sequence exceeds commonly used 512 tokens. It is capable of handling input sequences of a length up to 16,384 tokens.
For Local Attention, the sparse sliding-window local attention operation allows a given token to attend only r
tokens to the left and right of it (with r=127 by default). Local Attention does not introduce any new parameters
to the model. The complexity of the mechanism is linear in input sequence length l: O(l*r).
Transient Global Attention is an extension of the Local Attention. It, furthermore, allows each input token to
interact with all other tokens in the layer. This is achieved via splitting an input sequence into blocks of a fixed
length k (with a default k=16). Then, a global token for such a block is obtained via summing and normalizing the embeddings of every token
in the block. Thanks to this, the attention allows each token to attend to both nearby tokens like in Local attention, and
also every global token like in the case of standard global attention (transient represents the fact the global tokens
are constructed dynamically within each attention operation). As a consequence, TGlobal attention introduces
a few new parameters -- global relative position biases and a layer normalization for global token's embedding.
The complexity of this mechanism is O(l(r + l/k)).
An example showing how to evaluate a fine-tuned LongT5 model on the pubmed dataset is below.
python
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, LongT5ForConditionalGeneration
dataset = load_dataset("scientific_papers", "pubmed", split="validation")
model = (
LongT5ForConditionalGeneration.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps")
.to("cuda")
.half()
)
tokenizer = AutoTokenizer.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps")
def generate_answers(batch):
    inputs_dict = tokenizer(
        batch["article"], max_length=16384, padding="max_length", truncation=True, return_tensors="pt"
    )
    input_ids = inputs_dict.input_ids.to("cuda")
    attention_mask = inputs_dict.attention_mask.to("cuda")
    output_ids = model.generate(input_ids, attention_mask=attention_mask, max_length=512, num_beams=2)
    batch["predicted_abstract"] = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
    return batch

result = dataset.map(generate_answers, batched=True, batch_size=2)
rouge = evaluate.load("rouge")
rouge.compute(predictions=result["predicted_abstract"], references=result["abstract"])
Resources
Translation task guide
Summarization task guide
LongT5Config
[[autodoc]] LongT5Config
LongT5Model
[[autodoc]] LongT5Model
- forward
LongT5ForConditionalGeneration
[[autodoc]] LongT5ForConditionalGeneration
- forward
LongT5EncoderModel
[[autodoc]] LongT5EncoderModel
- forward
FlaxLongT5Model
[[autodoc]] FlaxLongT5Model
- call
- encode
- decode
FlaxLongT5ForConditionalGeneration
[[autodoc]] FlaxLongT5ForConditionalGeneration
- call
- encode
- decode
CPM
Overview
The CPM model was proposed in CPM: A Large-scale Generative Chinese Pre-trained Language Model by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin,
Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen,
Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
The abstract from the paper is the following:
Pre-trained Language Models (PLMs) have proven to be beneficial for various downstream NLP tasks. Recently, GPT-3,
with 175 billion parameters and 570GB training data, drew a lot of attention due to the capacity of few-shot (even
zero-shot) learning. However, applying GPT-3 to address Chinese NLP tasks is still challenging, as the training corpus
of GPT-3 is primarily English, and the parameters are not publicly available. In this technical report, we release the
Chinese Pre-trained Language Model (CPM) with generative pre-training on large-scale Chinese training data. To the best
of our knowledge, CPM, with 2.6 billion parameters and 100GB Chinese training data, is the largest Chinese pre-trained
language model, which could facilitate several downstream Chinese NLP tasks, such as conversation, essay generation,
cloze test, and language understanding. Extensive experiments demonstrate that CPM achieves strong performance on many
NLP tasks in the settings of few-shot (even zero-shot) learning.
This model was contributed by canwenxu. The original implementation can be found
here: https://github.com/TsinghuaAI/CPM-Generate
CPM's architecture is the same as GPT-2, except for tokenization method. Refer to GPT-2 documentation for
API reference information.
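Since CPM shares GPT-2's architecture, text generation goes through the usual causal LM API and only the tokenizer differs (it relies on jieba and sentencepiece). The sketch below assumes the TsinghuaAI/CPM-Generate checkpoint on the Hub; treat the checkpoint name and pairing as assumptions rather than an official recipe.
python
from transformers import AutoModelForCausalLM, CpmTokenizer

# assumed Hub checkpoint; CpmTokenizer needs the `jieba` and `sentencepiece` packages installed
tokenizer = CpmTokenizer.from_pretrained("TsinghuaAI/CPM-Generate")
model = AutoModelForCausalLM.from_pretrained("TsinghuaAI/CPM-Generate")

inputs = tokenizer("清华大学", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, max_length=30)
print(tokenizer.decode(outputs[0]))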
CpmTokenizer
[[autodoc]] CpmTokenizer
CpmTokenizerFast
[[autodoc]] CpmTokenizerFast
PatchTST
Overview
The PatchTST model was proposed in A Time Series is Worth 64 Words: Long-term Forecasting with Transformers by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong and Jayant Kalagnanam.
At a high level the model vectorizes time series into patches of a given size and encodes the resulting sequence of vectors via a Transformer that then outputs the prediction length forecast via an appropriate head. The model is illustrated in the following figure:
The abstract from the paper is the following:
We propose an efficient design of Transformer-based models for multivariate time series forecasting and self-supervised representation learning. It is based on two key components: (i) segmentation of time series into subseries-level patches which are served as input tokens to Transformer; (ii) channel-independence where each channel contains a single univariate time series that shares the same embedding and Transformer weights across all the series. Patching design naturally has three-fold benefit: local semantic information is retained in the embedding; computation and memory usage of the attention maps are quadratically reduced given the same look-back window; and the model can attend longer history. Our channel-independent patch time series Transformer (PatchTST) can improve the long-term forecasting accuracy significantly when compared with that of SOTA Transformer-based models. We also apply our model to self-supervised pre-training tasks and attain excellent fine-tuning performance, which outperforms supervised training on large datasets. Transferring of masked pre-trained representation on one dataset to others also produces SOTA forecasting accuracy.
This model was contributed by namctin, gsinthong, diepi, vijaye12, wmgifford, and kashif. The original code can be found here.
Usage tips
The model can also be used for time series classification and time series regression. See the respective [PatchTSTForClassification] and [PatchTSTForRegression] classes.
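To make the patching setup concrete, here is a minimal sketch that builds a randomly initialized forecasting model from a config and runs a dummy batch through it. The parameter names follow [PatchTSTConfig] as documented below and the values are purely illustrative, not recommendations.
python
import torch
from transformers import PatchTSTConfig, PatchTSTForPrediction

# randomly initialized model, just to illustrate the expected input format
config = PatchTSTConfig(
    num_input_channels=7,   # number of parallel time series (channels)
    context_length=512,     # look-back window
    patch_length=16,
    patch_stride=16,
    prediction_length=96,   # forecast horizon
)
model = PatchTSTForPrediction(config)

# past_values has shape (batch_size, context_length, num_input_channels)
past_values = torch.randn(2, config.context_length, config.num_input_channels)
with torch.no_grad():
    outputs = model(past_values=past_values)
# the forecast is returned in the model output; see [PatchTSTForPrediction] below for the exact fields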
Resources
A blog post explaining PatchTST in depth can be found here. The blog can also be opened in Google Colab.
PatchTSTConfig
[[autodoc]] PatchTSTConfig
PatchTSTModel
[[autodoc]] PatchTSTModel
- forward
PatchTSTForPrediction
[[autodoc]] PatchTSTForPrediction
- forward
PatchTSTForClassification
[[autodoc]] PatchTSTForClassification
- forward
PatchTSTForPretraining
[[autodoc]] PatchTSTForPretraining
- forward
PatchTSTForRegression
[[autodoc]] PatchTSTForRegression
- forward
Longformer
Overview
The Longformer model was presented in Longformer: The Long-Document Transformer by Iz Beltagy, Matthew E. Peters, Arman Cohan.
The abstract from the paper is the following:
Transformer-based models are unable to process long sequences due to their self-attention operation, which scales
quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention
mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or
longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local
windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we
evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In
contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our
pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on
WikiHop and TriviaQA.
This model was contributed by beltagy. The Authors' code can be found here.
Usage tips
Since the Longformer is based on RoBERTa, it doesn't have token_type_ids. You don't need to indicate which
token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or
</s>).
A transformer model replacing the attention matrices by sparse matrices to go faster. Often, the local context (e.g., what are the two tokens left and right?) is enough to take action for a given token. Some preselected input tokens are still given global attention, but the attention matrix has way less parameters, resulting in a speed-up. See the local attention section for more information.
Longformer Self Attention
Longformer self attention employs self attention on both a "local" context and a "global" context. Most tokens only
attend "locally" to each other meaning that each token attends to its \(\frac{1}{2} w\) previous tokens and
\(\frac{1}{2} w\) succeeding tokens with \(w\) being the window length as defined in
config.attention_window. Note that config.attention_window can be of type List to define a
different \(w\) for each layer. A selected few tokens attend "globally" to all other tokens, as it is
conventionally done for all tokens in BertSelfAttention.
Note that "locally" and "globally" attending tokens are projected by different query, key and value matrices. Also note
that every "locally" attending token not only attends to tokens within its window \(w\), but also to all "globally"
attending tokens so that global attention is symmetric.
The user can define which tokens attend "locally" and which tokens attend "globally" by setting the tensor
global_attention_mask at run-time appropriately. All Longformer models employ the following logic for
global_attention_mask:
0: the token attends "locally",
1: the token attends "globally".
For more information please also refer to [~LongformerModel.forward] method.
Using Longformer self attention, the memory and time complexity of the query-key matmul operation, which usually
represents the memory and time bottleneck, can be reduced from \(\mathcal{O}(n_s \times n_s)\) to
\(\mathcal{O}(n_s \times w)\), with \(n_s\) being the sequence length and \(w\) being the average window
size. It is assumed that the number of "globally" attending tokens is insignificant as compared to the number of
"locally" attending tokens.
For more information, please refer to the official paper.
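A short sketch of the global_attention_mask logic described above, assuming the allenai/longformer-base-4096 checkpoint: all tokens attend locally by default, and here only the first token is marked for global attention.
python
import torch
from transformers import AutoTokenizer, LongformerModel

tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("A long document " * 100, return_tensors="pt")
global_attention_mask = torch.zeros_like(inputs["input_ids"])  # 0: local attention
global_attention_mask[:, 0] = 1  # 1: global attention, here only for the <s> token

outputs = model(**inputs, global_attention_mask=global_attention_mask)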
Training
[LongformerForMaskedLM] is trained the exact same way [RobertaForMaskedLM] is
trained and should be used as follows:
python
from transformers import AutoTokenizer, LongformerForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForMaskedLM.from_pretrained("allenai/longformer-base-4096")
input_ids = tokenizer(f"This is a sentence from {tokenizer.mask_token} training data", return_tensors="pt").input_ids
mlm_labels = tokenizer("This is a sentence from the training data", return_tensors="pt").input_ids
loss = model(input_ids, labels=mlm_labels).loss
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
LongformerConfig
[[autodoc]] LongformerConfig
LongformerTokenizer
[[autodoc]] LongformerTokenizer
LongformerTokenizerFast
[[autodoc]] LongformerTokenizerFast
Longformer specific outputs
[[autodoc]] models.longformer.modeling_longformer.LongformerBaseModelOutput
[[autodoc]] models.longformer.modeling_longformer.LongformerBaseModelOutputWithPooling
[[autodoc]] models.longformer.modeling_longformer.LongformerMaskedLMOutput
[[autodoc]] models.longformer.modeling_longformer.LongformerQuestionAnsweringModelOutput
[[autodoc]] models.longformer.modeling_longformer.LongformerSequenceClassifierOutput
[[autodoc]] models.longformer.modeling_longformer.LongformerMultipleChoiceModelOutput
[[autodoc]] models.longformer.modeling_longformer.LongformerTokenClassifierOutput
[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerBaseModelOutput
[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerBaseModelOutputWithPooling
[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerMaskedLMOutput
[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerQuestionAnsweringModelOutput
[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerSequenceClassifierOutput
[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerMultipleChoiceModelOutput
[[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerTokenClassifierOutput
LongformerModel
[[autodoc]] LongformerModel
- forward
LongformerForMaskedLM
[[autodoc]] LongformerForMaskedLM
- forward
LongformerForSequenceClassification
[[autodoc]] LongformerForSequenceClassification
- forward
LongformerForMultipleChoice
[[autodoc]] LongformerForMultipleChoice
- forward
LongformerForTokenClassification
[[autodoc]] LongformerForTokenClassification
- forward
LongformerForQuestionAnswering
[[autodoc]] LongformerForQuestionAnswering
- forward
TFLongformerModel
[[autodoc]] TFLongformerModel
- call
TFLongformerForMaskedLM
[[autodoc]] TFLongformerForMaskedLM
- call
TFLongformerForQuestionAnswering
[[autodoc]] TFLongformerForQuestionAnswering
- call
TFLongformerForSequenceClassification
[[autodoc]] TFLongformerForSequenceClassification
- call
TFLongformerForTokenClassification
[[autodoc]] TFLongformerForTokenClassification
- call
TFLongformerForMultipleChoice
[[autodoc]] TFLongformerForMultipleChoice
- call
GroupViT
Overview
The GroupViT model was proposed in GroupViT: Semantic Segmentation Emerges from Text Supervision by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
Inspired by CLIP, GroupViT is a vision-language model that can perform zero-shot semantic segmentation on any given vocabulary categories.
The abstract from the paper is the following:
Grouping and recognition are important components of visual scene understanding, e.g., for object detection and semantic segmentation. With end-to-end deep learning systems, grouping of image regions usually happens implicitly via top-down supervision from pixel-level recognition labels. Instead, in this paper, we propose to bring back the grouping mechanism into deep networks, which allows semantic segments to emerge automatically with only text supervision. We propose a hierarchical Grouping Vision Transformer (GroupViT), which goes beyond the regular grid structure representation and learns to group image regions into progressively larger arbitrary-shaped segments. We train GroupViT jointly with a text encoder on a large-scale image-text dataset via contrastive losses. With only text supervision and without any pixel-level annotations, GroupViT learns to group together semantic regions and successfully transfers to the task of semantic segmentation in a zero-shot manner, i.e., without any further fine-tuning. It achieves a zero-shot accuracy of 52.3% mIoU on the PASCAL VOC 2012 and 22.4% mIoU on PASCAL Context datasets, and performs competitively to state-of-the-art transfer-learning methods requiring greater levels of supervision.
This model was contributed by xvjiarui. The TensorFlow version was contributed by ariG23498 with the help of Yih-Dar SHIEH, Amy Roberts, and Joao Gante.
The original code can be found here.
Usage tips
You may specify output_segmentation=True in the forward of GroupViTModel to get the segmentation logits of input texts.
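A hedged sketch of that zero-shot segmentation path, assuming the nvidia/groupvit-gcc-yfcc checkpoint and a COCO sample image; the segmentation_logits field name follows the GroupViT model output documented below.
python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, GroupViTModel

# assumed checkpoint released with the paper
processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a remote"], images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs, output_segmentation=True)
# one segmentation map per input text
print(outputs.segmentation_logits.shape)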
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GroupViT.
The quickest way to get started with GroupViT is by checking the example notebooks (which showcase zero-shot segmentation inference).
One can also check out the HuggingFace Spaces demo to play with GroupViT.
GroupViTConfig
[[autodoc]] GroupViTConfig
- from_text_vision_configs
GroupViTTextConfig
[[autodoc]] GroupViTTextConfig
GroupViTVisionConfig
[[autodoc]] GroupViTVisionConfig
GroupViTModel
[[autodoc]] GroupViTModel
- forward
- get_text_features
- get_image_features
GroupViTTextModel
[[autodoc]] GroupViTTextModel
- forward
GroupViTVisionModel
[[autodoc]] GroupViTVisionModel
- forward
TFGroupViTModel
[[autodoc]] TFGroupViTModel
- call
- get_text_features
- get_image_features
TFGroupViTTextModel
[[autodoc]] TFGroupViTTextModel
- call
TFGroupViTVisionModel
[[autodoc]] TFGroupViTVisionModel
- call
Pix2Struct
Overview
The Pix2Struct model was proposed in Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova.
The abstract from the paper is the following:
Visually-situated language is ubiquitous -- sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Perhaps due to this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images.
Tips:
Pix2Struct has been fine-tuned on a variety of tasks and datasets, ranging from image captioning and visual question answering (VQA) over different inputs (books, charts, science diagrams) to captioning UI components, etc. The full list can be found in Table 1 of the paper.
We therefore advise you to use these models for the tasks they have been fine tuned on. For instance, if you want to use Pix2Struct for UI captioning, you should use the model fine tuned on the UI dataset. If you want to use Pix2Struct for image captioning, you should use the model fine tuned on the natural images captioning dataset and so on.
If you want to use the model to perform conditional text captioning, make sure to use the processor with add_special_tokens=False.
This model was contributed by ybelkada.
The original code can be found here.
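Building on the conditional captioning tip above, the sketch below shows both unconditional and conditional captioning with the google/pix2struct-textcaps-base checkpoint; the image URL is only a placeholder example.
python
import requests
from PIL import Image
from transformers import AutoProcessor, Pix2StructForConditionalGeneration

processor = AutoProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# unconditional captioning
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])

# conditional captioning: pass add_special_tokens=False as noted in the tips above
inputs = processor(images=image, text="A picture of", return_tensors="pt", add_special_tokens=False)
generated_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])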
Resources
Fine-tuning Notebook
All models
Pix2StructConfig
[[autodoc]] Pix2StructConfig
- from_text_vision_configs
Pix2StructTextConfig
[[autodoc]] Pix2StructTextConfig
Pix2StructVisionConfig
[[autodoc]] Pix2StructVisionConfig
Pix2StructProcessor
[[autodoc]] Pix2StructProcessor
Pix2StructImageProcessor
[[autodoc]] Pix2StructImageProcessor
- preprocess
Pix2StructTextModel
[[autodoc]] Pix2StructTextModel
- forward
Pix2StructVisionModel
[[autodoc]] Pix2StructVisionModel
- forward
Pix2StructForConditionalGeneration
[[autodoc]] Pix2StructForConditionalGeneration
- forward
Custom Layers and Utilities
This page lists all the custom layers used by the library, as well as the utility functions it provides for modeling.
Most of those are only useful if you are studying the code of the models in the library.
Pytorch custom modules
[[autodoc]] pytorch_utils.Conv1D
[[autodoc]] modeling_utils.PoolerStartLogits
- forward
[[autodoc]] modeling_utils.PoolerEndLogits
- forward
[[autodoc]] modeling_utils.PoolerAnswerClass
- forward
[[autodoc]] modeling_utils.SquadHeadOutput
[[autodoc]] modeling_utils.SQuADHead
- forward
[[autodoc]] modeling_utils.SequenceSummary
- forward
PyTorch Helper Functions
[[autodoc]] pytorch_utils.apply_chunking_to_forward
[[autodoc]] pytorch_utils.find_pruneable_heads_and_indices
[[autodoc]] pytorch_utils.prune_layer
[[autodoc]] pytorch_utils.prune_conv1d_layer
[[autodoc]] pytorch_utils.prune_linear_layer
TensorFlow custom layers
[[autodoc]] modeling_tf_utils.TFConv1D
[[autodoc]] modeling_tf_utils.TFSequenceSummary
TensorFlow loss functions
[[autodoc]] modeling_tf_utils.TFCausalLanguageModelingLoss
[[autodoc]] modeling_tf_utils.TFMaskedLanguageModelingLoss
[[autodoc]] modeling_tf_utils.TFMultipleChoiceLoss
[[autodoc]] modeling_tf_utils.TFQuestionAnsweringLoss
[[autodoc]] modeling_tf_utils.TFSequenceClassificationLoss
[[autodoc]] modeling_tf_utils.TFTokenClassificationLoss
TensorFlow Helper Functions
[[autodoc]] modeling_tf_utils.get_initializer
[[autodoc]] modeling_tf_utils.keras_serializable
[[autodoc]] modeling_tf_utils.shape_list
General Utilities
This page lists all of Transformers general utility functions that are found in the file utils.py.
Most of those are only useful if you are studying the general code in the library.
Enums and namedtuples
[[autodoc]] utils.ExplicitEnum
[[autodoc]] utils.PaddingStrategy
[[autodoc]] utils.TensorType
Special Decorators
[[autodoc]] utils.add_start_docstrings
[[autodoc]] utils.add_start_docstrings_to_model_forward
[[autodoc]] utils.add_end_docstrings
[[autodoc]] utils.add_code_sample_docstrings
[[autodoc]] utils.replace_return_docstrings
Special Properties
[[autodoc]] utils.cached_property
Other Utilities
[[autodoc]] utils._LazyModule
Utilities for FeatureExtractors
This page lists all the utility functions that can be used by the audio [FeatureExtractor] in order to compute special features from a raw audio using common algorithms such as Short Time Fourier Transform or log mel spectrogram.
Most of those are only useful if you are studying the code of the audio processors in the library.
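As an illustration of how these utilities fit together, the sketch below computes a log-mel spectrogram from a synthetic sine wave. The parameter values mirror common speech settings (25 ms window, 10 ms hop at 16 kHz), and the exact keyword names should be checked against the references below; treat this as a sketch rather than a canonical recipe.
python
import numpy as np
from transformers.audio_utils import mel_filter_bank, spectrogram, window_function

# one second of a 440 Hz sine wave at 16 kHz, standing in for real audio
sampling_rate = 16000
waveform = np.sin(2 * np.pi * 440 * np.arange(sampling_rate) / sampling_rate).astype(np.float32)

# 80 mel filters for a 400-point FFT (201 frequency bins)
mel_filters = mel_filter_bank(
    num_frequency_bins=201,
    num_mel_filters=80,
    min_frequency=0.0,
    max_frequency=8000.0,
    sampling_rate=sampling_rate,
    norm=None,
    mel_scale="htk",
)

log_mel = spectrogram(
    waveform,
    window_function(400, "hann"),
    frame_length=400,
    hop_length=160,
    power=2.0,
    mel_filters=mel_filters,
    log_mel="log10",
)
print(log_mel.shape)  # (num_mel_filters, num_frames)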
Audio Transformations
[[autodoc]] audio_utils.hertz_to_mel
[[autodoc]] audio_utils.mel_to_hertz
[[autodoc]] audio_utils.mel_filter_bank
[[autodoc]] audio_utils.optimal_fft_length
[[autodoc]] audio_utils.window_function
[[autodoc]] audio_utils.spectrogram
[[autodoc]] audio_utils.power_to_db
[[autodoc]] audio_utils.amplitude_to_db
Utilities for Generation
This page lists all the utility functions used by [~generation.GenerationMixin.generate].
Generate Outputs
The output of [~generation.GenerationMixin.generate] is an instance of a subclass of
[~utils.ModelOutput]. This output is a data structure containing all the information returned
by [~generation.GenerationMixin.generate], but that can also be used as tuple or dictionary.
Here's an example:
python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2")
inputs = tokenizer("Hello, my dog is cute and ", return_tensors="pt")
generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
The generation_output object is a [~generation.GenerateDecoderOnlyOutput]. As we can
see in the documentation of that class below, this means it has the following attributes:
sequences: the generated sequences of tokens
scores (optional): the prediction scores of the language modelling head, for each generation step
hidden_states (optional): the hidden states of the model, for each generation step
attentions (optional): the attention weights of the model, for each generation step
Here we have the scores since we passed along output_scores=True, but we don't have hidden_states and
attentions because we didn't pass output_hidden_states=True or output_attentions=True.
You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you
will get None. Here for instance generation_output.scores are all the generated prediction scores of the
language modeling head, and generation_output.attentions is None.
When using our generation_output object as a tuple, it only keeps the attributes that don't have None values.
Here, for instance, it has two elements, sequences then scores, so
python
generation_output[:2]
will return the tuple (generation_output.sequences, generation_output.scores) for instance.
When using our generation_output object as a dictionary, it only keeps the attributes that don't have None
values. Here, for instance, it has two keys that are sequences and scores.
We document here all output types.
PyTorch
[[autodoc]] generation.GenerateDecoderOnlyOutput
[[autodoc]] generation.GenerateEncoderDecoderOutput
[[autodoc]] generation.GenerateBeamDecoderOnlyOutput
[[autodoc]] generation.GenerateBeamEncoderDecoderOutput
TensorFlow
[[autodoc]] generation.TFGreedySearchEncoderDecoderOutput
[[autodoc]] generation.TFGreedySearchDecoderOnlyOutput
[[autodoc]] generation.TFSampleEncoderDecoderOutput
[[autodoc]] generation.TFSampleDecoderOnlyOutput
[[autodoc]] generation.TFBeamSearchEncoderDecoderOutput
[[autodoc]] generation.TFBeamSearchDecoderOnlyOutput
[[autodoc]] generation.TFBeamSampleEncoderDecoderOutput
[[autodoc]] generation.TFBeamSampleDecoderOnlyOutput
[[autodoc]] generation.TFContrastiveSearchEncoderDecoderOutput
[[autodoc]] generation.TFContrastiveSearchDecoderOnlyOutput
FLAX
[[autodoc]] generation.FlaxSampleOutput
[[autodoc]] generation.FlaxGreedySearchOutput
[[autodoc]] generation.FlaxBeamSearchOutput
LogitsProcessor
A [LogitsProcessor] can be used to modify the prediction scores of a language model head for
generation.
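For example, one or more processors can be passed to [~generation.GenerationMixin.generate] through the logits_processor argument. The sketch below uses [MinLengthLogitsProcessor] with the openai-community/gpt2 checkpoint already used above; the prompt is just a placeholder.
python
from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessorList, MinLengthLogitsProcessor

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

inputs = tokenizer("Hello, my dog is", return_tensors="pt")
# forbid the EOS token until at least 15 tokens have been generated
logits_processor = LogitsProcessorList([MinLengthLogitsProcessor(15, eos_token_id=model.config.eos_token_id)])
outputs = model.generate(**inputs, logits_processor=logits_processor, max_length=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))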
PyTorch
[[autodoc]] AlternatingCodebooksLogitsProcessor
- call
[[autodoc]] ClassifierFreeGuidanceLogitsProcessor
- call
[[autodoc]] EncoderNoRepeatNGramLogitsProcessor
- call
[[autodoc]] EncoderRepetitionPenaltyLogitsProcessor
- call
[[autodoc]] EpsilonLogitsWarper
- call
[[autodoc]] EtaLogitsWarper
- call
[[autodoc]] ExponentialDecayLengthPenalty
- call
[[autodoc]] ForcedBOSTokenLogitsProcessor
- call
[[autodoc]] ForcedEOSTokenLogitsProcessor
- call
[[autodoc]] ForceTokensLogitsProcessor
- call
[[autodoc]] HammingDiversityLogitsProcessor
- call
[[autodoc]] InfNanRemoveLogitsProcessor
- call
[[autodoc]] LogitNormalization
- call
[[autodoc]] LogitsProcessor
- call
[[autodoc]] LogitsProcessorList
- call
[[autodoc]] LogitsWarper
- call
[[autodoc]] MinLengthLogitsProcessor
- call
[[autodoc]] MinNewTokensLengthLogitsProcessor
- call
[[autodoc]] NoBadWordsLogitsProcessor
- call
[[autodoc]] NoRepeatNGramLogitsProcessor
- call
[[autodoc]] PrefixConstrainedLogitsProcessor
- call
[[autodoc]] RepetitionPenaltyLogitsProcessor
- call
[[autodoc]] SequenceBiasLogitsProcessor
- call
[[autodoc]] SuppressTokensAtBeginLogitsProcessor
- call
[[autodoc]] SuppressTokensLogitsProcessor
- call
[[autodoc]] TemperatureLogitsWarper
- call
[[autodoc]] TopKLogitsWarper
- call
[[autodoc]] TopPLogitsWarper
- call
[[autodoc]] TypicalLogitsWarper
- call
[[autodoc]] UnbatchedClassifierFreeGuidanceLogitsProcessor
- call
[[autodoc]] WhisperTimeStampLogitsProcessor
- call
TensorFlow
[[autodoc]] TFForcedBOSTokenLogitsProcessor
- call
[[autodoc]] TFForcedEOSTokenLogitsProcessor
- call
[[autodoc]] TFForceTokensLogitsProcessor
- call
[[autodoc]] TFLogitsProcessor
- call
[[autodoc]] TFLogitsProcessorList
- call
[[autodoc]] TFLogitsWarper
- call
[[autodoc]] TFMinLengthLogitsProcessor
- call
[[autodoc]] TFNoBadWordsLogitsProcessor
- call
[[autodoc]] TFNoRepeatNGramLogitsProcessor
- call
[[autodoc]] TFRepetitionPenaltyLogitsProcessor
- call
[[autodoc]] TFSuppressTokensAtBeginLogitsProcessor
- call
[[autodoc]] TFSuppressTokensLogitsProcessor
- call
[[autodoc]] TFTemperatureLogitsWarper
- call
[[autodoc]] TFTopKLogitsWarper
- call
[[autodoc]] TFTopPLogitsWarper
- call
FLAX
[[autodoc]] FlaxForcedBOSTokenLogitsProcessor
- call
[[autodoc]] FlaxForcedEOSTokenLogitsProcessor
- call
[[autodoc]] FlaxForceTokensLogitsProcessor
- call
[[autodoc]] FlaxLogitsProcessor
- call
[[autodoc]] FlaxLogitsProcessorList
- call
[[autodoc]] FlaxLogitsWarper
- call
[[autodoc]] FlaxMinLengthLogitsProcessor
- call
[[autodoc]] FlaxSuppressTokensAtBeginLogitsProcessor
- call
[[autodoc]] FlaxSuppressTokensLogitsProcessor
- call
[[autodoc]] FlaxTemperatureLogitsWarper
- call
[[autodoc]] FlaxTopKLogitsWarper
- call
[[autodoc]] FlaxTopPLogitsWarper
- call
[[autodoc]] FlaxWhisperTimeStampLogitsProcessor
- call
StoppingCriteria
A [StoppingCriteria] can be used to change when to stop generation (other than EOS token). Please note that this is exclusively available to our PyTorch implementations.
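As a minimal sketch, a [StoppingCriteriaList] can be passed directly to [~generation.GenerationMixin.generate]:
python
from transformers import AutoModelForCausalLM, AutoTokenizer, MaxLengthCriteria, StoppingCriteriaList

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
inputs = tokenizer("Hello, my dog is", return_tensors="pt")

# stop as soon as the total sequence length reaches 20 tokens
stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])
outputs = model.generate(**inputs, stopping_criteria=stopping_criteria)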
[[autodoc]] StoppingCriteria
- call
[[autodoc]] StoppingCriteriaList
- call
[[autodoc]] MaxLengthCriteria
- call
[[autodoc]] MaxTimeCriteria
- call
Constraints
A [Constraint] can be used to force the generation to include specific tokens or sequences in the output. Please note that this is exclusively available to our PyTorch implementations.
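For example, a [PhrasalConstraint] can be passed to [~generation.GenerationMixin.generate] (which requires beam search, i.e. num_beams > 1) to force a phrase to appear in the output. A minimal sketch:
python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, PhrasalConstraint

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")

inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
constraint = PhrasalConstraint(tokenizer("wunderbar", add_special_tokens=False).input_ids)
outputs = model.generate(**inputs, constraints=[constraint], num_beams=5)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))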
[[autodoc]] Constraint
[[autodoc]] PhrasalConstraint
[[autodoc]] DisjunctiveConstraint
[[autodoc]] ConstraintListState
BeamSearch
[[autodoc]] BeamScorer
- process
- finalize
[[autodoc]] BeamSearchScorer
- process
- finalize
[[autodoc]] ConstrainedBeamSearchScorer
- process
- finalize
Utilities
[[autodoc]] top_k_top_p_filtering
[[autodoc]] tf_top_k_top_p_filtering
Streamers
[[autodoc]] TextStreamer
[[autodoc]] TextIteratorStreamer
Caches
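The cache classes share a common interface. A minimal sketch of the [DynamicCache] API (the shapes below are illustrative; real key/value states come from a model's attention layers):
python
import torch
from transformers import DynamicCache

cache = DynamicCache()
# update() appends the new key/value states for a given layer and returns the full cached states
key_states = torch.randn(1, 12, 5, 64)    # (batch, num_heads, seq_len, head_dim)
value_states = torch.randn(1, 12, 5, 64)
keys, values = cache.update(key_states, value_states, layer_idx=0)

print(cache.get_seq_length())               # 5
legacy = cache.to_legacy_cache()            # tuple-of-tuples format used by older code
cache = DynamicCache.from_legacy_cache(legacy)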
[[autodoc]] Cache
- update
[[autodoc]] DynamicCache
- update
- get_seq_length
- reorder_cache
- to_legacy_cache
- from_legacy_cache
[[autodoc]] SinkCache
- update
- get_seq_length
- reorder_cache
[[autodoc]] StaticCache
- update
- get_seq_length |
Time Series Utilities
This page lists all the utility functions and classes that can be used for Time Series based models.
Most of those are only useful if you are studying the code of the time series models or you wish to add to the collection of distributional output classes.
Distributional Output
[[autodoc]] time_series_utils.NormalOutput
[[autodoc]] time_series_utils.StudentTOutput
[[autodoc]] time_series_utils.NegativeBinomialOutput |
Utilities for Tokenizers
This page lists all the utility functions used by the tokenizers, mainly the class
[~tokenization_utils_base.PreTrainedTokenizerBase] that implements the common methods between
[PreTrainedTokenizer] and [PreTrainedTokenizerFast] and the mixin
[~tokenization_utils_base.SpecialTokensMixin].
Most of those are only useful if you are studying the code of the tokenizers in the library.
PreTrainedTokenizerBase
[[autodoc]] tokenization_utils_base.PreTrainedTokenizerBase
- call
- all
SpecialTokensMixin
[[autodoc]] tokenization_utils_base.SpecialTokensMixin
Enums and namedtuples
[[autodoc]] tokenization_utils_base.TruncationStrategy
[[autodoc]] tokenization_utils_base.CharSpan
[[autodoc]] tokenization_utils_base.TokenSpan |
Utilities for Image Processors
This page lists all the utility functions used by the image processors, mainly the functional
transformations used to process the images.
Most of those are only useful if you are studying the code of the image processors in the library.
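For example, the bounding box helpers convert between (center_x, center_y, width, height) and corner formats; a minimal sketch:
python
import numpy as np
from transformers.image_transforms import center_to_corners_format, corners_to_center_format

# one box in (center_x, center_y, width, height) format
boxes = np.array([[0.5, 0.5, 0.2, 0.4]])
corners = center_to_corners_format(boxes)   # (x_min, y_min, x_max, y_max) -> [[0.4, 0.3, 0.6, 0.7]]
print(corners_to_center_format(corners))    # back to the original format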
Image Transformations
[[autodoc]] image_transforms.center_crop
[[autodoc]] image_transforms.center_to_corners_format
[[autodoc]] image_transforms.corners_to_center_format
[[autodoc]] image_transforms.id_to_rgb
[[autodoc]] image_transforms.normalize
[[autodoc]] image_transforms.pad
[[autodoc]] image_transforms.rgb_to_id
[[autodoc]] image_transforms.rescale
[[autodoc]] image_transforms.resize
[[autodoc]] image_transforms.to_pil_image
ImageProcessingMixin
[[autodoc]] image_processing_utils.ImageProcessingMixin |
Utilities for Trainer
This page lists all the utility functions used by [Trainer].
Most of those are only useful if you are studying the code of the Trainer in the library.
Utilities
[[autodoc]] EvalPrediction
[[autodoc]] IntervalStrategy
[[autodoc]] enable_full_determinism
[[autodoc]] set_seed
[[autodoc]] torch_distributed_zero_first
Callbacks internals
[[autodoc]] trainer_callback.CallbackHandler
Distributed Evaluation
[[autodoc]] trainer_pt_utils.DistributedTensorGatherer
Trainer Argument Parser
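[HfArgumentParser] turns dataclasses into command-line arguments, which is the pattern used throughout the example scripts. A minimal sketch (ModelArguments is a hypothetical dataclass defined here for illustration):
python
from dataclasses import dataclass, field
from transformers import HfArgumentParser, TrainingArguments

@dataclass
class ModelArguments:
    model_name_or_path: str = field(default="google-bert/bert-base-uncased")

parser = HfArgumentParser((ModelArguments, TrainingArguments))
model_args, training_args = parser.parse_args_into_dataclasses()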
[[autodoc]] HfArgumentParser
Debug Utilities
[[autodoc]] debug_utils.DebugUnderflowOverflow |
Utilities for pipelines
This page lists all the utility functions the library provides for pipelines.
Most of those are only useful if you are studying the code of the models in the library.
Argument handling
[[autodoc]] pipelines.ArgumentHandler
[[autodoc]] pipelines.ZeroShotClassificationArgumentHandler
[[autodoc]] pipelines.QuestionAnsweringArgumentHandler
Data format
[[autodoc]] pipelines.PipelineDataFormat
[[autodoc]] pipelines.CsvPipelineDataFormat
[[autodoc]] pipelines.JsonPipelineDataFormat
[[autodoc]] pipelines.PipedPipelineDataFormat
Utilities
[[autodoc]] pipelines.PipelineException |
Agents & Tools
Transformers Agents is an experimental API which is subject to change at any time. Results returned by the agents
can vary as the APIs or underlying models are prone to change.
To learn more about agents and tools make sure to read the introductory guide. This page
contains the API docs for the underlying classes.
Agents
We provide three types of agents: [HfAgent] uses inference endpoints for opensource models, [LocalAgent] uses a model of your choice locally and [OpenAiAgent] uses OpenAI closed models.
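A minimal sketch with [HfAgent] (the endpoint URL is the one used in the introductory guide and may change, since the API is experimental):
python
from transformers import HfAgent

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
agent.run("Draw me a picture of rivers and lakes.")
agent.chat("Transform the picture so that there is a rock in there.")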
HfAgent
[[autodoc]] HfAgent
LocalAgent
[[autodoc]] LocalAgent
OpenAiAgent
[[autodoc]] OpenAiAgent
AzureOpenAiAgent
[[autodoc]] AzureOpenAiAgent
Agent
[[autodoc]] Agent
- chat
- run
- prepare_for_new_chat
Tools
load_tool
[[autodoc]] load_tool
Tool
[[autodoc]] Tool
PipelineTool
[[autodoc]] PipelineTool
RemoteTool
[[autodoc]] RemoteTool
launch_gradio_demo
[[autodoc]] launch_gradio_demo
Agent Types
Agents can handle any type of object in-between tools; tools, being completely multimodal, can accept and return
text, image, audio, video, among other types. In order to increase compatibility between tools, as well as to
correctly render these returns in ipython (jupyter, colab, ipython notebooks, ), we implement wrapper classes
around these types.
The wrapped objects should continue behaving as initially; a text object should still behave as a string, an image
object should still behave as a PIL.Image.
These types have three specific purposes:
Calling to_raw on the type should return the underlying object
Calling to_string on the type should return the object as a string: that can be the string in case of an AgentText
but will be the path of the serialized version of the object in other instances
Displaying it in an ipython kernel should display the object correctly
AgentText
[[autodoc]] transformers.tools.agent_types.AgentText
AgentImage
[[autodoc]] transformers.tools.agent_types.AgentImage
AgentAudio
[[autodoc]] transformers.tools.agent_types.AgentAudio |
Feature Extractor
A feature extractor is in charge of preparing input features for audio or vision models. This includes feature extraction from sequences, e.g., pre-processing audio files to generate Log-Mel Spectrogram features, feature extraction from images, e.g., cropping image files, but also padding, normalization, and conversion to NumPy, PyTorch, and TensorFlow tensors.
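A minimal sketch with an audio feature extractor (the array below is one second of silence, used only for illustration):
python
import numpy as np
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
raw_speech = [np.zeros(16000, dtype=np.float32)]  # a list of 1D waveforms sampled at 16 kHz
inputs = feature_extractor(raw_speech, sampling_rate=16000, padding=True, return_tensors="pt")
print(inputs.input_values.shape)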
FeatureExtractionMixin
[[autodoc]] feature_extraction_utils.FeatureExtractionMixin
- from_pretrained
- save_pretrained
SequenceFeatureExtractor
[[autodoc]] SequenceFeatureExtractor
- pad
BatchFeature
[[autodoc]] BatchFeature
ImageFeatureExtractionMixin
[[autodoc]] image_utils.ImageFeatureExtractionMixin |
Generation
Each framework has a generate method for text generation implemented in their respective GenerationMixin class:
PyTorch [~generation.GenerationMixin.generate] is implemented in [~generation.GenerationMixin].
TensorFlow [~generation.TFGenerationMixin.generate] is implemented in [~generation.TFGenerationMixin].
Flax/JAX [~generation.FlaxGenerationMixin.generate] is implemented in [~generation.FlaxGenerationMixin].
Regardless of your framework of choice, you can parameterize the generate method with a [~generation.GenerationConfig]
class instance. Please refer to this class for the complete list of generation parameters, which control the behavior
of the generation method.
To learn how to inspect a model's generation configuration, what are the defaults, how to change the parameters ad hoc,
and how to create and save a customized generation configuration, refer to the
text generation strategies guide. The guide also explains how to use related features,
like token streaming.
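A minimal sketch: inspect a model's default generation configuration, then build and save a custom one that can be passed to generate:
python
from transformers import AutoModelForCausalLM, GenerationConfig

model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
print(model.generation_config)  # defaults shipped with (or derived from) the checkpoint

generation_config = GenerationConfig(max_new_tokens=50, do_sample=True, top_k=50)
generation_config.save_pretrained("my_generation_config")  # reload with GenerationConfig.from_pretrained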
GenerationConfig
[[autodoc]] generation.GenerationConfig
- from_pretrained
- from_model_config
- save_pretrained
GenerationMixin
[[autodoc]] generation.GenerationMixin
- generate
- compute_transition_scores
TFGenerationMixin
[[autodoc]] generation.TFGenerationMixin
- generate
- compute_transition_scores
FlaxGenerationMixin
[[autodoc]] generation.FlaxGenerationMixin
- generate |
Tokenizer
A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most
of the tokenizers are available in two flavors: a full python implementation and a "Fast" implementation based on the
Rust library 🤗 Tokenizers. The "Fast" implementations allow:
a significant speed-up, in particular when doing batched tokenization, and
additional methods to map between the original string (characters and words) and the token space (e.g. getting the
index of the token comprising a given character or the span of characters corresponding to a given token).
The base classes [PreTrainedTokenizer] and [PreTrainedTokenizerFast]
implement the common methods for encoding string inputs into model inputs (see below) and instantiating/saving python and
"Fast" tokenizers either from a local file or directory or from a pretrained tokenizer provided by the library
(downloaded from HuggingFace's AWS S3 repository). They both rely on
[~tokenization_utils_base.PreTrainedTokenizerBase] that contains the common methods, and
[~tokenization_utils_base.SpecialTokensMixin].
[PreTrainedTokenizer] and [PreTrainedTokenizerFast] thus implement the main
methods for using all the tokenizers:
Tokenizing (splitting strings into sub-word token strings), converting token strings to ids and back, and
encoding/decoding (i.e., tokenizing and converting to integers).
Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece).
Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the
tokenizer for easy access and making sure they are not split during tokenization.
[BatchEncoding] holds the output of the
[~tokenization_utils_base.PreTrainedTokenizerBase]'s encoding methods (__call__,
encode_plus and batch_encode_plus) and is derived from a Python dictionary. When the tokenizer is a pure python
tokenizer, this class behaves just like a standard python dictionary and holds the various model inputs computed by
these methods (input_ids, attention_mask). When the tokenizer is a "Fast" tokenizer (i.e., backed by
HuggingFace tokenizers library), this class provides in addition
several advanced alignment methods which can be used to map between the original string (character and words) and the
token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding
to a given token).
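A minimal sketch of those alignment methods with a fast tokenizer:
python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")  # fast tokenizer by default
encoding = tokenizer("Transformers are great!")

print(encoding.tokens())          # token strings
print(encoding.word_ids())        # word index for every token (None for special tokens)
print(encoding.char_to_token(3))  # index of the token containing the character at position 3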
PreTrainedTokenizer
[[autodoc]] PreTrainedTokenizer
- call
- add_tokens
- add_special_tokens
- apply_chat_template
- batch_decode
- decode
- encode
- push_to_hub
- all
PreTrainedTokenizerFast
The [PreTrainedTokenizerFast] depends on the tokenizers library. The tokenizers obtained from the 🤗 tokenizers library can be
loaded very simply into 🤗 transformers. Take a look at the Using tokenizers from 🤗 tokenizers page to understand how this is done.
[[autodoc]] PreTrainedTokenizerFast
- call
- add_tokens
- add_special_tokens
- apply_chat_template
- batch_decode
- decode
- encode
- push_to_hub
- all
BatchEncoding
[[autodoc]] BatchEncoding |
Optimization
The .optimization module provides:
an optimizer with fixed weight decay that can be used to fine-tune models (see the example below),
several schedules in the form of schedule objects that inherit from _LRSchedule, and
a gradient accumulation class to accumulate the gradients of multiple batches.
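A minimal sketch combining a standard PyTorch optimizer with one of the schedules (the tiny torch.nn.Linear stands in for any model):
python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(10, 2)  # any torch.nn.Module; a Transformers model works the same way
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=100, num_training_steps=1000)

for step in range(1000):
    # ... compute the loss and call loss.backward() here ...
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()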
AdamW (PyTorch)
[[autodoc]] AdamW
AdaFactor (PyTorch)
[[autodoc]] Adafactor
AdamWeightDecay (TensorFlow)
[[autodoc]] AdamWeightDecay
[[autodoc]] create_optimizer
Schedules
Learning Rate Schedules (Pytorch)
[[autodoc]] SchedulerType
[[autodoc]] get_scheduler
[[autodoc]] get_constant_schedule
[[autodoc]] get_constant_schedule_with_warmup
[[autodoc]] get_cosine_schedule_with_warmup
[[autodoc]] get_cosine_with_hard_restarts_schedule_with_warmup
[[autodoc]] get_linear_schedule_with_warmup
[[autodoc]] get_polynomial_decay_schedule_with_warmup
[[autodoc]] get_inverse_sqrt_schedule
Warmup (TensorFlow)
[[autodoc]] WarmUp
Gradient Strategies
GradientAccumulator (TensorFlow)
[[autodoc]] GradientAccumulator |
Models
The base classes [PreTrainedModel], [TFPreTrainedModel], and
[FlaxPreTrainedModel] implement the common methods for loading/saving a model either from a local
file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS
S3 repository).
[PreTrainedModel] and [TFPreTrainedModel] also implement a few methods which
are common among all the models to:
resize the input token embeddings when new tokens are added to the vocabulary, and
prune the attention heads of the model (see the example below).
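A minimal sketch of both methods:
python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModel.from_pretrained("google-bert/bert-base-uncased")

# add new tokens and resize the embedding matrix accordingly
tokenizer.add_tokens(["<new_token>"])
model.resize_token_embeddings(len(tokenizer))

# prune heads 0 and 2 of layer 1 ({layer_index: [head_indices]})
model.prune_heads({1: [0, 2]})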
The other methods that are common to each model are defined in [~modeling_utils.ModuleUtilsMixin]
(for the PyTorch models) and [~modeling_tf_utils.TFModuleUtilsMixin] (for the TensorFlow models) or
for text generation, [~generation.GenerationMixin] (for the PyTorch models),
[~generation.TFGenerationMixin] (for the TensorFlow models) and
[~generation.FlaxGenerationMixin] (for the Flax/JAX models).
PreTrainedModel
[[autodoc]] PreTrainedModel
- push_to_hub
- all
Large model loading
In Transformers 4.20.0, the [~PreTrainedModel.from_pretrained] method has been reworked to accommodate large models using Accelerate. This requires Accelerate >= 0.9.0 and PyTorch >= 1.9.0. Instead of creating the full model, then loading the pretrained weights inside it (which takes twice the size of the model in RAM, one for the randomly initialized model, one for the weights), there is an option to create the model as an empty shell, then only materialize its parameters when the pretrained weights are loaded.
This option can be activated with low_cpu_mem_usage=True. The model is first created on the Meta device (with empty weights) and the state dict is then loaded inside it (shard by shard in the case of a sharded checkpoint). This way the maximum RAM used is the full size of the model only.
python
from transformers import AutoModelForSeq2SeqLM
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", low_cpu_mem_usage=True)
Moreover, you can directly place the model on different devices if it doesn't fully fit in RAM (only works for inference for now). With device_map="auto", Accelerate will determine where to put each layer to maximize the use of your fastest devices (GPUs) and offload the rest on the CPU, or even the hard drive if you don't have enough GPU RAM (or CPU RAM). Even if the model is split across several devices, it will run as you would normally expect.
When passing a device_map, low_cpu_mem_usage is automatically set to True, so you don't need to specify it:
python
from transformers import AutoModelForSeq2SeqLM
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
You can inspect how the model was split across devices by looking at its hf_device_map attribute:
py
t0pp.hf_device_map
python out
{'shared': 0,
'decoder.embed_tokens': 0,
'encoder': 0,
'decoder.block.0': 0,
'decoder.block.1': 1,
'decoder.block.2': 1,
'decoder.block.3': 1,
'decoder.block.4': 1,
'decoder.block.5': 1,
'decoder.block.6': 1,
'decoder.block.7': 1,
'decoder.block.8': 1,
'decoder.block.9': 1,
'decoder.block.10': 1,
'decoder.block.11': 1,
'decoder.block.12': 1,
'decoder.block.13': 1,
'decoder.block.14': 1,
'decoder.block.15': 1,
'decoder.block.16': 1,
'decoder.block.17': 1,
'decoder.block.18': 1,
'decoder.block.19': 1,
'decoder.block.20': 1,
'decoder.block.21': 1,
'decoder.block.22': 'cpu',
'decoder.block.23': 'cpu',
'decoder.final_layer_norm': 'cpu',
'decoder.dropout': 'cpu',
'lm_head': 'cpu'}
You can also write your own device map following the same format (a dictionary layer name to device). It should map all parameters of the model to a given device, but you don't have to detail where all the submodules of one layer go if that layer is entirely on the same device. For instance, the following device map would work properly for T0pp (as long as you have the GPU memory):
python
device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1}
Another way to minimize the memory impact of your model is to instantiate it at a lower precision dtype (like torch.float16) or use direct quantization techniques as described below.
Model Instantiation dtype
Under PyTorch a model normally gets instantiated with torch.float32 format. This can be an issue if one tries to
load a model whose weights are in fp16, since it'd require twice as much memory. To overcome this limitation, you can
either explicitly pass the desired dtype using the torch_dtype argument:
python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
or, if you want the model to always load in the most optimal memory pattern, you can use the special value "auto",
and then dtype will be automatically derived from the model's weights:
python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
Models instantiated from scratch can also be told which dtype to use, since from_config accepts torch_dtype as well:
python
config = T5Config.from_pretrained("t5")
model = AutoModel.from_config(config, torch_dtype=torch.float16)
Due to PyTorch design, this functionality is only available for floating dtypes.
ModuleUtilsMixin
[[autodoc]] modeling_utils.ModuleUtilsMixin
TFPreTrainedModel
[[autodoc]] TFPreTrainedModel
- push_to_hub
- all
TFModelUtilsMixin
[[autodoc]] modeling_tf_utils.TFModelUtilsMixin
FlaxPreTrainedModel
[[autodoc]] FlaxPreTrainedModel
- push_to_hub
- all
Pushing to the Hub
[[autodoc]] utils.PushToHubMixin
Sharded checkpoints
[[autodoc]] modeling_utils.load_sharded_checkpoint |
Pipelines
The pipelines are a great and easy way to use models for inference. These pipelines are objects that abstract most of
the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity
Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. See the
task summary for examples of use.
There are two categories of pipeline abstractions to be aware of:
The [pipeline] which is the most powerful object encapsulating all other pipelines.
Task-specific pipelines are available for audio, computer vision, natural language processing, and multimodal tasks.
The pipeline abstraction
The pipeline abstraction is a wrapper around all the other available pipelines. It is instantiated as any other
pipeline but can provide additional quality of life.
Simple call on one item:
thon
pipe = pipeline("text-classification")
pipe("This restaurant is awesome")
[{'label': 'POSITIVE', 'score': 0.9998743534088135}]
If you want to use a specific model from the hub you can ignore the task if the model on
the hub already defines it:
thon
pipe = pipeline(model="FacebookAI/roberta-large-mnli")
pipe("This restaurant is awesome")
[{'label': 'NEUTRAL', 'score': 0.7313136458396912}]
To call a pipeline on many items, you can call it with a list.
thon
pipe = pipeline("text-classification")
pipe(["This restaurant is awesome", "This restaurant is awful"])
[{'label': 'POSITIVE', 'score': 0.9998743534088135},
{'label': 'NEGATIVE', 'score': 0.9996669292449951}]
To iterate over full datasets it is recommended to use a dataset directly. This means you don't need to allocate
the whole dataset at once, nor do you need to do batching yourself. This should work just as fast as custom loops on
GPU. If it doesn't, don't hesitate to create an issue.
thon
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm
pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")
# KeyDataset (only pt) will simply return the item in the dict returned by the dataset item
# as we're not interested in the target part of the dataset. For sentence pair use KeyPairDataset
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
    print(out)
# {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
# {"text": .}
# .
For ease of use, a generator is also possible:
thon
from transformers import pipeline
pipe = pipeline("text-classification")
def data():
    while True:
        # This could come from a dataset, a database, a queue or HTTP request
        # in a server
        # Caveat: because this is iterative, you cannot use num_workers > 1 variable
        # to use multiple threads to preprocess data. You can still have 1 thread that
        # does the preprocessing while the main runs the big inference
        yield "This is a test"

for out in pipe(data()):
    print(out)
# {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
# {"text": .}
# .
[[autodoc]] pipeline
Pipeline batching
All pipelines can use batching. This will work
whenever the pipeline uses its streaming ability (so when passing lists or Dataset or generator).
thon
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
pipe = pipeline("text-classification", device=0)
for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
    print(out)
# [{'label': 'POSITIVE', 'score': 0.9998743534088135}]
# Exactly the same output as before, but the content are passed
# as batches to the model
However, this is not automatically a win for performance. It can be either a 10x speedup or 5x slowdown depending
on hardware, data and the actual model being used.
Example where it's mostly a speedup:
thon
from transformers import pipeline
from torch.utils.data import Dataset
from tqdm.auto import tqdm
pipe = pipeline("text-classification", device=0)
class MyDataset(Dataset):
    def __len__(self):
        return 5000

    def __getitem__(self, i):
        return "This is a test"

dataset = MyDataset()

for batch_size in [1, 8, 64, 256]:
    print("-" * 30)
    print(f"Streaming batch_size={batch_size}")
    for out in tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)):
        pass
On GTX 970
Streaming no batching
100%|██████████| 5000/5000 [00:26<00:00, 187.52it/s]
Streaming batch_size=8
100%|██████████| 5000/5000 [00:04<00:00, 1205.95it/s]
Streaming batch_size=64
100%|██████████| 5000/5000 [00:02<00:00, 2478.24it/s]
Streaming batch_size=256
100%|██████████| 5000/5000 [00:01<00:00, 2554.43it/s]
(diminishing returns, saturated the GPU)
Example where it's mostly a slowdown:
thon
class MyDataset(Dataset):
    def __len__(self):
        return 5000

    def __getitem__(self, i):
        if i % 64 == 0:
            n = 100
        else:
            n = 1
        return "This is a test" * n
This dataset contains the occasional very long sentence compared to the others. In that case, the whole batch will need to be 400
tokens long, so the whole batch will be [64, 400] instead of [64, 4], leading to the high slowdown. Even worse, on
bigger batches, the program simply crashes.
Streaming no batching
100%|██████████| 1000/1000 [00:05<00:00, 183.69it/s]
Streaming batch_size=8
100%|██████████| 1000/1000 [00:03<00:00, 265.74it/s]
Streaming batch_size=64
100%|██████████| 1000/1000 [00:26<00:00, 37.80it/s]
Streaming batch_size=256
0%|          | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/nicolas/src/transformers/test.py", line 42, in
for out in tqdm(pipe(dataset, batch_size=256), total=len(dataset)):
.
q = q / math.sqrt(dim_per_head) # (bs, n_heads, q_length, dim_per_head)
RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 3.95 GiB total capacity; 1.72 GiB already allocated; 354.88 MiB free; 2.46 GiB reserved in total by PyTorch)
There are no good (general) solutions for this problem, and your mileage may vary depending on your use case. A rule of
thumb for users:

Measure performance on your load, with your hardware. Measure, measure, and keep measuring. Real numbers are the
only way to go.
If you are latency constrained (live product doing inference), don't batch.
If you are using CPU, don't batch.
If you are optimizing for throughput (you want to run your model on a bunch of static data) on GPU, then:
If you have no clue about the size of the sequence_length ("natural" data), by default don't batch; measure and
try tentatively to add it, and add OOM checks to recover when it fails (and it will, at some point, if you don't
control the sequence_length).
If your sequence_length is super regular, then batching is more likely to be VERY interesting; measure and push
it until you get OOMs.
The larger the GPU, the more likely batching is going to be interesting.
As soon as you enable batching, make sure you can handle OOMs nicely.
Pipeline chunk batching
zero-shot-classification and question-answering are slightly specific in the sense that a single input might yield
multiple forward passes of a model. Under normal circumstances, this would cause issues with the batch_size argument.
In order to circumvent this issue, both of these pipelines are a bit specific: they are ChunkPipeline instead of
regular Pipeline. In short:
python
preprocessed = pipe.preprocess(inputs)
model_outputs = pipe.forward(preprocessed)
outputs = pipe.postprocess(model_outputs)
Now becomes:
python
all_model_outputs = []
for preprocessed in pipe.preprocess(inputs):
    model_outputs = pipe.forward(preprocessed)
    all_model_outputs.append(model_outputs)
outputs = pipe.postprocess(all_model_outputs)
This should be very transparent to your code because the pipelines are used in
the same way.
This is a simplified view, since the pipeline handles the batching automatically! Meaning you don't have to care
about how many forward passes your inputs are actually going to trigger; you can optimize the batch_size
independently of the inputs. The caveats from the previous section still apply.
Pipeline custom code
If you want to override a specific pipeline, don't hesitate to create an issue for your task at hand: the goal of the
pipelines is to be easy to use and support most cases, so transformers could maybe support your use case.
If you simply want to try, you can:
Subclass your pipeline of choice
thon
class MyPipeline(TextClassificationPipeline):
    def postprocess(self, model_outputs, **kwargs):
        # Your code goes here
        scores = super().postprocess(model_outputs, **kwargs)
        # And here
        return scores

my_pipeline = MyPipeline(model=model, tokenizer=tokenizer)
or if you use the pipeline function, then:
my_pipeline = pipeline(model="xxxx", pipeline_class=MyPipeline)
That should enable you to do all the custom code you want.
Implementing a pipeline
Implementing a new pipeline
Audio
Pipelines available for audio tasks include the following.
AudioClassificationPipeline
[[autodoc]] AudioClassificationPipeline
- call
- all
AutomaticSpeechRecognitionPipeline
[[autodoc]] AutomaticSpeechRecognitionPipeline
- call
- all
TextToAudioPipeline
[[autodoc]] TextToAudioPipeline
- call
- all
ZeroShotAudioClassificationPipeline
[[autodoc]] ZeroShotAudioClassificationPipeline
- call
- all
Computer vision
Pipelines available for computer vision tasks include the following.
DepthEstimationPipeline
[[autodoc]] DepthEstimationPipeline
- call
- all
ImageClassificationPipeline
[[autodoc]] ImageClassificationPipeline
- call
- all
ImageSegmentationPipeline
[[autodoc]] ImageSegmentationPipeline
- call
- all
ImageToImagePipeline
[[autodoc]] ImageToImagePipeline
- call
- all
ObjectDetectionPipeline
[[autodoc]] ObjectDetectionPipeline
- call
- all
VideoClassificationPipeline
[[autodoc]] VideoClassificationPipeline
- call
- all
ZeroShotImageClassificationPipeline
[[autodoc]] ZeroShotImageClassificationPipeline
- call
- all
ZeroShotObjectDetectionPipeline
[[autodoc]] ZeroShotObjectDetectionPipeline
- call
- all
Natural Language Processing
Pipelines available for natural language processing tasks include the following.
ConversationalPipeline
[[autodoc]] Conversation
[[autodoc]] ConversationalPipeline
- call
- all
FillMaskPipeline
[[autodoc]] FillMaskPipeline
- call
- all
QuestionAnsweringPipeline
[[autodoc]] QuestionAnsweringPipeline
- call
- all
SummarizationPipeline
[[autodoc]] SummarizationPipeline
- call
- all
TableQuestionAnsweringPipeline
[[autodoc]] TableQuestionAnsweringPipeline
- call
TextClassificationPipeline
[[autodoc]] TextClassificationPipeline
- call
- all
TextGenerationPipeline
[[autodoc]] TextGenerationPipeline
- call
- all
Text2TextGenerationPipeline
[[autodoc]] Text2TextGenerationPipeline
- call
- all
TokenClassificationPipeline
[[autodoc]] TokenClassificationPipeline
- call
- all
TranslationPipeline
[[autodoc]] TranslationPipeline
- call
- all
ZeroShotClassificationPipeline
[[autodoc]] ZeroShotClassificationPipeline
- call
- all
Multimodal
Pipelines available for multimodal tasks include the following.
DocumentQuestionAnsweringPipeline
[[autodoc]] DocumentQuestionAnsweringPipeline
- call
- all
FeatureExtractionPipeline
[[autodoc]] FeatureExtractionPipeline
- call
- all
ImageFeatureExtractionPipeline
[[autodoc]] ImageFeatureExtractionPipeline
- call
- all
ImageToTextPipeline
[[autodoc]] ImageToTextPipeline
- call
- all
MaskGenerationPipeline
[[autodoc]] MaskGenerationPipeline
- call
- all
VisualQuestionAnsweringPipeline
[[autodoc]] VisualQuestionAnsweringPipeline
- call
- all
Parent class: Pipeline
[[autodoc]] Pipeline |
Keras callbacks
When training a Transformers model with Keras, there are some library-specific callbacks available to automate common
tasks:
KerasMetricCallback
[[autodoc]] KerasMetricCallback
PushToHubCallback
[[autodoc]] PushToHubCallback |
Model outputs
All models have outputs that are instances of subclasses of [~utils.ModelOutput]. Those are
data structures containing all the information returned by the model, but that can also be used as tuples or
dictionaries.
Let's see how this looks in an example:
thon
from transformers import BertTokenizer, BertForSequenceClassification
import torch
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("google-bert/bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = model(**inputs, labels=labels)
The outputs object is a [~modeling_outputs.SequenceClassifierOutput], as we can see in the
documentation of that class below, it means it has an optional loss, a logits, an optional hidden_states and
an optional attentions attribute. Here we have the loss since we passed along labels, but we don't have
hidden_states and attentions because we didn't pass output_hidden_states=True or
output_attentions=True.
When passing output_hidden_states=True you may expect the outputs.hidden_states[-1] to match outputs.last_hidden_state exactly.
However, this is not always the case. Some models apply normalization or other subsequent processing to the last hidden state when it's returned.
You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you
will get None. Here for instance outputs.loss is the loss computed by the model, and outputs.attentions is
None.
When considering our outputs object as a tuple, it only considers the attributes that don't have None values.
Here, for instance, it has two elements, loss then logits, so
python
outputs[:2]
will return the tuple (outputs.loss, outputs.logits).
When considering our outputs object as a dictionary, it only considers the attributes that don't have None
values. Here, for instance, it has two keys, loss and logits.
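Continuing the example above, dictionary-style access and conversion to a plain tuple look like this:
python
print(outputs["loss"] is outputs.loss)  # True: string keys work for non-None attributes
print(list(outputs.keys()))             # ['loss', 'logits']
loss, logits = outputs.to_tuple()       # keeps only the non-None values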
We document here the generic model outputs that are used by more than one model type. Specific output types are
documented on their corresponding model page.
ModelOutput
[[autodoc]] utils.ModelOutput
- to_tuple
BaseModelOutput
[[autodoc]] modeling_outputs.BaseModelOutput
BaseModelOutputWithPooling
[[autodoc]] modeling_outputs.BaseModelOutputWithPooling
BaseModelOutputWithCrossAttentions
[[autodoc]] modeling_outputs.BaseModelOutputWithCrossAttentions
BaseModelOutputWithPoolingAndCrossAttentions
[[autodoc]] modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions
BaseModelOutputWithPast
[[autodoc]] modeling_outputs.BaseModelOutputWithPast
BaseModelOutputWithPastAndCrossAttentions
[[autodoc]] modeling_outputs.BaseModelOutputWithPastAndCrossAttentions
Seq2SeqModelOutput
[[autodoc]] modeling_outputs.Seq2SeqModelOutput
CausalLMOutput
[[autodoc]] modeling_outputs.CausalLMOutput
CausalLMOutputWithCrossAttentions
[[autodoc]] modeling_outputs.CausalLMOutputWithCrossAttentions
CausalLMOutputWithPast
[[autodoc]] modeling_outputs.CausalLMOutputWithPast
MaskedLMOutput
[[autodoc]] modeling_outputs.MaskedLMOutput
Seq2SeqLMOutput
[[autodoc]] modeling_outputs.Seq2SeqLMOutput
NextSentencePredictorOutput
[[autodoc]] modeling_outputs.NextSentencePredictorOutput
SequenceClassifierOutput
[[autodoc]] modeling_outputs.SequenceClassifierOutput
Seq2SeqSequenceClassifierOutput
[[autodoc]] modeling_outputs.Seq2SeqSequenceClassifierOutput
MultipleChoiceModelOutput
[[autodoc]] modeling_outputs.MultipleChoiceModelOutput
TokenClassifierOutput
[[autodoc]] modeling_outputs.TokenClassifierOutput
QuestionAnsweringModelOutput
[[autodoc]] modeling_outputs.QuestionAnsweringModelOutput
Seq2SeqQuestionAnsweringModelOutput
[[autodoc]] modeling_outputs.Seq2SeqQuestionAnsweringModelOutput
Seq2SeqSpectrogramOutput
[[autodoc]] modeling_outputs.Seq2SeqSpectrogramOutput
SemanticSegmenterOutput
[[autodoc]] modeling_outputs.SemanticSegmenterOutput
ImageClassifierOutput
[[autodoc]] modeling_outputs.ImageClassifierOutput
ImageClassifierOutputWithNoAttention
[[autodoc]] modeling_outputs.ImageClassifierOutputWithNoAttention
DepthEstimatorOutput
[[autodoc]] modeling_outputs.DepthEstimatorOutput
Wav2Vec2BaseModelOutput
[[autodoc]] modeling_outputs.Wav2Vec2BaseModelOutput
XVectorOutput
[[autodoc]] modeling_outputs.XVectorOutput
Seq2SeqTSModelOutput
[[autodoc]] modeling_outputs.Seq2SeqTSModelOutput
Seq2SeqTSPredictionOutput
[[autodoc]] modeling_outputs.Seq2SeqTSPredictionOutput
SampleTSPredictionOutput
[[autodoc]] modeling_outputs.SampleTSPredictionOutput
TFBaseModelOutput
[[autodoc]] modeling_tf_outputs.TFBaseModelOutput
TFBaseModelOutputWithPooling
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPooling
TFBaseModelOutputWithPoolingAndCrossAttentions
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions
TFBaseModelOutputWithPast
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPast
TFBaseModelOutputWithPastAndCrossAttentions
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions
TFSeq2SeqModelOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqModelOutput
TFCausalLMOutput
[[autodoc]] modeling_tf_outputs.TFCausalLMOutput
TFCausalLMOutputWithCrossAttentions
[[autodoc]] modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions
TFCausalLMOutputWithPast
[[autodoc]] modeling_tf_outputs.TFCausalLMOutputWithPast
TFMaskedLMOutput
[[autodoc]] modeling_tf_outputs.TFMaskedLMOutput
TFSeq2SeqLMOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqLMOutput
TFNextSentencePredictorOutput
[[autodoc]] modeling_tf_outputs.TFNextSentencePredictorOutput
TFSequenceClassifierOutput
[[autodoc]] modeling_tf_outputs.TFSequenceClassifierOutput
TFSeq2SeqSequenceClassifierOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqSequenceClassifierOutput
TFMultipleChoiceModelOutput
[[autodoc]] modeling_tf_outputs.TFMultipleChoiceModelOutput
TFTokenClassifierOutput
[[autodoc]] modeling_tf_outputs.TFTokenClassifierOutput
TFQuestionAnsweringModelOutput
[[autodoc]] modeling_tf_outputs.TFQuestionAnsweringModelOutput
TFSeq2SeqQuestionAnsweringModelOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqQuestionAnsweringModelOutput
FlaxBaseModelOutput
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutput
FlaxBaseModelOutputWithPast
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPast
FlaxBaseModelOutputWithPooling
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPooling
FlaxBaseModelOutputWithPastAndCrossAttentions
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions
FlaxSeq2SeqModelOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqModelOutput
FlaxCausalLMOutputWithCrossAttentions
[[autodoc]] modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions
FlaxMaskedLMOutput
[[autodoc]] modeling_flax_outputs.FlaxMaskedLMOutput
FlaxSeq2SeqLMOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqLMOutput
FlaxNextSentencePredictorOutput
[[autodoc]] modeling_flax_outputs.FlaxNextSentencePredictorOutput
FlaxSequenceClassifierOutput
[[autodoc]] modeling_flax_outputs.FlaxSequenceClassifierOutput
FlaxSeq2SeqSequenceClassifierOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput
FlaxMultipleChoiceModelOutput
[[autodoc]] modeling_flax_outputs.FlaxMultipleChoiceModelOutput
FlaxTokenClassifierOutput
[[autodoc]] modeling_flax_outputs.FlaxTokenClassifierOutput
FlaxQuestionAnsweringModelOutput
[[autodoc]] modeling_flax_outputs.FlaxQuestionAnsweringModelOutput
FlaxSeq2SeqQuestionAnsweringModelOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput |
Processors
Processors can mean two different things in the Transformers library:
- the objects that pre-process inputs for multi-modal models such as Wav2Vec2 (speech and text)
or CLIP (text and vision)
- deprecated objects that were used in older versions of the library to preprocess data for GLUE or SQuAD.
Multi-modal processors
Any multi-modal model will require an object to encode or decode the data that groups several modalities (among text,
vision and audio). This is handled by objects called processors, which group together two or more processing objects
such as tokenizers (for the text modality), image processors (for vision) and feature extractors (for audio).
Those processors inherit from the following base class that implements the saving and loading functionality:
[[autodoc]] ProcessorMixin
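For example, CLIP groups an image processor and a tokenizer into a single processor; a minimal sketch (the image URL is the standard COCO example used elsewhere in the docs):
python
import requests
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat"], images=image, return_tensors="pt", padding=True)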
Deprecated processors
All processors follow the same architecture which is that of the
[~data.processors.utils.DataProcessor]. The processor returns a list of
[~data.processors.utils.InputExample]. These
[~data.processors.utils.InputExample] can be converted to
[~data.processors.utils.InputFeatures] in order to be fed to the model.
[[autodoc]] data.processors.utils.DataProcessor
[[autodoc]] data.processors.utils.InputExample
[[autodoc]] data.processors.utils.InputFeatures
GLUE
General Language Understanding Evaluation (GLUE) is a benchmark that evaluates the
performance of models across a diverse set of existing NLU tasks. It was released together with the paper GLUE: A
multi-task benchmark and analysis platform for natural language understanding
This library hosts a total of 10 processors for the following tasks: MRPC, MNLI, MNLI (mismatched), CoLA, SST2, STSB,
QQP, QNLI, RTE and WNLI.
Those processors are:
[~data.processors.utils.MrpcProcessor]
[~data.processors.utils.MnliProcessor]
[~data.processors.utils.MnliMismatchedProcessor]
[~data.processors.utils.Sst2Processor]
[~data.processors.utils.StsbProcessor]
[~data.processors.utils.QqpProcessor]
[~data.processors.utils.QnliProcessor]
[~data.processors.utils.RteProcessor]
[~data.processors.utils.WnliProcessor]
Additionally, the following method can be used to load values from a data file and convert them to a list of
[~data.processors.utils.InputExample].
[[autodoc]] data.processors.glue.glue_convert_examples_to_features
XNLI
The Cross-Lingual NLI Corpus (XNLI) is a benchmark that evaluates the
quality of cross-lingual text representations. XNLI is a crowd-sourced dataset based on MultiNLI: pairs of text are labeled with textual entailment annotations for 15
different languages (including both high-resource languages such as English and low-resource languages such as Swahili).
It was released together with the paper XNLI: Evaluating Cross-lingual Sentence Representations
This library hosts the processor to load the XNLI data:
[~data.processors.utils.XnliProcessor]
Please note that since the gold labels are available on the test set, evaluation is performed on the test set.
An example using these processors is given in the run_xnli.py script.
SQuAD
The Stanford Question Answering Dataset (SQuAD) is a benchmark that
evaluates the performance of models on question answering. Two versions are available, v1.1 and v2.0. The first version
(v1.1) was released together with the paper SQuAD: 100,000+ Questions for Machine Comprehension of Text. The second version (v2.0) was released alongside the paper Know What You Don't
Know: Unanswerable Questions for SQuAD.
This library hosts a processor for each of the two versions:
Processors
Those processors are:
[~data.processors.utils.SquadV1Processor]
[~data.processors.utils.SquadV2Processor]
They both inherit from the abstract class [~data.processors.utils.SquadProcessor]
[[autodoc]] data.processors.squad.SquadProcessor
- all
Additionally, the following method can be used to convert SQuAD examples into
[~data.processors.utils.SquadFeatures] that can be used as model inputs.
[[autodoc]] data.processors.squad.squad_convert_examples_to_features
These processors as well as the aforementioned method can be used with files containing the data as well as with the
tensorflow_datasets package. Examples are given below.
Example usage
Here is an example using the processors as well as the conversion method using data files:
thon
# Loading a V2 processor
processor = SquadV2Processor()
examples = processor.get_dev_examples(squad_v2_data_dir)
# Loading a V1 processor
processor = SquadV1Processor()
examples = processor.get_dev_examples(squad_v1_data_dir)
features = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=args.doc_stride,
max_query_length=max_query_length,
is_training=not evaluate,
)
Using tensorflow_datasets is as easy as using a data file:
thon
# tensorflow_datasets only handles Squad V1.
tfds_examples = tfds.load("squad")
examples = SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=evaluate)
features = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=args.doc_stride,
max_query_length=max_query_length,
is_training=not evaluate,
)
Another example using these processors is given in the run_squad.py script. |
Trainer
The [Trainer] class provides an API for feature-complete training in PyTorch, and it supports distributed training on multiple GPUs/TPUs, mixed precision for NVIDIA GPUs, AMD GPUs, and torch.amp for PyTorch. [Trainer] goes hand-in-hand with the [TrainingArguments] class, which offers a wide range of options to customize how a model is trained. Together, these two classes provide a complete training API.
[Seq2SeqTrainer] and [Seq2SeqTrainingArguments] inherit from the [Trainer] and [TrainingArguments] classes and they're adapted for training models for sequence-to-sequence tasks such as summarization or translation.
The [Trainer] class is optimized for 🤗 Transformers models and can have surprising behaviors
when used with other models. When using it with your own model, make sure:
your model always returns tuples or subclasses of [~utils.ModelOutput],
your model can compute the loss if a labels argument is provided and that loss is returned as the first
element of the tuple (if your model returns tuples), and
your model can accept multiple label arguments (use label_names in [TrainingArguments] to indicate their name to the [Trainer]) but none of them should be named "label" (a minimal setup is sketched below).
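A minimal sketch of the [Trainer] / [TrainingArguments] pair (the IMDb slice and the tokenization step are only for illustration):
python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased")

dataset = load_dataset("imdb", split="train[:100]")
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

training_args = TrainingArguments(output_dir="out", per_device_train_batch_size=8, num_train_epochs=1)
trainer = Trainer(model=model, args=training_args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()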
Trainer[[api-reference]]
[[autodoc]] Trainer
- all
Seq2SeqTrainer
[[autodoc]] Seq2SeqTrainer
- evaluate
- predict
TrainingArguments
[[autodoc]] TrainingArguments
- all
Seq2SeqTrainingArguments
[[autodoc]] Seq2SeqTrainingArguments
- all |
Data Collator
Data collators are objects that will form a batch by using a list of dataset elements as input. These elements are of
the same type as the elements of train_dataset or eval_dataset.
To be able to build batches, data collators may apply some processing (like padding). Some of them (like
[DataCollatorForLanguageModeling]) also apply some random data augmentation (like random masking)
on the formed batch.
Examples of use can be found in the example scripts or example notebooks.
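A minimal sketch with [DataCollatorWithPadding], which pads a list of tokenized examples to the same length:
python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

features = [tokenizer("Hello world"), tokenizer("A somewhat longer sentence to pad against")]
batch = data_collator(features)
print(batch["input_ids"].shape)  # both examples padded to the same length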
Default data collator
[[autodoc]] data.data_collator.default_data_collator
DefaultDataCollator
[[autodoc]] data.data_collator.DefaultDataCollator
DataCollatorWithPadding
[[autodoc]] data.data_collator.DataCollatorWithPadding
DataCollatorForTokenClassification
[[autodoc]] data.data_collator.DataCollatorForTokenClassification
DataCollatorForSeq2Seq
[[autodoc]] data.data_collator.DataCollatorForSeq2Seq
DataCollatorForLanguageModeling
[[autodoc]] data.data_collator.DataCollatorForLanguageModeling
- numpy_mask_tokens
- tf_mask_tokens
- torch_mask_tokens
DataCollatorForWholeWordMask
[[autodoc]] data.data_collator.DataCollatorForWholeWordMask
- numpy_mask_tokens
- tf_mask_tokens
- torch_mask_tokens
DataCollatorForPermutationLanguageModeling
[[autodoc]] data.data_collator.DataCollatorForPermutationLanguageModeling
- numpy_mask_tokens
- tf_mask_tokens
- torch_mask_tokens |
DeepSpeed
DeepSpeed, powered by Zero Redundancy Optimizer (ZeRO), is an optimization library for training and fitting very large models onto a GPU. It is available in several ZeRO stages, where each stage progressively saves more GPU memory by partitioning the optimizer state, gradients, parameters, and enabling offloading to a CPU or NVMe. DeepSpeed is integrated with the [Trainer] class and most of the setup is automatically taken care of for you.
However, if you want to use DeepSpeed without the [Trainer], Transformers provides a [HfDeepSpeedConfig] class.
Learn more about using DeepSpeed with [Trainer] in the DeepSpeed guide.
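A minimal sketch of the non-[Trainer] path (the DeepSpeed config below is abbreviated; a real ZeRO-3 config needs more fields, and the deepspeed package must be installed):
python
import deepspeed
from transformers import AutoModel
from transformers.integrations import HfDeepSpeedConfig

ds_config = {"zero_optimization": {"stage": 3}, "train_micro_batch_size_per_gpu": 1}  # abbreviated

# must be created (and kept alive) *before* the model so ZeRO-3 partitioning is picked up by from_pretrained
dschf = HfDeepSpeedConfig(ds_config)
model = AutoModel.from_pretrained("openai-community/gpt2")
engine, _, _, _ = deepspeed.initialize(model=model, config_params=ds_config)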
HfDeepSpeedConfig
[[autodoc]] integrations.HfDeepSpeedConfig
- all |
Configuration
The base class [PretrainedConfig] implements the common methods for loading/saving a configuration
either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded
from HuggingFace's AWS S3 repository).
Each derived config class implements model specific attributes. Common attributes present in all config classes are:
hidden_size, num_attention_heads, and num_hidden_layers. Text models further implement:
vocab_size.
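A minimal sketch:
python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("google-bert/bert-base-uncased")
print(config.hidden_size, config.num_attention_heads, config.num_hidden_layers, config.vocab_size)

# attributes can be overridden at load time and the result saved for later reuse
config = AutoConfig.from_pretrained("google-bert/bert-base-uncased", num_hidden_layers=6)
config.save_pretrained("my-bert-config")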
PretrainedConfig
[[autodoc]] PretrainedConfig
- push_to_hub
- all |
Logging
🤗 Transformers has a centralized logging system, so that you can easily set up the verbosity of the library.
Currently the default verbosity of the library is WARNING.
To change the level of verbosity, just use one of the direct setters. For instance, here is how to change the verbosity
to the INFO level.
thon
import transformers
transformers.logging.set_verbosity_info()
You can also use the environment variable TRANSFORMERS_VERBOSITY to override the default verbosity. You can set it
to one of the following: debug, info, warning, error, critical. For example:
TRANSFORMERS_VERBOSITY=error ./myprogram.py
Additionally, some warnings can be disabled by setting the environment variable
TRANSFORMERS_NO_ADVISORY_WARNINGS to a true value, like 1. This will disable any warning that is logged using
[logger.warning_advice]. For example:
TRANSFORMERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py
Here is an example of how to use the same logger as the library in your own module or script:
thon
from transformers.utils import logging
logging.set_verbosity_info()
logger = logging.get_logger("transformers")
logger.info("INFO")
logger.warning("WARN")
All the methods of this logging module are documented below, the main ones are
[logging.get_verbosity] to get the current level of verbosity in the logger and
[logging.set_verbosity] to set the verbosity to the level of your choice. In order (from the least
verbose to the most verbose), those levels (with their corresponding int values in parenthesis) are:
transformers.logging.CRITICAL or transformers.logging.FATAL (int value, 50): only report the most
critical errors.
transformers.logging.ERROR (int value, 40): only report errors.
transformers.logging.WARNING or transformers.logging.WARN (int value, 30): only reports errors and
warnings. This is the default level used by the library.
transformers.logging.INFO (int value, 20): reports errors, warnings and basic information.
transformers.logging.DEBUG (int value, 10): reports all information.
By default, tqdm progress bars will be displayed during model download. [logging.disable_progress_bar] and [logging.enable_progress_bar] can be used to suppress or unsuppress this behavior.
logging vs warnings
Python has two logging systems that are often used in conjunction: logging, which is explained above, and warnings,
which allows further classification of warnings in specific buckets, e.g., FutureWarning for a feature or path
that has already been deprecated and DeprecationWarning to indicate an upcoming deprecation.
We use both in the transformers library. We leverage and adapt logging's captureWarnings method to allow
management of these warning messages by the verbosity setters above.
What does that mean for developers of the library? We should respect the following heuristic:
- warnings should be favored for developers of the library and libraries dependent on transformers
- logging should be used for end-users of the library using it in every-day projects
See reference of the captureWarnings method below.
[[autodoc]] logging.captureWarnings
Base setters
[[autodoc]] logging.set_verbosity_error
[[autodoc]] logging.set_verbosity_warning
[[autodoc]] logging.set_verbosity_info
[[autodoc]] logging.set_verbosity_debug
Other functions
[[autodoc]] logging.get_verbosity
[[autodoc]] logging.set_verbosity
[[autodoc]] logging.get_logger
[[autodoc]] logging.enable_default_handler
[[autodoc]] logging.disable_default_handler
[[autodoc]] logging.enable_explicit_format
[[autodoc]] logging.reset_format
[[autodoc]] logging.enable_progress_bar
[[autodoc]] logging.disable_progress_bar |
Image Processor
An image processor is in charge of preparing input features for vision models and post-processing their outputs. This includes transformations such as resizing, normalization, and conversion to PyTorch, TensorFlow, Flax and NumPy tensors. It may also include model-specific post-processing such as converting logits to segmentation masks.
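A minimal sketch (the image URL is the standard COCO example used elsewhere in the docs):
python
import requests
from PIL import Image
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")
print(inputs.pixel_values.shape)  # (1, 3, 224, 224)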
ImageProcessingMixin
[[autodoc]] image_processing_utils.ImageProcessingMixin
- from_pretrained
- save_pretrained
BatchFeature
[[autodoc]] BatchFeature
BaseImageProcessor
[[autodoc]] image_processing_utils.BaseImageProcessor |
Callbacks
Callbacks are objects that can customize the behavior of the training loop in the PyTorch
[Trainer] (this feature is not yet implemented in TensorFlow) that can inspect the training loop
state (for progress reporting, logging on TensorBoard or other ML platforms) and take decisions (like early
stopping).
Callbacks are "read only" pieces of code, apart from the [TrainerControl] object they return, they
cannot change anything in the training loop. For customizations that require changes in the training loop, you should
subclass [Trainer] and override the methods you need (see trainer for examples).
By default, TrainingArguments.report_to is set to "all", so a [Trainer] will use the following callbacks.
[DefaultFlowCallback] which handles the default behavior for logging, saving and evaluation.
[PrinterCallback] or [ProgressCallback] to display progress and print the
logs (the first one is used if you deactivate tqdm through the [TrainingArguments], otherwise
it's the second one).
[~integrations.TensorBoardCallback] if tensorboard is accessible (either through PyTorch >= 1.4
or tensorboardX).
[~integrations.WandbCallback] if wandb is installed.
[~integrations.CometCallback] if comet_ml is installed.
[~integrations.MLflowCallback] if mlflow is installed.
[~integrations.NeptuneCallback] if neptune is installed.
[~integrations.AzureMLCallback] if azureml-sdk is
installed.
[~integrations.CodeCarbonCallback] if codecarbon is
installed.
[~integrations.ClearMLCallback] if clearml is installed.
[~integrations.DagsHubCallback] if dagshub is installed.
[~integrations.FlyteCallback] if flyte is installed.
[~integrations.DVCLiveCallback] if dvclive is installed.
If a package is installed but you don't wish to use the accompanying integration, you can change TrainingArguments.report_to to a list of just those integrations you want to use (e.g. ["azure_ml", "wandb"]).
The main class that implements callbacks is [TrainerCallback]. It gets the
[TrainingArguments] used to instantiate the [Trainer], can access that
Trainer's internal state via [TrainerState], and can take some actions on the training loop via
[TrainerControl].
Available Callbacks
Here is the list of the available [TrainerCallback] in the library:
[[autodoc]] integrations.CometCallback
- setup
[[autodoc]] DefaultFlowCallback
[[autodoc]] PrinterCallback
[[autodoc]] ProgressCallback
[[autodoc]] EarlyStoppingCallback
[[autodoc]] integrations.TensorBoardCallback
[[autodoc]] integrations.WandbCallback
- setup
[[autodoc]] integrations.MLflowCallback
- setup
[[autodoc]] integrations.AzureMLCallback
[[autodoc]] integrations.CodeCarbonCallback
[[autodoc]] integrations.NeptuneCallback
[[autodoc]] integrations.ClearMLCallback
[[autodoc]] integrations.DagsHubCallback
[[autodoc]] integrations.FlyteCallback
[[autodoc]] integrations.DVCLiveCallback
- setup
TrainerCallback
[[autodoc]] TrainerCallback
Here is an example of how to register a custom callback with the PyTorch [Trainer]:
thon
class MyCallback(TrainerCallback):
    "A callback that prints a message at the beginning of training"

    def on_train_begin(self, args, state, control, **kwargs):
        print("Starting training")

trainer = Trainer(
    model,
    args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    callbacks=[MyCallback],  # We can either pass the callback class this way or an instance of it (MyCallback())
)
Another way to register a callback is to call trainer.add_callback() as follows:
thon
trainer = Trainer()
trainer.add_callback(MyCallback)

# Alternatively, we can pass an instance of the callback class
trainer.add_callback(MyCallback())
TrainerState
[[autodoc]] TrainerState
TrainerControl
[[autodoc]] TrainerControl |
Backbone
A backbone is a model used for feature extraction for higher level computer vision tasks such as object detection and image classification. Transformers provides an [AutoBackbone] class for initializing a Transformers backbone from pretrained model weights, and two utility classes:
[~utils.BackboneMixin] enables initializing a backbone from Transformers or timm and includes functions for returning the output features and indices.
[~utils.BackboneConfigMixin] sets the output features and indices of the backbone configuration.
timm models are loaded with the [TimmBackbone] and [TimmBackboneConfig] classes.
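A minimal sketch (the random pixel_values tensor only illustrates the expected input shape):
python
import torch
from transformers import AutoBackbone

# out_indices selects which stages to return as feature maps
backbone = AutoBackbone.from_pretrained("microsoft/resnet-50", out_indices=(1, 2, 3, 4))
pixel_values = torch.randn(1, 3, 224, 224)
outputs = backbone(pixel_values)
print([feature_map.shape for feature_map in outputs.feature_maps])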
Backbones are supported for the following models:
BEiT
BiT
ConvNext
ConvNextV2
DiNAT
DINOV2
FocalNet
MaskFormer
NAT
ResNet
Swin Transformer
Swin Transformer v2
ViTDet
AutoBackbone
[[autodoc]] AutoBackbone
BackboneMixin
[[autodoc]] utils.BackboneMixin
BackboneConfigMixin
[[autodoc]] utils.BackboneConfigMixin
TimmBackbone
[[autodoc]] models.timm_backbone.TimmBackbone
TimmBackboneConfig
[[autodoc]] models.timm_backbone.TimmBackboneConfig |
Quantization
Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types like 8-bit integers (int8). This enables loading larger models you normally wouldn't be able to fit into memory, and speeding up inference. Transformers supports the AWQ and GPTQ quantization algorithms, and it supports 8-bit and 4-bit quantization with bitsandbytes.
Quantization techniques that aren't supported in Transformers can be added with the [HfQuantizer] class.
Learn how to quantize models in the Quantization guide.
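For example, 4-bit loading with bitsandbytes (which requires a CUDA GPU and the bitsandbytes package) is configured like this; a minimal sketch:
python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m", quantization_config=quantization_config, device_map="auto"
)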
AqlmConfig
[[autodoc]] AqlmConfig
AwqConfig
[[autodoc]] AwqConfig
GPTQConfig
[[autodoc]] GPTQConfig
BitsAndBytesConfig
[[autodoc]] BitsAndBytesConfig
HfQuantizer
[[autodoc]] quantizers.base.HfQuantizer |
Exporting 🤗 Transformers models to ONNX
🤗 Transformers provides a transformers.onnx package that enables you to
convert model checkpoints to an ONNX graph by leveraging configuration objects.
See the guide on exporting 🤗 Transformers models for more
details.
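A minimal sketch based on the export guide: model-specific configurations such as DistilBertOnnxConfig describe the expected inputs and outputs of the exported graph:
python
from transformers import DistilBertConfig
from transformers.models.distilbert import DistilBertOnnxConfig

config = DistilBertConfig()
onnx_config = DistilBertOnnxConfig(config)
print(list(onnx_config.inputs.keys()))   # e.g. ['input_ids', 'attention_mask']
print(onnx_config.default_onnx_opset)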
ONNX Configurations
We provide three abstract classes that you should inherit from, depending on the
type of model architecture you wish to export:
Encoder-based models inherit from [~onnx.config.OnnxConfig]
Decoder-based models inherit from [~onnx.config.OnnxConfigWithPast]
Encoder-decoder models inherit from [~onnx.config.OnnxSeq2SeqConfigWithPast]
OnnxConfig
[[autodoc]] onnx.config.OnnxConfig
OnnxConfigWithPast
[[autodoc]] onnx.config.OnnxConfigWithPast
OnnxSeq2SeqConfigWithPast
[[autodoc]] onnx.config.OnnxSeq2SeqConfigWithPast
ONNX Features
Each ONNX configuration is associated with a set of features that enable you
to export models for different types of topologies or tasks.
FeaturesManager
[[autodoc]] onnx.features.FeaturesManager |