# Models

The base classes [PreTrainedModel], [TFPreTrainedModel], and [FlaxPreTrainedModel] implement the common methods for loading/saving a model either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS S3 repository).

[PreTrainedModel] and [TFPreTrainedModel] also implement a few methods which are common among all the models to:

- resize the input token embeddings when new tokens are added to the vocabulary
- prune the attention heads of the model.

The other methods that are common to each model are defined in [~modeling_utils.ModuleUtilsMixin] (for the PyTorch models) and [~modeling_tf_utils.TFModelUtilsMixin] (for the TensorFlow models) or, for text generation, [~generation.GenerationMixin] (for the PyTorch models), [~generation.TFGenerationMixin] (for the TensorFlow models) and [~generation.FlaxGenerationMixin] (for the Flax/JAX models).

## PreTrainedModel

[[autodoc]] PreTrainedModel
    - push_to_hub
    - all

### Large model loading

In Transformers 4.20.0, the [~PreTrainedModel.from_pretrained] method was reworked to accommodate large models using Accelerate. This requires Accelerate >= 0.9.0 and PyTorch >= 1.9.0.

Instead of creating the full model and then loading the pretrained weights inside it (which takes twice the size of the model in RAM: one copy for the randomly initialized model and one for the weights), there is an option to create the model as an empty shell and only materialize its parameters when the pretrained weights are loaded. This option can be activated with `low_cpu_mem_usage=True`. The model is first created on the meta device (with empty weights) and the state dict is then loaded inside it (shard by shard in the case of a sharded checkpoint). This way the maximum RAM used is the full size of the model only.

```py
from transformers import AutoModelForSeq2SeqLM

t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", low_cpu_mem_usage=True)
```

Moreover, you can directly place the model on different devices if it doesn't fully fit in RAM (this only works for inference for now). With `device_map="auto"`, Accelerate will determine where to put each layer to maximize the use of your fastest devices (GPUs) and offload the rest on the CPU, or even the hard drive if you don't have enough GPU RAM (or CPU RAM). Even if the model is split across several devices, it will run as you would normally expect.
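If the checkpoint weights are also stored in half precision, device placement can be combined with a reduced-precision dtype (the `torch_dtype` argument is described further below). The following is a minimal sketch, not taken from the original docs; the checkpoint name is only an example:

```python
import torch
from transformers import AutoModelForSeq2SeqLM

# Shard the model across available GPUs (and CPU if needed) and load the
# weights in float16 to roughly halve their memory footprint.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "bigscience/T0pp",          # example checkpoint
    device_map="auto",          # implies low_cpu_mem_usage=True
    torch_dtype=torch.float16,
)
```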
When passing a `device_map`, `low_cpu_mem_usage` is automatically set to `True`, so you don't need to specify it:

```py
from transformers import AutoModelForSeq2SeqLM

t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
```

You can inspect how the model was split across devices by looking at its `hf_device_map` attribute:

```py
t0pp.hf_device_map
```

```python out
{'shared': 0,
 'decoder.embed_tokens': 0,
 'encoder': 0,
 'decoder.block.0': 0,
 'decoder.block.1': 1,
 'decoder.block.2': 1,
 'decoder.block.3': 1,
 'decoder.block.4': 1,
 'decoder.block.5': 1,
 'decoder.block.6': 1,
 'decoder.block.7': 1,
 'decoder.block.8': 1,
 'decoder.block.9': 1,
 'decoder.block.10': 1,
 'decoder.block.11': 1,
 'decoder.block.12': 1,
 'decoder.block.13': 1,
 'decoder.block.14': 1,
 'decoder.block.15': 1,
 'decoder.block.16': 1,
 'decoder.block.17': 1,
 'decoder.block.18': 1,
 'decoder.block.19': 1,
 'decoder.block.20': 1,
 'decoder.block.21': 1,
 'decoder.block.22': 'cpu',
 'decoder.block.23': 'cpu',
 'decoder.final_layer_norm': 'cpu',
 'decoder.dropout': 'cpu',
 'lm_head': 'cpu'}
```

You can also write your own device map following the same format (a dictionary mapping layer names to devices). It should map all parameters of the model to a given device, but you don't have to detail where all the submodules of one layer go if that layer is entirely on the same device. For instance, the following device map would work properly for T0pp (as long as you have the GPU memory):

```python
device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1}
```

Another way to minimize the memory impact of your model is to instantiate it at a lower precision dtype (like `torch.float16`) or use direct quantization techniques as described below.

### Model Instantiation dtype

Under PyTorch a model normally gets instantiated with `torch.float32` format. This can be an issue if one tries to load a model whose weights are in fp16, since it'd require twice as much memory. To overcome this limitation, you can either explicitly pass the desired dtype using the `torch_dtype` argument:

```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
```

or, if you want the model to always load in the most optimal memory pattern, you can use the special value `"auto"`, and then dtype will be automatically derived from the model's weights:

```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
```

Models instantiated from scratch can also be told which dtype to use with:

```python
config = T5Config.from_pretrained("t5")
model = AutoModel.from_config(config)
```

Due to PyTorch design, this functionality is only available for floating dtypes.

## ModuleUtilsMixin

[[autodoc]] modeling_utils.ModuleUtilsMixin

## TFPreTrainedModel

[[autodoc]] TFPreTrainedModel
    - push_to_hub
    - all

## TFModelUtilsMixin

[[autodoc]] modeling_tf_utils.TFModelUtilsMixin

## FlaxPreTrainedModel

[[autodoc]] FlaxPreTrainedModel
    - push_to_hub
    - all

## Pushing to the Hub

[[autodoc]] utils.PushToHubMixin

## Sharded checkpoints

[[autodoc]] modeling_utils.load_sharded_checkpoint
# ConvBERT

## Overview

The ConvBERT model was proposed in ConvBERT: Improving BERT with Span-based Dynamic Convolution by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.

The abstract from the paper is the following:

*Pre-trained language models like BERT and its variants have recently achieved impressive performance in various natural language understanding tasks. However, BERT heavily relies on the global self-attention block and thus suffers large memory footprint and computation cost. Although all its attention heads query on the whole input sequence for generating the attention map from a global perspective, we observe some heads only need to learn local dependencies, which means the existence of computation redundancy. We therefore propose a novel span-based dynamic convolution to replace these self-attention heads to directly model local dependencies. The novel convolution heads, together with the rest self-attention heads, form a new mixed attention block that is more efficient at both global and local context learning. We equip BERT with this mixed attention design and build a ConvBERT model. Experiments have shown that ConvBERT significantly outperforms BERT and its variants in various downstream tasks, with lower training cost and fewer model parameters. Remarkably, ConvBERTbase model achieves 86.4 GLUE score, 0.7 higher than ELECTRAbase, while using less than 1/4 training cost. Code and pre-trained models will be released.*

ConvBERT training tips are similar to those of BERT.

This model was contributed by abhishek. The original implementation can be found here: https://github.com/yitu-opensource/ConvBert

## Documentation resources

- Text classification task guide
- Token classification task guide
- Question answering task guide
- Masked language modeling task guide
- Multiple choice task guide

## ConvBertConfig

[[autodoc]] ConvBertConfig

## ConvBertTokenizer

[[autodoc]] ConvBertTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## ConvBertTokenizerFast

[[autodoc]] ConvBertTokenizerFast

## ConvBertModel

[[autodoc]] ConvBertModel
    - forward

## ConvBertForMaskedLM

[[autodoc]] ConvBertForMaskedLM
    - forward

## ConvBertForSequenceClassification

[[autodoc]] ConvBertForSequenceClassification
    - forward

## ConvBertForMultipleChoice

[[autodoc]] ConvBertForMultipleChoice
    - forward

## ConvBertForTokenClassification

[[autodoc]] ConvBertForTokenClassification
    - forward

## ConvBertForQuestionAnswering

[[autodoc]] ConvBertForQuestionAnswering
    - forward

## TFConvBertModel

[[autodoc]] TFConvBertModel
    - call

## TFConvBertForMaskedLM

[[autodoc]] TFConvBertForMaskedLM
    - call

## TFConvBertForSequenceClassification

[[autodoc]] TFConvBertForSequenceClassification
    - call

## TFConvBertForMultipleChoice

[[autodoc]] TFConvBertForMultipleChoice
    - call

## TFConvBertForTokenClassification

[[autodoc]] TFConvBertForTokenClassification
    - call

## TFConvBertForQuestionAnswering

[[autodoc]] TFConvBertForQuestionAnswering
    - call
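## Usage example

The following is a minimal sketch (not part of the original page) of extracting hidden states with the PyTorch model; the checkpoint name "YituTech/conv-bert-base" is assumed to be available on the Hub, and any other ConvBERT checkpoint would work the same way:

```python
import torch
from transformers import AutoTokenizer, ConvBertModel

tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = ConvBertModel.from_pretrained("YituTech/conv-bert-base")

inputs = tokenizer("ConvBERT mixes self-attention with span-based dynamic convolution.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# (batch_size, sequence_length, hidden_size) contextual embeddings
last_hidden_state = outputs.last_hidden_state
```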
# Donut

## Overview

The Donut model was proposed in OCR-free Document Understanding Transformer by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. Donut consists of an image Transformer encoder and an autoregressive text Transformer decoder to perform document understanding tasks such as document image classification, form understanding and visual question answering.

The abstract from the paper is the following:

*Understanding document images (e.g., invoices) is a core but challenging task since it requires complex functions such as reading text and a holistic understanding of the document. Current Visual Document Understanding (VDU) methods outsource the task of reading text to off-the-shelf Optical Character Recognition (OCR) engines and focus on the understanding task with the OCR outputs. Although such OCR-based approaches have shown promising performance, they suffer from 1) high computational costs for using OCR; 2) inflexibility of OCR models on languages or types of document; 3) OCR error propagation to the subsequent process. To address these issues, in this paper, we introduce a novel OCR-free VDU model named Donut, which stands for Document understanding transformer. As the first step in OCR-free VDU research, we propose a simple architecture (i.e., Transformer) with a pre-training objective (i.e., cross-entropy loss). Donut is conceptually simple yet effective. Through extensive experiments and analyses, we show a simple OCR-free VDU model, Donut, achieves state-of-the-art performances on various VDU tasks in terms of both speed and accuracy. In addition, we offer a synthetic data generator that helps the model pre-training to be flexible in various languages and domains.*

*Donut high-level overview. Taken from the original paper.*

This model was contributed by nielsr. The original code can be found here.

Tips:

- The quickest way to get started with Donut is by checking the tutorial notebooks, which show how to use the model at inference time as well as fine-tuning on custom data.
- Donut is always used within the VisionEncoderDecoder framework.

## Inference

Donut's [VisionEncoderDecoder] model accepts images as input and makes use of [~generation.GenerationMixin.generate] to autoregressively generate text given the input image.

The [DonutImageProcessor] class is responsible for preprocessing the input image and [XLMRobertaTokenizer/XLMRobertaTokenizerFast] decodes the generated target tokens to the target string. The [DonutProcessor] wraps [DonutImageProcessor] and [XLMRobertaTokenizer/XLMRobertaTokenizerFast] into a single instance to both extract the input features and decode the predicted token ids.
Step-by-step Document Image Classification

```py
>>> import re

>>> from transformers import DonutProcessor, VisionEncoderDecoderModel
>>> from datasets import load_dataset
>>> import torch

>>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
>>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")

>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model.to(device)  # doctest: +IGNORE_RESULT

>>> # load document image
>>> dataset = load_dataset("hf-internal-testing/example-documents", split="test")
>>> image = dataset[1]["image"]

>>> # prepare decoder inputs
>>> task_prompt = "<s_rvlcdip>"
>>> decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

>>> pixel_values = processor(image, return_tensors="pt").pixel_values

>>> outputs = model.generate(
...     pixel_values.to(device),
...     decoder_input_ids=decoder_input_ids.to(device),
...     max_length=model.decoder.config.max_position_embeddings,
...     early_stopping=True,
...     pad_token_id=processor.tokenizer.pad_token_id,
...     eos_token_id=processor.tokenizer.eos_token_id,
...     use_cache=True,
...     num_beams=1,
...     bad_words_ids=[[processor.tokenizer.unk_token_id]],
...     return_dict_in_generate=True,
... )

>>> sequence = processor.batch_decode(outputs.sequences)[0]
>>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
>>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # remove first task start token
>>> print(processor.token2json(sequence))
{'class': 'advertisement'}
```

Step-by-step Document Parsing

```py
>>> import re

>>> from transformers import DonutProcessor, VisionEncoderDecoderModel
>>> from datasets import load_dataset
>>> import torch

>>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
>>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")

>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model.to(device)  # doctest: +IGNORE_RESULT

>>> # load document image
>>> dataset = load_dataset("hf-internal-testing/example-documents", split="test")
>>> image = dataset[2]["image"]

>>> # prepare decoder inputs
>>> task_prompt = "<s_cord-v2>"
>>> decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

>>> pixel_values = processor(image, return_tensors="pt").pixel_values

>>> outputs = model.generate(
...     pixel_values.to(device),
...     decoder_input_ids=decoder_input_ids.to(device),
...     max_length=model.decoder.config.max_position_embeddings,
...     early_stopping=True,
...     pad_token_id=processor.tokenizer.pad_token_id,
...     eos_token_id=processor.tokenizer.eos_token_id,
...     use_cache=True,
...     num_beams=1,
...     bad_words_ids=[[processor.tokenizer.unk_token_id]],
...     return_dict_in_generate=True,
... )

>>> sequence = processor.batch_decode(outputs.sequences)[0]
>>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
>>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # remove first task start token
>>> print(processor.token2json(sequence))
{'menu': {'nm': 'CINNAMON SUGAR', 'unitprice': '17,000', 'cnt': '1 x', 'price': '17,000'}, 'sub_total': {'subtotal_price': '17,000'}, 'total': {'total_price': '17,000', 'cashprice': '20,000', 'changeprice': '3,000'}}
```

Step-by-step Document Visual Question Answering (DocVQA)

```py
>>> import re

>>> from transformers import DonutProcessor, VisionEncoderDecoderModel
>>> from datasets import load_dataset
>>> import torch

>>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
>>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")

>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model.to(device)  # doctest: +IGNORE_RESULT

>>> # load document image from the DocVQA dataset
>>> dataset = load_dataset("hf-internal-testing/example-documents", split="test")
>>> image = dataset[0]["image"]

>>> # prepare decoder inputs
>>> task_prompt = "<s_docvqa><s_question>{user_input}</s_question><s_answer>"
>>> question = "When is the coffee break?"
>>> prompt = task_prompt.replace("{user_input}", question)
>>> decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids

>>> pixel_values = processor(image, return_tensors="pt").pixel_values

>>> outputs = model.generate(
...     pixel_values.to(device),
...     decoder_input_ids=decoder_input_ids.to(device),
...     max_length=model.decoder.config.max_position_embeddings,
...     early_stopping=True,
...     pad_token_id=processor.tokenizer.pad_token_id,
...     eos_token_id=processor.tokenizer.eos_token_id,
...     use_cache=True,
...     num_beams=1,
...     bad_words_ids=[[processor.tokenizer.unk_token_id]],
...     return_dict_in_generate=True,
... )

>>> sequence = processor.batch_decode(outputs.sequences)[0]
>>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
>>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # remove first task start token
>>> print(processor.token2json(sequence))
{'question': 'When is the coffee break?', 'answer': '11-14 to 11:39 a.m.'}
```

See the model hub to look for Donut checkpoints.

## Training

We refer to the tutorial notebooks.

## DonutSwinConfig

[[autodoc]] DonutSwinConfig

## DonutImageProcessor

[[autodoc]] DonutImageProcessor
    - preprocess

## DonutFeatureExtractor

[[autodoc]] DonutFeatureExtractor
    - __call__

## DonutProcessor

[[autodoc]] DonutProcessor
    - __call__
    - from_pretrained
    - save_pretrained
    - batch_decode
    - decode

## DonutSwinModel

[[autodoc]] DonutSwinModel
    - forward
# Bark

## Overview

Bark is a transformer-based text-to-speech model proposed by Suno AI in suno-ai/bark. Bark is made of 4 main models:

- [BarkSemanticModel] (also referred to as the 'text' model): a causal auto-regressive transformer model that takes as input tokenized text, and predicts semantic text tokens that capture the meaning of the text.
- [BarkCoarseModel] (also referred to as the 'coarse acoustics' model): a causal autoregressive transformer that takes as input the results of the [BarkSemanticModel] model. It aims at predicting the first two audio codebooks necessary for EnCodec.
- [BarkFineModel] (the 'fine acoustics' model), this time a non-causal autoencoder transformer, which iteratively predicts the last codebooks based on the sum of the previous codebook embeddings.
- Having predicted all the codebook channels from the [EncodecModel], Bark uses it to decode the output audio array.

It should be noted that each of the first three modules can support conditional speaker embeddings to condition the output sound according to a specific predefined voice.

Tips:

Suno offers a library of voice presets in a number of languages here. These presets are also uploaded in the hub here or here.

```python
from transformers import AutoProcessor, BarkModel

processor = AutoProcessor.from_pretrained("suno/bark")
model = BarkModel.from_pretrained("suno/bark")

voice_preset = "v2/en_speaker_6"

inputs = processor("Hello, my dog is cute", voice_preset=voice_preset)

audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()
```

Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects.

```python
# Multilingual speech - simplified Chinese
inputs = processor("惊人的！我会说中文")

# Multilingual speech - French - let's use a voice_preset as well
inputs = processor("Incroyable! Je peux générer du son.", voice_preset="fr_speaker_5")

# Bark can also generate music. You can help it out by adding music notes around your lyrics.
inputs = processor("♪ Hello, my dog is cute ♪")

audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()
```

The model can also produce nonverbal communications like laughing, sighing and crying.

```python
# Adding non-speech cues to the input text
inputs = processor("Hello uh [clears throat], my dog is cute [laughter]")

audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()
```

To save the audio, simply take the sample rate from the model config and some scipy utility:

```python
from scipy.io.wavfile import write as write_wav

# save audio to disk, but first take the sample rate from the model config
sample_rate = model.generation_config.sample_rate
write_wav("bark_generation.wav", sample_rate, audio_array)
```

This model was contributed by Yoach Lacombe (ylacombe) and Sanchit Gandhi (sanchit-gandhi). The original code can be found here.

## BarkConfig

[[autodoc]] BarkConfig
    - all

## BarkProcessor

[[autodoc]] BarkProcessor
    - all
    - __call__

## BarkModel

[[autodoc]] BarkModel
    - generate

## BarkSemanticModel

[[autodoc]] BarkSemanticModel
    - forward

## BarkCoarseModel

[[autodoc]] BarkCoarseModel
    - forward

## BarkFineModel

[[autodoc]] BarkFineModel
    - forward

## BarkCausalModel

[[autodoc]] BarkCausalModel
    - forward

## BarkCoarseConfig

[[autodoc]] BarkCoarseConfig
    - all

## BarkFineConfig

[[autodoc]] BarkFineConfig
    - all

## BarkSemanticConfig

[[autodoc]] BarkSemanticConfig
    - all
# SAM

## Overview

SAM (Segment Anything Model) was proposed in Segment Anything by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick.

The model can be used to predict segmentation masks of any object of interest given an input image.

The abstract from the paper is the following:

*We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision.*

Tips:

- The model predicts binary masks that state the presence or absence of the object of interest given an image.
- The model predicts much better results if input 2D points and/or input bounding boxes are provided.
- You can prompt multiple points for the same image, and predict a single mask.
- Fine-tuning the model is not supported yet.
- According to the paper, textual input should also be supported. However, at the time of writing this does not seem to be supported, according to the official repository.

This model was contributed by ybelkada and ArthurZ. The original code can be found here.

Below is an example on how to run mask generation given an image and a 2D point:

```python
import torch
from PIL import Image
import requests
from transformers import SamModel, SamProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamModel.from_pretrained("facebook/sam-vit-huge").to(device)
processor = SamProcessor.from_pretrained("facebook/sam-vit-huge")

img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]]  # 2D location of a window in the image

inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device)
outputs = model(**inputs)

masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
```

Resources:

- Demo notebook for using the model.
- Demo notebook for using the automatic mask generation pipeline.
- Demo notebook for inference with MedSAM, a fine-tuned version of SAM on the medical domain.
- Demo notebook for fine-tuning the model on custom data.

## SamConfig

[[autodoc]] SamConfig

## SamVisionConfig

[[autodoc]] SamVisionConfig

## SamMaskDecoderConfig

[[autodoc]] SamMaskDecoderConfig

## SamPromptEncoderConfig

[[autodoc]] SamPromptEncoderConfig

## SamProcessor

[[autodoc]] SamProcessor

## SamImageProcessor

[[autodoc]] SamImageProcessor

## SamModel

[[autodoc]] SamModel
    - forward

## TFSamModel

[[autodoc]] TFSamModel
    - call
# EfficientFormer

## Overview

The EfficientFormer model was proposed in EfficientFormer: Vision Transformers at MobileNet Speed by Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. EfficientFormer proposes a dimension-consistent pure transformer that can be run on mobile devices for dense prediction tasks like image classification, object detection and semantic segmentation.

The abstract from the paper is the following:

*Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks. However, due to the massive number of parameters and model design, e.g., attention mechanism, ViT-based models are generally times slower than lightweight convolutional networks. Therefore, the deployment of ViT for real-time applications is particularly challenging, especially on resource-constrained hardware such as mobile devices. Recent efforts try to reduce the computation complexity of ViT through network architecture search or hybrid design with MobileNet block, yet the inference speed is still unsatisfactory. This leads to an important question: can transformers run as fast as MobileNet while obtaining high performance? To answer this, we first revisit the network architecture and operators used in ViT-based models and identify inefficient designs. Then we introduce a dimension-consistent pure transformer (without MobileNet blocks) as a design paradigm. Finally, we perform latency-driven slimming to get a series of final models dubbed EfficientFormer. Extensive experiments show the superiority of EfficientFormer in performance and speed on mobile devices. Our fastest model, EfficientFormer-L1, achieves 79.2% top-1 accuracy on ImageNet-1K with only 1.6 ms inference latency on iPhone 12 (compiled with CoreML), which runs as fast as MobileNetV2×1.4 (1.6 ms, 74.7% top-1), and our largest model, EfficientFormer-L7, obtains 83.3% accuracy with only 7.0 ms latency. Our work proves that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance.*

This model was contributed by novice03 and Bearnardd. The original code can be found here. The TensorFlow version of this model was added by D-Roberts.

## Documentation resources

- Image classification task guide

## EfficientFormerConfig

[[autodoc]] EfficientFormerConfig

## EfficientFormerImageProcessor

[[autodoc]] EfficientFormerImageProcessor
    - preprocess

## EfficientFormerModel

[[autodoc]] EfficientFormerModel
    - forward

## EfficientFormerForImageClassification

[[autodoc]] EfficientFormerForImageClassification
    - forward

## EfficientFormerForImageClassificationWithTeacher

[[autodoc]] EfficientFormerForImageClassificationWithTeacher
    - forward

## TFEfficientFormerModel

[[autodoc]] TFEfficientFormerModel
    - call

## TFEfficientFormerForImageClassification

[[autodoc]] TFEfficientFormerForImageClassification
    - call

## TFEfficientFormerForImageClassificationWithTeacher

[[autodoc]] TFEfficientFormerForImageClassificationWithTeacher
    - call
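## Usage example

The following is a minimal image-classification sketch (not part of the original page); the checkpoint name "snap-research/efficientformer-l1-300" and the COCO image URL are assumptions used for illustration:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, EfficientFormerForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("snap-research/efficientformer-l1-300")
model = EfficientFormerForImageClassification.from_pretrained("snap-research/efficientformer-l1-300")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# map the highest-scoring logit to its ImageNet label
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```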
# RWKV

## Overview

The RWKV model was proposed in this repo. It suggests a tweak in the traditional Transformer attention to make it linear. This way, the model can be used as a recurrent network: passing inputs for timestamp 0 and timestamp 1 together is the same as passing inputs at timestamp 0, then inputs at timestamp 1 along with the state of timestamp 0 (see example below).

This can be more efficient than a regular Transformer and can deal with sentences of any length (even if the model uses a fixed context length for training).

This model was contributed by sgugger. The original code can be found here.

Example of use as an RNN:

```python
import torch
from transformers import AutoTokenizer, RwkvConfig, RwkvModel

model = RwkvModel.from_pretrained("sgugger/rwkv-430M-pile")
tokenizer = AutoTokenizer.from_pretrained("sgugger/rwkv-430M-pile")

inputs = tokenizer("This is an example.", return_tensors="pt")
# Feed everything to the model
outputs = model(inputs["input_ids"])
output_whole = outputs.last_hidden_state

outputs = model(inputs["input_ids"][:, :2])
output_one = outputs.last_hidden_state

# Using the state computed on the first inputs, we will get the same output
outputs = model(inputs["input_ids"][:, 2:], state=outputs.state)
output_two = outputs.last_hidden_state

torch.allclose(torch.cat([output_one, output_two], dim=1), output_whole, atol=1e-5)
```

## RwkvConfig

[[autodoc]] RwkvConfig

## RwkvModel

[[autodoc]] RwkvModel
    - forward

## RwkvForCausalLM

[[autodoc]] RwkvForCausalLM
    - forward

## Rwkv attention and the recurrent formulas

In a traditional auto-regressive Transformer, attention is written as

$$O = \hbox{softmax}(QK^{T} / \sqrt{d}) V$$

where \(Q\), \(K\) and \(V\) are matrices of shape seq_len x hidden_size named query, key and value (they are actually bigger matrices with a batch dimension and an attention head dimension, but we're only interested in the last two, which is where the matrix product is taken, so for the sake of simplicity we only consider those two). The product \(QK^{T}\) then has shape seq_len x seq_len and we can take the matrix product with \(V\) to get the output \(O\) of the same shape as the others.

Replacing the softmax by its value gives:

$$O_{i} = \frac{\sum_{j=1}^{i} e^{Q_{i} K_{j}^{T} / \sqrt{d}} V_{j}}{\sum_{j=1}^{i} e^{Q_{i} K_{j}^{T} / \sqrt{d}}}$$

Note that the entries in \(QK^{T}\) corresponding to \(j > i\) are masked (the sum stops at j) because the attention is not allowed to look at future tokens (only past ones).

In comparison, the RWKV attention is given by

$$O_{i} = \sigma(R_{i}) \frac{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}} V_{j}}{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}}}$$

where \(R\) is a new matrix called receptance by the author, \(K\) and \(V\) are still the key and value (\(\sigma\) here is the sigmoid function). \(W\) is a new vector that represents the position of the token and is given by

$$W_{0} = u \hbox{ and } W_{k} = (k-1)w \hbox{ for } k \geq 1$$

with \(u\) and \(w\) learnable parameters called in the code time_first and time_decay respectively. The numerator and denominator can both be expressed recursively.
Naming them \(N_{i}\) and \(D_{i}\), we have:

$$N_{i} = e^{u + K_{i}} V_{i} + \hat{N}_{i} \hbox{ where } \hat{N}_{i} = e^{K_{i-1}} V_{i-1} + e^{w + K_{i-2}} V_{i-2} + \cdots + e^{(i-2)w + K_{1}} V_{1}$$

so \(\hat{N}_{i}\) (called numerator_state in the code) satisfies

$$\hat{N}_{0} = 0 \hbox{ and } \hat{N}_{j+1} = e^{K_{j}} V_{j} + e^{w} \hat{N}_{j}$$

and

$$D_{i} = e^{u + K_{i}} + \hat{D}_{i} \hbox{ where } \hat{D}_{i} = e^{K_{i-1}} + e^{w + K_{i-2}} + \cdots + e^{(i-2)w + K_{1}}$$

so \(\hat{D}_{i}\) (called denominator_state in the code) satisfies

$$\hat{D}_{0} = 0 \hbox{ and } \hat{D}_{j+1} = e^{K_{j}} + e^{w} \hat{D}_{j}$$

The actual recurrent formulas used are a tiny bit more complex, as for numerical stability we don't want to compute exponentials of big numbers. Usually the softmax is not computed as is, but the exponential of the maximum term is divided out of the numerator and denominator:

$$\frac{e^{x_{i}}}{\sum_{j=1}^{n} e^{x_{j}}} = \frac{e^{x_{i} - M}}{\sum_{j=1}^{n} e^{x_{j} - M}}$$

with \(M\) the maximum of all \(x_{j}\). So here, on top of saving the numerator state (\(\hat{N}\)) and the denominator state (\(\hat{D}\)), we also keep track of the maximum of all terms encountered in the exponentials. So we actually use

$$\tilde{N}_{i} = e^{-M_{i}} \hat{N}_{i} \hbox{ and } \tilde{D}_{i} = e^{-M_{i}} \hat{D}_{i}$$

defined by the following recurrent formulas:

$$\tilde{N}_{0} = 0 \hbox{ and } \tilde{N}_{j+1} = e^{K_{j} - q} V_{j} + e^{w + M_{j} - q} \tilde{N}_{j} \hbox{ where } q = \max(K_{j}, w + M_{j})$$

and

$$\tilde{D}_{0} = 0 \hbox{ and } \tilde{D}_{j+1} = e^{K_{j} - q} + e^{w + M_{j} - q} \tilde{D}_{j} \hbox{ where } q = \max(K_{j}, w + M_{j})$$

and \(M_{j+1} = q\). With those, we can then compute

$$N_{i} = e^{u + K_{i} - q} V_{i} + e^{M_{i} - q} \tilde{N}_{i} \hbox{ where } q = \max(u + K_{i}, M_{i})$$

and

$$D_{i} = e^{u + K_{i} - q} + e^{M_{i} - q} \tilde{D}_{i} \hbox{ where } q = \max(u + K_{i}, M_{i})$$

which finally gives us

$$O_{i} = \sigma(R_{i}) \frac{N_{i}}{D_{i}}$$
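The recurrence above maps directly to a short loop over tokens. Below is an illustrative NumPy sketch of the numerically stable formulas (it is not the kernel used in Transformers, and the random inputs are purely for demonstration); it keeps the scaled numerator state, the scaled denominator state and the running maximum of the exponents:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rwkv_attention(r, k, v, w, u):
    """Channel-wise RWKV attention, one token at a time.

    r, k, v: arrays of shape (seq_len, hidden)
    w (time_decay), u (time_first): arrays of shape (hidden,)
    """
    seq_len, hidden = k.shape
    out = np.zeros_like(v)
    num = np.zeros(hidden)            # scaled numerator state (N tilde)
    den = np.zeros(hidden)            # scaled denominator state (D tilde)
    max_exp = np.full(hidden, -1e38)  # running maximum M of the exponents

    for t in range(seq_len):
        # output at step t: current token weighted by u, plus the state
        q = np.maximum(u + k[t], max_exp)
        numerator = np.exp(u + k[t] - q) * v[t] + np.exp(max_exp - q) * num
        denominator = np.exp(u + k[t] - q) + np.exp(max_exp - q) * den
        out[t] = sigmoid(r[t]) * numerator / denominator

        # update the state for the next token: past contributions decay by w
        q = np.maximum(k[t], w + max_exp)
        num = np.exp(k[t] - q) * v[t] + np.exp(w + max_exp - q) * num
        den = np.exp(k[t] - q) + np.exp(w + max_exp - q) * den
        max_exp = q

    return out

# toy usage with random values
rng = np.random.default_rng(0)
r, k, v = rng.normal(size=(3, 7, 4))
w = -np.exp(rng.normal(size=4))  # in RWKV the decay is negative so past tokens fade
u = rng.normal(size=4)
print(rwkv_attention(r, k, v, w, u).shape)  # (7, 4)
```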
# ALBERT

## Overview

The ALBERT model was proposed in ALBERT: A Lite BERT for Self-supervised Learning of Language Representations by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. It presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT:

- Splitting the embedding matrix into two smaller matrices.
- Using repeating layers split among groups.

The abstract from the paper is the following:

*Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations, longer training times, and unexpected model degradation. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large.*

Tips:

- ALBERT is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than the left.
- ALBERT uses repeating layers which results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
- Embedding size E is different from hidden size H. This is justified because the embeddings are context independent (one embedding vector represents one token), whereas hidden states are context dependent (one hidden state represents a sequence of tokens), so it's more logical to have H >> E. Also, the embedding matrix is large since it's V x E (V being the vocab size). If E < H, it has fewer parameters.
- Layers are split in groups that share parameters (to save memory).
- Next sentence prediction is replaced by a sentence ordering prediction: in the inputs, we have two sentences A and B (that are consecutive) and we either feed A followed by B or B followed by A. The model must predict if they have been swapped or not.

This model was contributed by lysandre. The jax version of this model was contributed by kamalkraj. The original code can be found here.
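## Usage example

As a minimal sketch (not part of the original page), here is how masked-language-modeling inference looks with the albert-base-v2 checkpoint:

```python
import torch
from transformers import AlbertTokenizer, AlbertForMaskedLM

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForMaskedLM.from_pretrained("albert-base-v2")

inputs = tokenizer(f"The capital of France is {tokenizer.mask_token}.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# find the masked position and pick the most likely token for it
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(-1)
print(tokenizer.decode(predicted_id))
```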
## Documentation resources

- Text classification task guide
- Token classification task guide
- Question answering task guide
- Masked language modeling task guide
- Multiple choice task guide

## AlbertConfig

[[autodoc]] AlbertConfig

## AlbertTokenizer

[[autodoc]] AlbertTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## AlbertTokenizerFast

[[autodoc]] AlbertTokenizerFast

## Albert specific outputs

[[autodoc]] models.albert.modeling_albert.AlbertForPreTrainingOutput

[[autodoc]] models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput

## AlbertModel

[[autodoc]] AlbertModel
    - forward

## AlbertForPreTraining

[[autodoc]] AlbertForPreTraining
    - forward

## AlbertForMaskedLM

[[autodoc]] AlbertForMaskedLM
    - forward

## AlbertForSequenceClassification

[[autodoc]] AlbertForSequenceClassification
    - forward

## AlbertForMultipleChoice

[[autodoc]] AlbertForMultipleChoice

## AlbertForTokenClassification

[[autodoc]] AlbertForTokenClassification
    - forward

## AlbertForQuestionAnswering

[[autodoc]] AlbertForQuestionAnswering
    - forward

## TFAlbertModel

[[autodoc]] TFAlbertModel
    - call

## TFAlbertForPreTraining

[[autodoc]] TFAlbertForPreTraining
    - call

## TFAlbertForMaskedLM

[[autodoc]] TFAlbertForMaskedLM
    - call

## TFAlbertForSequenceClassification

[[autodoc]] TFAlbertForSequenceClassification
    - call

## TFAlbertForMultipleChoice

[[autodoc]] TFAlbertForMultipleChoice
    - call

## TFAlbertForTokenClassification

[[autodoc]] TFAlbertForTokenClassification
    - call

## TFAlbertForQuestionAnswering

[[autodoc]] TFAlbertForQuestionAnswering
    - call

## FlaxAlbertModel

[[autodoc]] FlaxAlbertModel
    - __call__

## FlaxAlbertForPreTraining

[[autodoc]] FlaxAlbertForPreTraining
    - __call__

## FlaxAlbertForMaskedLM

[[autodoc]] FlaxAlbertForMaskedLM
    - __call__

## FlaxAlbertForSequenceClassification

[[autodoc]] FlaxAlbertForSequenceClassification
    - __call__

## FlaxAlbertForMultipleChoice

[[autodoc]] FlaxAlbertForMultipleChoice
    - __call__

## FlaxAlbertForTokenClassification

[[autodoc]] FlaxAlbertForTokenClassification
    - __call__

## FlaxAlbertForQuestionAnswering

[[autodoc]] FlaxAlbertForQuestionAnswering
    - __call__
# EnCodec

## Overview

The EnCodec neural codec model was proposed in High Fidelity Neural Audio Compression by Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi.

The abstract from the paper is the following:

*We introduce a state-of-the-art real-time, high-fidelity, audio codec leveraging neural networks. It consists in a streaming encoder-decoder architecture with quantized latent space trained in an end-to-end fashion. We simplify and speed-up the training by using a single multiscale spectrogram adversary that efficiently reduces artifacts and produce high-quality samples. We introduce a novel loss balancer mechanism to stabilize training: the weight of a loss now defines the fraction of the overall gradient it should represent, thus decoupling the choice of this hyper-parameter from the typical scale of the loss. Finally, we study how lightweight Transformer models can be used to further compress the obtained representation by up to 40%, while staying faster than real time. We provide a detailed description of the key design choices of the proposed model including: training objective, architectural changes and a study of various perceptual loss functions. We present an extensive subjective evaluation (MUSHRA tests) together with an ablation study for a range of bandwidths and audio domains, including speech, noisy-reverberant speech, and music. Our approach is superior to the baselines methods across all evaluated settings, considering both 24 kHz monophonic and 48 kHz stereophonic audio.*

This model was contributed by Matthijs, Patrick Von Platen and Arthur Zucker. The original code can be found here.

Here is a quick example of how to encode and decode an audio using this model:

```python
from datasets import load_dataset, Audio
from transformers import EncodecModel, AutoProcessor

librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")

model = EncodecModel.from_pretrained("facebook/encodec_24khz")
processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")

librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
audio_sample = librispeech_dummy[-1]["audio"]["array"]

inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt")

encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"])
audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0]
# or the equivalent with a forward pass
audio_values = model(inputs["input_values"], inputs["padding_mask"]).audio_values
```

## EncodecConfig

[[autodoc]] EncodecConfig

## EncodecFeatureExtractor

[[autodoc]] EncodecFeatureExtractor
    - __call__

## EncodecModel

[[autodoc]] EncodecModel
    - decode
    - encode
    - forward
# UniSpeech-SAT

## Overview

The UniSpeech-SAT model was proposed in UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.

The abstract from the paper is the following:

*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks.*

Tips:

- UniSpeechSat is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use [Wav2Vec2Processor] for the feature extraction.
- UniSpeechSat model can be fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [Wav2Vec2CTCTokenizer].
- UniSpeechSat performs especially well on speaker verification, speaker identification, and speaker diarization tasks.

This model was contributed by patrickvonplaten. The Authors' code can be found here.

## Documentation resources

- Audio classification task guide
- Automatic speech recognition task guide

## UniSpeechSatConfig

[[autodoc]] UniSpeechSatConfig

## UniSpeechSat specific outputs

[[autodoc]] models.unispeech_sat.modeling_unispeech_sat.UniSpeechSatForPreTrainingOutput

## UniSpeechSatModel

[[autodoc]] UniSpeechSatModel
    - forward

## UniSpeechSatForCTC

[[autodoc]] UniSpeechSatForCTC
    - forward

## UniSpeechSatForSequenceClassification

[[autodoc]] UniSpeechSatForSequenceClassification
    - forward

## UniSpeechSatForAudioFrameClassification

[[autodoc]] UniSpeechSatForAudioFrameClassification
    - forward

## UniSpeechSatForXVector

[[autodoc]] UniSpeechSatForXVector
    - forward

## UniSpeechSatForPreTraining

[[autodoc]] UniSpeechSatForPreTraining
    - forward
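## Usage example

A minimal feature-extraction sketch (not part of the original page); the checkpoint name "microsoft/unispeech-sat-base" is assumed, and a dummy LibriSpeech split stands in for real audio:

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatModel

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = dataset[0]["audio"]["array"]

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("microsoft/unispeech-sat-base")
model = UniSpeechSatModel.from_pretrained("microsoft/unispeech-sat-base")

inputs = feature_extractor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    # (batch, frames, hidden_size) frame-level speech representations
    hidden_states = model(**inputs).last_hidden_state
```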
# UniSpeech

## Overview

The UniSpeech model was proposed in UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.

The abstract from the paper is the following:

*In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive self-supervised learning are conducted in a multi-task learning manner. The resultant representations can capture information more correlated with phonetic structures and improve the generalization across languages and domains. We evaluate the effectiveness of UniSpeech for cross-lingual representation learning on public CommonVoice corpus. The results show that UniSpeech outperforms self-supervised pretraining and supervised transfer learning for speech recognition by a maximum of 13.4% and 17.8% relative phone error rate reductions respectively (averaged over all testing languages). The transferability of UniSpeech is also demonstrated on a domain-shift speech recognition task, i.e., a relative word error rate reduction of 6% against the previous approach.*

Tips:

- UniSpeech is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use [Wav2Vec2Processor] for the feature extraction.
- UniSpeech model can be fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [Wav2Vec2CTCTokenizer].

This model was contributed by patrickvonplaten. The Authors' code can be found here.

## Documentation resources

- Audio classification task guide
- Automatic speech recognition task guide

## UniSpeechConfig

[[autodoc]] UniSpeechConfig

## UniSpeech specific outputs

[[autodoc]] models.unispeech.modeling_unispeech.UniSpeechForPreTrainingOutput

## UniSpeechModel

[[autodoc]] UniSpeechModel
    - forward

## UniSpeechForCTC

[[autodoc]] UniSpeechForCTC
    - forward

## UniSpeechForSequenceClassification

[[autodoc]] UniSpeechForSequenceClassification
    - forward

## UniSpeechForPreTraining

[[autodoc]] UniSpeechForPreTraining
    - forward
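## Usage example

The tips above describe the CTC decoding flow; the following sketch shows its shape. The checkpoint name is a hypothetical placeholder (substitute any UniSpeech model fine-tuned with CTC), and the example is not part of the original page:

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, UniSpeechForCTC

# hypothetical placeholder: replace with a real CTC fine-tuned UniSpeech checkpoint
checkpoint = "path/to/unispeech-ctc-finetuned-checkpoint"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = UniSpeechForCTC.from_pretrained(checkpoint)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding: take the best token per frame, then collapse with the tokenizer
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```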
# BioGPT

## Overview

The BioGPT model was proposed in BioGPT: generative pre-trained transformer for biomedical text generation and mining by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone, and is pre-trained on 15M PubMed abstracts from scratch.

The abstract from the paper is the following:

*Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.*

Tips:

- BioGPT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left.
- BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows BioGPT to generate syntactically coherent text, as can be observed in the run_generation.py example script.
- The model can take the past_key_values (for PyTorch) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see the past_key_values argument of the BioGptForCausalLM.forward() method for more information on its usage.

This model was contributed by kamalkraj. The original code can be found here.

## Documentation resources

- Causal language modeling task guide

## BioGptConfig

[[autodoc]] BioGptConfig

## BioGptTokenizer

[[autodoc]] BioGptTokenizer
    - save_vocabulary

## BioGptModel

[[autodoc]] BioGptModel
    - forward

## BioGptForCausalLM

[[autodoc]] BioGptForCausalLM
    - forward

## BioGptForTokenClassification

[[autodoc]] BioGptForTokenClassification
    - forward

## BioGptForSequenceClassification

[[autodoc]] BioGptForSequenceClassification
    - forward
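## Usage example

A minimal greedy-generation sketch (not part of the original page), using the microsoft/biogpt checkpoint; the prompt is only an example:

```python
import torch
from transformers import BioGptTokenizer, BioGptForCausalLM

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

inputs = tokenizer("COVID-19 is", return_tensors="pt")
with torch.no_grad():
    # greedy decoding; past_key_values caching is handled internally by generate()
    outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```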
# CANINE

## Overview

The CANINE model was proposed in CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. It's among the first papers that trains a Transformer without using an explicit tokenization step (such as Byte Pair Encoding (BPE), WordPiece or SentencePiece). Instead, the model is trained directly at a Unicode character-level. Training at a character-level inevitably comes with a longer sequence length, which CANINE solves with an efficient downsampling strategy, before applying a deep Transformer encoder.

The abstract from the paper is the following:

*Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE, a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias. To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input sequence length, with a deep transformer stack, which encodes context. CANINE outperforms a comparable mBERT model by 2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28% fewer model parameters.*

Tips:

- CANINE uses no less than 3 Transformer encoders internally: 2 "shallow" encoders (which only consist of a single layer) and 1 "deep" encoder (which is a regular BERT encoder). First, a "shallow" encoder is used to contextualize the character embeddings, using local attention. Next, after downsampling, a "deep" encoder is applied. Finally, after upsampling, a "shallow" encoder is used to create the final character embeddings. Details regarding up- and downsampling can be found in the paper.
- CANINE uses a max sequence length of 2048 characters by default. One can use [CanineTokenizer] to prepare text for the model.
- Classification can be done by placing a linear layer on top of the final hidden state of the special [CLS] token (which has a predefined Unicode code point). For token classification tasks however, the downsampled sequence of tokens needs to be upsampled again to match the length of the original character sequence (which is 2048). The details for this can be found in the paper.

Models:

- google/canine-c: Pre-trained with autoregressive character loss, 12-layer, 768-hidden, 12-heads, 121M parameters (size ~500 MB).
- google/canine-s: Pre-trained with subword loss, 12-layer, 768-hidden, 12-heads, 121M parameters (size ~500 MB).

This model was contributed by nielsr. The original code can be found here.
## Example

CANINE works on raw characters, so it can be used without a tokenizer:

```python
from transformers import CanineModel
import torch

model = CanineModel.from_pretrained("google/canine-c")  # model pre-trained with autoregressive character loss

text = "hello world"
# use Python's built-in ord() function to turn each character into its unicode code point id
input_ids = torch.tensor([[ord(char) for char in text]])

outputs = model(input_ids)  # forward pass
pooled_output = outputs.pooler_output
sequence_output = outputs.last_hidden_state
```

For batched inference and training, it is however recommended to make use of the tokenizer (to pad/truncate all sequences to the same length):

```python
from transformers import CanineTokenizer, CanineModel

model = CanineModel.from_pretrained("google/canine-c")
tokenizer = CanineTokenizer.from_pretrained("google/canine-c")

inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."]
encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt")

outputs = model(**encoding)  # forward pass
pooled_output = outputs.pooler_output
sequence_output = outputs.last_hidden_state
```

## Documentation resources

- Text classification task guide
- Token classification task guide
- Question answering task guide
- Multiple choice task guide

## CANINE specific outputs

[[autodoc]] models.canine.modeling_canine.CanineModelOutputWithPooling

## CanineConfig

[[autodoc]] CanineConfig

## CanineTokenizer

[[autodoc]] CanineTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences

## CanineModel

[[autodoc]] CanineModel
    - forward

## CanineForSequenceClassification

[[autodoc]] CanineForSequenceClassification
    - forward

## CanineForMultipleChoice

[[autodoc]] CanineForMultipleChoice
    - forward

## CanineForTokenClassification

[[autodoc]] CanineForTokenClassification
    - forward

## CanineForQuestionAnswering

[[autodoc]] CanineForQuestionAnswering
    - forward
# Nyströmformer

## Overview

The Nyströmformer model was proposed in Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh.

The abstract from the paper is the following:

*Transformers have emerged as a powerful tool for a broad range of natural language processing tasks. A key component that drives the impressive performance of Transformers is the self-attention mechanism that encodes the influence or dependence of other tokens on each specific token. While beneficial, the quadratic complexity of self-attention on the input sequence length has limited its application to longer sequences -- a topic being actively studied in the community. To address this limitation, we propose Nyströmformer -- a model that exhibits favorable scalability as a function of sequence length. Our idea is based on adapting the Nyström method to approximate standard self-attention with O(n) complexity. The scalability of Nyströmformer enables application to longer sequences with thousands of tokens. We perform evaluations on multiple downstream tasks on the GLUE benchmark and IMDB reviews with standard sequence length, and find that our Nyströmformer performs comparably, or in a few cases, even slightly better, than standard self-attention. On longer sequence tasks in the Long Range Arena (LRA) benchmark, Nyströmformer performs favorably relative to other efficient self-attention methods. Our code is available at this https URL.*

This model was contributed by novice03. The original code can be found here.

## Documentation resources

- Text classification task guide
- Token classification task guide
- Question answering task guide
- Masked language modeling task guide
- Multiple choice task guide

## NystromformerConfig

[[autodoc]] NystromformerConfig

## NystromformerModel

[[autodoc]] NystromformerModel
    - forward

## NystromformerForMaskedLM

[[autodoc]] NystromformerForMaskedLM
    - forward

## NystromformerForSequenceClassification

[[autodoc]] NystromformerForSequenceClassification
    - forward

## NystromformerForMultipleChoice

[[autodoc]] NystromformerForMultipleChoice
    - forward

## NystromformerForTokenClassification

[[autodoc]] NystromformerForTokenClassification
    - forward

## NystromformerForQuestionAnswering

[[autodoc]] NystromformerForQuestionAnswering
    - forward
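## Usage example

A minimal masked-language-modeling sketch (not part of the original page); the checkpoint name "uw-madison/nystromformer-512" is assumed to be available on the Hub:

```python
import torch
from transformers import AutoTokenizer, NystromformerForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("uw-madison/nystromformer-512")
model = NystromformerForMaskedLM.from_pretrained("uw-madison/nystromformer-512")

inputs = tokenizer(f"Paris is the {tokenizer.mask_token} of France.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# decode the most likely token at the masked position
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
print(tokenizer.decode(logits[0, mask_index].argmax(-1)))
```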
# Nezha

## Overview

The Nezha model was proposed in NEZHA: Neural Contextualized Representation for Chinese Language Understanding by Junqiu Wei et al.

The abstract from the paper is the following:

*The pre-trained language models have achieved great successes in various natural language understanding (NLU) tasks due to its capacity to capture the deep contextualized information in text by pre-training on large-scale corpora. In this technical report, we present our practice of pre-training language models named NEZHA (NEural contextualiZed representation for CHinese lAnguage understanding) on Chinese corpora and finetuning for the Chinese NLU tasks. The current version of NEZHA is based on BERT with a collection of proven improvements, which include Functional Relative Positional Encoding as an effective positional encoding scheme, Whole Word Masking strategy, Mixed Precision Training and the LAMB Optimizer in training the models. The experimental results show that NEZHA achieves the state-of-the-art performances when finetuned on several representative Chinese tasks, including named entity recognition (People's Daily NER), sentence matching (LCQMC), Chinese sentiment classification (ChnSenti) and natural language inference (XNLI).*

This model was contributed by sijunhe. The original code can be found here.

## Documentation resources

- Text classification task guide
- Token classification task guide
- Question answering task guide
- Masked language modeling task guide
- Multiple choice task guide

## NezhaConfig

[[autodoc]] NezhaConfig

## NezhaModel

[[autodoc]] NezhaModel
    - forward

## NezhaForPreTraining

[[autodoc]] NezhaForPreTraining
    - forward

## NezhaForMaskedLM

[[autodoc]] NezhaForMaskedLM
    - forward

## NezhaForNextSentencePrediction

[[autodoc]] NezhaForNextSentencePrediction
    - forward

## NezhaForSequenceClassification

[[autodoc]] NezhaForSequenceClassification
    - forward

## NezhaForMultipleChoice

[[autodoc]] NezhaForMultipleChoice
    - forward

## NezhaForTokenClassification

[[autodoc]] NezhaForTokenClassification
    - forward

## NezhaForQuestionAnswering

[[autodoc]] NezhaForQuestionAnswering
    - forward
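## Usage example

A minimal Chinese masked-language-modeling sketch (not part of the original page); the checkpoint name "sijunhe/nezha-cn-base" is an assumption, and any Nezha checkpoint with a masked-LM head would work the same way:

```python
import torch
from transformers import AutoTokenizer, NezhaForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("sijunhe/nezha-cn-base")
model = NezhaForMaskedLM.from_pretrained("sijunhe/nezha-cn-base")

text = f"北京是中国的{tokenizer.mask_token}都。"  # "Beijing is the capital of China."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# decode the most likely character at the masked position
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
print(tokenizer.decode(logits[0, mask_index].argmax(-1)))
```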
# Video Vision Transformer (ViViT)

## Overview

The Vivit model was proposed in ViViT: A Video Vision Transformer by Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid. The paper proposes one of the first successful pure-transformer based set of models for video understanding.

The abstract from the paper is the following:

*We present pure-transformer based models for video classification, drawing upon the recent success of such models in image classification. Our model extracts spatio-temporal tokens from the input video, which are then encoded by a series of transformer layers. In order to handle the long sequences of tokens encountered in video, we propose several, efficient variants of our model which factorise the spatial- and temporal-dimensions of the input. Although transformer-based models are known to only be effective when large training datasets are available, we show how we can effectively regularise the model during training and leverage pretrained image models to be able to train on comparatively small datasets. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple video classification benchmarks including Kinetics 400 and 600, Epic Kitchens, Something-Something v2 and Moments in Time, outperforming prior methods based on deep 3D convolutional networks.*

This model was contributed by jegormeister. The original code (written in JAX) can be found here.

## VivitConfig

[[autodoc]] VivitConfig

## VivitImageProcessor

[[autodoc]] VivitImageProcessor
    - preprocess

## VivitModel

[[autodoc]] VivitModel
    - forward

## VivitForVideoClassification

[[autodoc]] transformers.VivitForVideoClassification
    - forward
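## Usage example

A minimal video-classification sketch (not part of the original page); the checkpoint name "google/vivit-b-16x2-kinetics400" is an assumption, and random frames stand in for a real video clip:

```python
import numpy as np
import torch
from transformers import VivitImageProcessor, VivitForVideoClassification

processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")
model = VivitForVideoClassification.from_pretrained("google/vivit-b-16x2-kinetics400")

# 32 random RGB frames stand in for a real sampled video clip
video = list(np.random.randint(0, 256, (32, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# map the highest-scoring logit to its Kinetics-400 label
print(model.config.id2label[logits.argmax(-1).item()])
```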
Swin2SR Overview The Swin2SR model was proposed in Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte. Swin2SR improves the SwinIR model by incorporating Swin Transformer v2 layers, which mitigates issues such as training instability, resolution gaps between pre-training and fine-tuning, and hunger on data. The abstract from the paper is the following: Compression plays an important role on the efficient transmission and storage of images and videos through band-limited systems such as streaming services, virtual reality or videogames. However, compression unavoidably leads to artifacts and the loss of the original information, which may severely degrade the visual quality. For these reasons, quality enhancement of compressed images has become a popular research topic. While most state-of-the-art image restoration methods are based on convolutional neural networks, other transformers-based methods such as SwinIR, show impressive performance on these tasks. In this paper, we explore the novel Swin Transformer V2, to improve SwinIR for image super-resolution, and in particular, the compressed input scenario. Using this method we can tackle the major issues in training transformer vision models, such as training instability, resolution gaps between pre-training and fine-tuning, and hunger on data. We conduct experiments on three representative tasks: JPEG compression artifacts removal, image super-resolution (classical and lightweight), and compressed image super-resolution. Experimental results demonstrate that our method, Swin2SR, can improve the training convergence and performance of SwinIR, and is a top-5 solution at the "AIM 2022 Challenge on Super-Resolution of Compressed Image and Video". Swin2SR architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources Demo notebooks for Swin2SR can be found here. A demo Space for image super-resolution with Swin2SR can be found here. Swin2SRImageProcessor [[autodoc]] Swin2SRImageProcessor - preprocess Swin2SRConfig [[autodoc]] Swin2SRConfig Swin2SRModel [[autodoc]] Swin2SRModel - forward Swin2SRForImageSuperResolution [[autodoc]] Swin2SRForImageSuperResolution - forward
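A hedged inference sketch for image super-resolution with the classes above. The checkpoint name caidas/swin2SR-classical-sr-x2-64 and the example image URL are assumptions; the output field reconstruction holds the upscaled image as a float tensor.

```python
import requests
import torch
from PIL import Image
from transformers import Swin2SRImageProcessor, Swin2SRForImageSuperResolution

checkpoint = "caidas/swin2SR-classical-sr-x2-64"  # assumed x2 upscaling checkpoint name
processor = Swin2SRImageProcessor()
model = Swin2SRForImageSuperResolution.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # any RGB image works
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# upscaled image in [0, 1], shape (2 * height, 2 * width, 3)
upscaled = outputs.reconstruction.squeeze().clamp(0, 1).permute(1, 2, 0).numpy()
```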
MobileNet V1 Overview The MobileNet model was proposed in MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. The abstract from the paper is the following: We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization. Tips: The checkpoints are named mobilenet_v1_depth_size, for example mobilenet_v1_1.0_224, where 1.0 is the depth multiplier (sometimes also referred to as "alpha" or the width multiplier) and 224 is the resolution of the input images the model was trained on. Even though the checkpoint is trained on images of specific size, the model will work on images of any size. The smallest supported image size is 32x32. One can use [MobileNetV1ImageProcessor] to prepare images for the model. The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). However, the model predicts 1001 classes: the 1000 classes from ImageNet plus an extra “background” class (index 0). The original TensorFlow checkpoints use different padding rules than PyTorch, requiring the model to determine the padding amount at inference time, since this depends on the input image size. To use native PyTorch padding behavior, create a [MobileNetV1Config] with tf_padding = False. Unsupported features: The [MobileNetV1Model] outputs a globally pooled version of the last hidden state. In the original model it is possible to use a 7x7 average pooling layer with stride 2 instead of global pooling. For larger inputs, this gives a pooled output that is larger than 1x1 pixel. The HuggingFace implementation does not support this. It is currently not possible to specify an output_stride. For smaller output strides, the original model invokes dilated convolution to prevent the spatial resolution from being reduced further. The output stride of the HuggingFace model is always 32. The original TensorFlow checkpoints include quantized models. We do not support these models as they include additional "FakeQuantization" operations to unquantize the weights. It's common to extract the output from the pointwise layers at indices 5, 11, 12, 13 for downstream purposes. Using output_hidden_states=True returns the output from all intermediate layers. There is currently no way to limit this to specific layers. This model was contributed by matthijs. The original code and weights can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileNetV1. [MobileNetV1ForImageClassification] is supported by this example script and notebook. 
See also: Image classification task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. MobileNetV1Config [[autodoc]] MobileNetV1Config MobileNetV1FeatureExtractor [[autodoc]] MobileNetV1FeatureExtractor - preprocess MobileNetV1ImageProcessor [[autodoc]] MobileNetV1ImageProcessor - preprocess MobileNetV1Model [[autodoc]] MobileNetV1Model - forward MobileNetV1ForImageClassification [[autodoc]] MobileNetV1ForImageClassification - forward
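To make the tf_padding tip above concrete, here is a minimal sketch that builds a MobileNetV1 classifier with native PyTorch padding; all other configuration values keep their defaults, and the commented checkpoint name is an assumption rather than a guarantee:

```python
from transformers import MobileNetV1Config, MobileNetV1ForImageClassification

# disable the TensorFlow-style padding emulation described in the tips above
config = MobileNetV1Config(tf_padding=False)
model = MobileNetV1ForImageClassification(config)  # randomly initialized weights

# for inference with pre-trained weights, load a Hub checkpoint instead, e.g. (assumed name):
# model = MobileNetV1ForImageClassification.from_pretrained("google/mobilenet_v1_1.0_224")
```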
BORT This model is in maintenance mode only, so we won't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: pip install -U transformers==4.30.0. Overview The BORT model was proposed in Optimal Subarchitecture Extraction for BERT by Adrian de Wynter and Daniel J. Perry. It is an optimal subset of architectural parameters for BERT, which the authors refer to as "Bort". The abstract from the paper is the following: We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as "Bort", is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of 5.5% the original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which is 1.2% of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large (Liu et al., 2019), and about 33% of that of the world-record, in GPU hours, required to train BERT-large on the same hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the architecture, and some of the non-compressed variants: it obtains performance improvements of between 0.3% and 31%, absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks. Tips: BORT's model architecture is based on BERT, so one can refer to BERT's documentation page for the model's API as well as usage examples. BORT uses the RoBERTa tokenizer instead of the BERT tokenizer, so one can refer to RoBERTa's documentation page for the tokenizer's API as well as usage examples. BORT requires a specific fine-tuning algorithm, called Agora, which is sadly not open-sourced yet. It would be very useful for the community if someone implemented the algorithm to make BORT fine-tuning work. This model was contributed by stefan-it. The original code can be found here.
GPT-Sw3 Overview The GPT-Sw3 model was first proposed in Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren. Since that first paper, the authors have extended their work and trained new models on their new 1.2TB corpus named The Nordic Pile. GPT-Sw3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-Sw3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation. This model was contributed by AI Sweden. The implementation uses the GPT2Model coupled with our GPTSw3Tokenizer. This means that AutoTokenizer and AutoModelForCausalLM map to our tokenizer implementation and the corresponding GPT2 model implementation respectively. Note that sentencepiece is required to use our tokenizer and can be installed with: pip install transformers[sentencepiece] or pip install sentencepiece Example usage: python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("AI-Sweden/gpt-sw3-356m") model = AutoModelForCausalLM.from_pretrained("AI-Sweden/gpt-sw3-356m") input_ids = tokenizer("Träd är fina för att", return_tensors="pt")["input_ids"] generated_token_ids = model.generate(inputs=input_ids, max_new_tokens=10, do_sample=True)[0] print(tokenizer.decode(generated_token_ids)) Träd är fina för att de är färgstarka. Men ibland är det fint Documentation resources Text classification task guide Token classification task guide Causal language modeling task guide GPTSw3Tokenizer [[autodoc]] GPTSw3Tokenizer - save_vocabulary
ViTMAE Overview The ViTMAE model was proposed in Masked Autoencoders Are Scalable Vision Learners by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick. The paper shows that, by pre-training a Vision Transformer (ViT) to reconstruct pixel values for masked patches, one can get results after fine-tuning that outperform supervised pre-training. The abstract from the paper is the following: This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs enables us to train large models efficiently and effectively: we accelerate training (by 3x or more) and improve accuracy. Our scalable approach allows for learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream tasks outperforms supervised pre-training and shows promising scaling behavior. Tips: MAE (masked auto encoding) is a method for self-supervised pre-training of Vision Transformers (ViTs). The pre-training objective is relatively simple: by masking a large portion (75%) of the image patches, the model must reconstruct raw pixel values. One can use [ViTMAEForPreTraining] for this purpose. After pre-training, one "throws away" the decoder used to reconstruct pixels, and one uses the encoder for fine-tuning/linear probing. This means that after fine-tuning, one can directly plug in the weights into a [ViTForImageClassification]. One can use [ViTImageProcessor] to prepare images for the model. See the code examples for more info. Note that the encoder of MAE is only used to encode the visual patches. The encoded patches are then concatenated with mask tokens, which the decoder (which also consists of Transformer blocks) takes as input. Each mask token is a shared, learned vector that indicates the presence of a missing patch to be predicted. Fixed sin/cos position embeddings are added both to the input of the encoder and the decoder. For a visual understanding of how MAEs work you can check out this post. MAE architecture. Taken from the original paper. This model was contributed by nielsr. TensorFlow version of the model was contributed by sayakpaul and ariG23498 (equal contribution). The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViTMAE. [ViTMAEForPreTraining] is supported by this example script, allowing you to pre-train the model from scratch/further pre-train the model on custom data. A notebook that illustrates how to visualize reconstructed pixel values with [ViTMAEForPreTraining] can be found here. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
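A minimal pre-training forward-pass sketch, assuming the facebook/vit-mae-base checkpoint and a sample image URL; the output exposes the reconstruction loss, the per-patch pixel predictions and the random mask described above.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTMAEForPreTraining

checkpoint = "facebook/vit-mae-base"  # assumed checkpoint name
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = ViTMAEForPreTraining.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

loss = outputs.loss      # reconstruction loss on the masked patches
mask = outputs.mask      # (batch_size, num_patches), 1 for masked patches
logits = outputs.logits  # predicted pixel values per patch
```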
ViTMAEConfig [[autodoc]] ViTMAEConfig ViTMAEModel [[autodoc]] ViTMAEModel - forward ViTMAEForPreTraining [[autodoc]] transformers.ViTMAEForPreTraining - forward TFViTMAEModel [[autodoc]] TFViTMAEModel - call TFViTMAEForPreTraining [[autodoc]] transformers.TFViTMAEForPreTraining - call
RemBERT Overview The RemBERT model was proposed in Rethinking Embedding Coupling in Pre-trained Language Models by Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, Sebastian Ruder. The abstract from the paper is the following: We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that allocating additional capacity to the output embedding provides benefits to the model that persist through the fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the number of parameters at the fine-tuning stage. Tips: For fine-tuning, RemBERT can be thought of as a bigger version of mBERT with an ALBERT-like factorization of the embedding layer. The embeddings are not tied in pre-training, in contrast with BERT, which enables smaller input embeddings (preserved during fine-tuning) and bigger output embeddings (discarded at fine-tuning). The tokenizer is also similar to the Albert one rather than the BERT one. Documentation resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide RemBertConfig [[autodoc]] RemBertConfig RemBertTokenizer [[autodoc]] RemBertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary RemBertTokenizerFast [[autodoc]] RemBertTokenizerFast - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary RemBertModel [[autodoc]] RemBertModel - forward RemBertForCausalLM [[autodoc]] RemBertForCausalLM - forward RemBertForMaskedLM [[autodoc]] RemBertForMaskedLM - forward RemBertForSequenceClassification [[autodoc]] RemBertForSequenceClassification - forward RemBertForMultipleChoice [[autodoc]] RemBertForMultipleChoice - forward RemBertForTokenClassification [[autodoc]] RemBertForTokenClassification - forward RemBertForQuestionAnswering [[autodoc]] RemBertForQuestionAnswering - forward TFRemBertModel [[autodoc]] TFRemBertModel - call TFRemBertForMaskedLM [[autodoc]] TFRemBertForMaskedLM - call TFRemBertForCausalLM [[autodoc]] TFRemBertForCausalLM - call TFRemBertForSequenceClassification [[autodoc]] TFRemBertForSequenceClassification - call TFRemBertForMultipleChoice [[autodoc]] TFRemBertForMultipleChoice - call TFRemBertForTokenClassification [[autodoc]] TFRemBertForTokenClassification - call TFRemBertForQuestionAnswering [[autodoc]] TFRemBertForQuestionAnswering - call
Informer Overview The Informer model was proposed in Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. This method introduces a Probabilistic Attention mechanism to select the "active" queries rather than the "lazy" queries and provides a sparse Transformer thus mitigating the quadratic compute and memory requirements of vanilla attention. The abstract from the paper is the following: Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to capture precise long-range dependency coupling between output and input efficiently. Recent studies have shown the potential of Transformer to increase the prediction capacity. However, there are several severe issues with Transformer that prevent it from being directly applicable to LSTF, including quadratic time complexity, high memory usage, and inherent limitation of the encoder-decoder architecture. To address these issues, we design an efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a ProbSparse self-attention mechanism, which achieves O(L logL) in time complexity and memory usage, and has comparable performance on sequences' dependency alignment. (ii) the self-attention distilling highlights dominating attention by halving cascading layer input, and efficiently handles extreme long input sequences. (iii) the generative style decoder, while conceptually simple, predicts the long time-series sequences at one forward operation rather than a step-by-step way, which drastically improves the inference speed of long-sequence predictions. Extensive experiments on four large-scale datasets demonstrate that Informer significantly outperforms existing methods and provides a new solution to the LSTF problem. This model was contributed by elisim and kashif. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. Check out the Informer blog-post in HuggingFace blog: Multivariate Probabilistic Time Series Forecasting with Informer InformerConfig [[autodoc]] InformerConfig InformerModel [[autodoc]] InformerModel - forward InformerForPrediction [[autodoc]] InformerForPrediction - forward
BEiT Overview The BEiT model was proposed in BEiT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei. Inspired by BERT, BEiT is the first paper that makes self-supervised pre-training of Vision Transformers (ViTs) outperform supervised pre-training. Rather than pre-training the model to predict the class of an image (as done in the original ViT paper), BEiT models are pre-trained to predict visual tokens from the codebook of OpenAI's DALL-E model given masked patches. The abstract from the paper is the following: We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation from Image Transformers. Following BERT developed in the natural language processing area, we propose a masked image modeling task to pretrain vision Transformers. Specifically, each image has two views in our pre-training, i.e, image patches (such as 16x16 pixels), and visual tokens (i.e., discrete tokens). We first "tokenize" the original image into visual tokens. Then we randomly mask some image patches and fed them into the backbone Transformer. The pre-training objective is to recover the original visual tokens based on the corrupted image patches. After pre-training BEiT, we directly fine-tune the model parameters on downstream tasks by appending task layers upon the pretrained encoder. Experimental results on image classification and semantic segmentation show that our model achieves competitive results with previous pre-training methods. For example, base-size BEiT achieves 83.2% top-1 accuracy on ImageNet-1K, significantly outperforming from-scratch DeiT training (81.8%) with the same setup. Moreover, large-size BEiT obtains 86.3% only using ImageNet-1K, even outperforming ViT-L with supervised pre-training on ImageNet-22K (85.2%). Tips: BEiT models are regular Vision Transformers, but pre-trained in a self-supervised way rather than supervised. They outperform both the original model (ViT) as well as Data-efficient Image Transformers (DeiT) when fine-tuned on ImageNet-1K and CIFAR-100. You can check out demo notebooks regarding inference as well as fine-tuning on custom data here (you can just replace [ViTFeatureExtractor] by [BeitImageProcessor] and [ViTForImageClassification] by [BeitForImageClassification]). There's also a demo notebook available which showcases how to combine DALL-E's image tokenizer with BEiT for performing masked image modeling. You can find it here. As the BEiT models expect each image to be of the same size (resolution), one can use [BeitImageProcessor] to resize (or rescale) and normalize images for the model. Both the patch resolution and image resolution used during pre-training or fine-tuning are reflected in the name of each checkpoint. For example, microsoft/beit-base-patch16-224 refers to a base-sized architecture with patch resolution of 16x16 and fine-tuning resolution of 224x224. All checkpoints can be found on the hub. The available checkpoints are either (1) pre-trained on ImageNet-22k (a collection of 14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). BEiT uses relative position embeddings, inspired by the T5 model. During pre-training, the authors shared the relative position bias among the several self-attention layers. 
During fine-tuning, each layer's relative position bias is initialized with the shared relative position bias obtained after pre-training. Note that, if one wants to pre-train a model from scratch, one needs to either set the use_relative_position_bias or the use_absolute_position_embeddings attribute of [BeitConfig] to True in order to add position embeddings. BEiT pre-training. Taken from the original paper. This model was contributed by nielsr. The JAX/FLAX version of this model was contributed by kamalkraj. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BEiT. [BeitForImageClassification] is supported by this example script and notebook. See also: Image classification task guide Semantic segmentation - Semantic segmentation task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. BEiT specific outputs [[autodoc]] models.beit.modeling_beit.BeitModelOutputWithPooling [[autodoc]] models.beit.modeling_flax_beit.FlaxBeitModelOutputWithPooling BeitConfig [[autodoc]] BeitConfig BeitFeatureExtractor [[autodoc]] BeitFeatureExtractor - call - post_process_semantic_segmentation BeitImageProcessor [[autodoc]] BeitImageProcessor - preprocess - post_process_semantic_segmentation BeitModel [[autodoc]] BeitModel - forward BeitForMaskedImageModeling [[autodoc]] BeitForMaskedImageModeling - forward BeitForImageClassification [[autodoc]] BeitForImageClassification - forward BeitForSemanticSegmentation [[autodoc]] BeitForSemanticSegmentation - forward FlaxBeitModel [[autodoc]] FlaxBeitModel - call FlaxBeitForMaskedImageModeling [[autodoc]] FlaxBeitForMaskedImageModeling - call FlaxBeitForImageClassification [[autodoc]] FlaxBeitForImageClassification - call
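The following sketch ties the classes above together for image classification, using the microsoft/beit-base-patch16-224 checkpoint mentioned in the tips; treat it as an illustrative example rather than the canonical recipe (the image URL is an arbitrary sample).

```python
import requests
import torch
from PIL import Image
from transformers import BeitImageProcessor, BeitForImageClassification

checkpoint = "microsoft/beit-base-patch16-224"
processor = BeitImageProcessor.from_pretrained(checkpoint)
model = BeitForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])
```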
WavLM Overview The WavLM model was proposed in WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. The abstract from the paper is the following: Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks. Tips: WavLM is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use [Wav2Vec2Processor] for the feature extraction. WavLM model can be fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [Wav2Vec2CTCTokenizer]. WavLM performs especially well on speaker verification, speaker identification, and speaker diarization tasks. Relevant checkpoints can be found under https://huggingface.co/models?other=wavlm. This model was contributed by patrickvonplaten. The Authors' code can be found here. Documentation resources Audio classification task guide Automatic speech recognition task guide WavLMConfig [[autodoc]] WavLMConfig WavLMModel [[autodoc]] WavLMModel - forward WavLMForCTC [[autodoc]] WavLMForCTC - forward WavLMForSequenceClassification [[autodoc]] WavLMForSequenceClassification - forward WavLMForAudioFrameClassification [[autodoc]] WavLMForAudioFrameClassification - forward WavLMForXVector [[autodoc]] WavLMForXVector - forward
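A sketch of the CTC decoding workflow described in the tips, i.e. [Wav2Vec2Processor] for feature extraction and [Wav2Vec2CTCTokenizer]-based decoding. The checkpoint name below is illustrative only; use any WavLM model fine-tuned for CTC, and note that the dummy LibriSpeech split is just a convenient 16 kHz test sample.

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, WavLMForCTC

checkpoint = "patrickvonplaten/wavlm-libri-clean-100h-base-plus"  # illustrative CTC checkpoint name
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = WavLMForCTC.from_pretrained(checkpoint)

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```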
Reformer DISCLAIMER: This model is still a work in progress, if you see something strange, file a Github Issue. Overview The Reformer model was proposed in the paper Reformer: The Efficient Transformer by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. The abstract from the paper is the following: Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L^2) to O(Llog(L)), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences. This model was contributed by patrickvonplaten. The Authors' code can be found here. Tips: Reformer does not work with torch.nn.DataParallel due to a bug in PyTorch, see issue #36035. Use Axial position encoding (see below for more details). It’s a mechanism to avoid having a huge positional encoding matrix (when the sequence length is very big) by factorizing it into smaller matrices. Replace traditional attention by LSH (locality-sensitive hashing) attention (see below for more details). It’s a technique to avoid computing the full query-key product in the attention layers. Avoid storing the intermediate results of each layer by using reversible transformer layers to obtain them during the backward pass (subtracting the residuals from the input of the next layer gives them back) or recomputing them for results inside a given layer (less efficient than storing them but saves memory). Compute the feedforward operations by chunks and not on the whole batch. Axial Positional Encodings Axial Positional Encodings were first implemented in Google's trax library and developed by the authors of this model's paper. In models that process very long input sequences, the conventional position id encodings store an embeddings vector of size \(d\) being the config.hidden_size for every position \(1, \ldots, n_s\), with \(n_s\) being config.max_embedding_size. This means that having a sequence length of \(n_s = 2^{19} \approx 0.5M\) and a config.hidden_size of \(d = 2^{10} \approx 1000\) would result in a position encoding matrix: $$X_{i,j}, \text{ with } i \in \left[1,\ldots, d\right] \text{ and } j \in \left[1,\ldots, n_s\right]$$ which alone has over 500M parameters to store.
Axial positional encodings factorize \(X_{i,j}\) into two matrices: $$X^{1}_{i,j}, \text{ with } i \in \left[1,\ldots, d^1\right] \text{ and } j \in \left[1,\ldots, n_s^1\right]$$ and $$X^{2}_{i,j}, \text{ with } i \in \left[1,\ldots, d^2\right] \text{ and } j \in \left[1,\ldots, n_s^2\right]$$ with: $$d = d^1 + d^2 \text{ and } n_s = n_s^1 \times n_s^2 .$$ Therefore the following holds: $$X_{i,j} = \begin{cases} X^{1}_{i, k}, & \text{if } i < d^1 \text{ with } k = j \mod n_s^1 \\ X^{2}_{i - d^1, l}, & \text{if } i \ge d^1 \text{ with } l = \lfloor\frac{j}{n_s^1}\rfloor \end{cases}$$ Intuitively, this means that a position embedding vector \(x_j \in \mathbb{R}^{d}\) is now the composition of two factorized embedding vectors: \(x^1_{k, l} + x^2_{l, k}\), where the config.max_embedding_size dimension \(j\) is factorized into \(k\) and \(l\). This design ensures that each position embedding vector \(x_j\) is unique. Using the above example again, axial position encoding with \(d^1 = 2^9, d^2 = 2^9, n_s^1 = 2^9, n_s^2 = 2^{10}\) can drastically reduce the number of parameters from roughly 500 000 000 to \(2^{18} + 2^{19} \approx 780\,000\), i.e. more than 99% fewer position-encoding parameters. In practice, the parameter config.axial_pos_embds_dim is set to a tuple \((d^1, d^2)\) whose sum has to be equal to config.hidden_size and config.axial_pos_shape is set to a tuple \((n_s^1, n_s^2)\) whose product has to be equal to config.max_embedding_size, which during training has to be equal to the sequence length of the input_ids (see the configuration sketch at the end of this page). LSH Self Attention In Locality sensitive hashing (LSH) self attention the key and query projection weights are tied. Therefore, the key query embedding vectors are also tied. LSH self attention uses the locality sensitive hashing mechanism proposed in Practical and Optimal LSH for Angular Distance to assign each of the tied key query embedding vectors to one of config.num_buckets possible buckets. The premise is that the more "similar" key query embedding vectors (in terms of cosine similarity) are to each other, the more likely they are assigned to the same bucket. The accuracy of the LSH mechanism can be improved by increasing config.num_hashes or directly the argument num_hashes of the forward function so that the output of the LSH self attention better approximates the output of the "normal" full self attention. The buckets are then sorted and chunked into query key embedding vector chunks each of length config.lsh_chunk_length. For each chunk, the query embedding vectors attend to its key vectors (which are tied to themselves) and to the key embedding vectors of config.lsh_num_chunks_before previous neighboring chunks and config.lsh_num_chunks_after following neighboring chunks. For more information, see the original Paper or this great blog post. Note that config.num_buckets can also be factorized into a list \((n_{\text{buckets}}^1, n_{\text{buckets}}^2)\). This way instead of assigning the query key embedding vectors to one of \((1,\ldots, n_{\text{buckets}})\) they are assigned to one of \((1-1,\ldots, n_{\text{buckets}}^1-1, \ldots, 1-n_{\text{buckets}}^2, \ldots, n_{\text{buckets}}^1-n_{\text{buckets}}^2)\). This is crucial for very long sequences to save memory. When training a model from scratch, it is recommended to leave config.num_buckets=None, so that depending on the sequence length a good value for num_buckets is calculated on the fly. This value will then automatically be saved in the config and should be reused for inference.
Using LSH self attention, the memory and time complexity of the query-key matmul operation can be reduced from \(\mathcal{O}(n_s \times n_s)\) to \(\mathcal{O}(n_s \times \log(n_s))\), which usually represents the memory and time bottleneck in a transformer model, with \(n_s\) being the sequence length. Local Self Attention Local self attention is essentially a "normal" self attention layer with key, query and value projections, but is chunked so that in each chunk of length config.local_chunk_length the query embedding vectors only attend to the key embedding vectors in their chunk and to the key embedding vectors of config.local_num_chunks_before previous neighboring chunks and config.local_num_chunks_after following neighboring chunks. Using Local self attention, the memory and time complexity of the query-key matmul operation can be reduced from \(\mathcal{O}(n_s \times n_s)\) to \(\mathcal{O}(n_s \times \log(n_s))\), which usually represents the memory and time bottleneck in a transformer model, with \(n_s\) being the sequence length. Training During training, we must ensure that the sequence length is set to a value that is divisible by the least common multiple of config.lsh_chunk_length and config.local_chunk_length and that the parameters of the Axial Positional Encodings are correctly set as described above. Reformer is very memory efficient so that the model can easily be trained on sequences as long as 64000 tokens. For training, the [ReformerModelWithLMHead] should be used as follows: python input_ids = tokenizer.encode("This is a sentence from the training data", return_tensors="pt") loss = model(input_ids, labels=input_ids)[0] Documentation resources Text classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide ReformerConfig [[autodoc]] ReformerConfig ReformerTokenizer [[autodoc]] ReformerTokenizer - save_vocabulary ReformerTokenizerFast [[autodoc]] ReformerTokenizerFast ReformerModel [[autodoc]] ReformerModel - forward ReformerModelWithLMHead [[autodoc]] ReformerModelWithLMHead - forward ReformerForMaskedLM [[autodoc]] ReformerForMaskedLM - forward ReformerForSequenceClassification [[autodoc]] ReformerForSequenceClassification - forward ReformerForQuestionAnswering [[autodoc]] ReformerForQuestionAnswering - forward
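As referenced in the Axial Positional Encodings section above, the sketch below shows how those constraints translate into a ReformerConfig. The numbers are arbitrary placeholders chosen only to satisfy the constraints (the entries of axial_pos_embds_dim sum to hidden_size, the entries of axial_pos_shape multiply to the training sequence length); they are not recommended hyperparameters.

```python
from transformers import ReformerConfig, ReformerModel

config = ReformerConfig(
    hidden_size=512,
    axial_pos_embds=True,
    axial_pos_embds_dim=(128, 384),  # 128 + 384 == 512 == hidden_size
    axial_pos_shape=(128, 512),      # 128 * 512 == 65536 == (padded) training sequence length
    max_position_embeddings=65536,
    num_buckets=None,                # computed on the fly and stored in the config, as recommended
)
model = ReformerModel(config)  # randomly initialized; train or load weights before use
```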
DeBERTa Overview The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It is based on Google's BERT model released in 2018 and Facebook's RoBERTa model released in 2019. It builds on RoBERTa with disentangled attention and enhanced mask decoder training with half of the data used in RoBERTa. The abstract from the paper is the following: Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa. This model was contributed by DeBERTa. The TF 2.0 implementation of this model was contributed by kamalkraj. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DeBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A blog post on how to Accelerate Large Model Training using DeepSpeed with DeBERTa. A blog post on Supercharged Customer Service with Machine Learning with DeBERTa. [DebertaForSequenceClassification] is supported by this example script and notebook. [TFDebertaForSequenceClassification] is supported by this example script and notebook. Text classification task guide [DebertaForTokenClassification] is supported by this example script and notebook. [TFDebertaForTokenClassification] is supported by this example script and notebook. Token classification chapter of the 🤗 Hugging Face Course. Byte-Pair Encoding tokenization chapter of the 🤗 Hugging Face Course. Token classification task guide [DebertaForMaskedLM] is supported by this example script and notebook. [TFDebertaForMaskedLM] is supported by this example script and notebook. Masked language modeling chapter of the 🤗 Hugging Face Course. Masked language modeling task guide [DebertaForQuestionAnswering] is supported by this example script and notebook. [TFDebertaForQuestionAnswering] is supported by this example script and notebook. Question answering chapter of the 🤗 Hugging Face Course. 
Question answering task guide DebertaConfig [[autodoc]] DebertaConfig DebertaTokenizer [[autodoc]] DebertaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary DebertaTokenizerFast [[autodoc]] DebertaTokenizerFast - build_inputs_with_special_tokens - create_token_type_ids_from_sequences DebertaModel [[autodoc]] DebertaModel - forward DebertaPreTrainedModel [[autodoc]] DebertaPreTrainedModel DebertaForMaskedLM [[autodoc]] DebertaForMaskedLM - forward DebertaForSequenceClassification [[autodoc]] DebertaForSequenceClassification - forward DebertaForTokenClassification [[autodoc]] DebertaForTokenClassification - forward DebertaForQuestionAnswering [[autodoc]] DebertaForQuestionAnswering - forward TFDebertaModel [[autodoc]] TFDebertaModel - call TFDebertaPreTrainedModel [[autodoc]] TFDebertaPreTrainedModel - call TFDebertaForMaskedLM [[autodoc]] TFDebertaForMaskedLM - call TFDebertaForSequenceClassification [[autodoc]] TFDebertaForSequenceClassification - call TFDebertaForTokenClassification [[autodoc]] TFDebertaForTokenClassification - call TFDebertaForQuestionAnswering [[autodoc]] TFDebertaForQuestionAnswering - call
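A short sequence-classification sketch, assuming the microsoft/deberta-base checkpoint; the classification head below is freshly initialized, so the scores are only meaningful after fine-tuning (for example with the scripts linked in the resources above).

```python
import torch
from transformers import AutoTokenizer, DebertaForSequenceClassification

checkpoint = "microsoft/deberta-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = DebertaForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

inputs = tokenizer("DeBERTa improves BERT with disentangled attention.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch_size, num_labels), head is untrained here
```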
MobileNet V2 Overview The MobileNet model was proposed in MobileNetV2: Inverted Residuals and Linear Bottlenecks by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. The abstract from the paper is the following: In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters. Tips: The checkpoints are named mobilenet_v2_depth_size, for example mobilenet_v2_1.0_224, where 1.0 is the depth multiplier (sometimes also referred to as "alpha" or the width multiplier) and 224 is the resolution of the input images the model was trained on. Even though the checkpoint is trained on images of a specific size, the model will work on images of any size. The smallest supported image size is 32x32. One can use [MobileNetV2ImageProcessor] to prepare images for the model. The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). However, the model predicts 1001 classes: the 1000 classes from ImageNet plus an extra “background” class (index 0). The segmentation model uses a DeepLabV3+ head. The available semantic segmentation checkpoints are pre-trained on PASCAL VOC. The original TensorFlow checkpoints use different padding rules than PyTorch, requiring the model to determine the padding amount at inference time, since this depends on the input image size. To use native PyTorch padding behavior, create a [MobileNetV2Config] with tf_padding = False. Unsupported features: The [MobileNetV2Model] outputs a globally pooled version of the last hidden state. In the original model it is possible to use an average pooling layer with a fixed 7x7 window and stride 1 instead of global pooling. For inputs that are larger than the recommended image size, this gives a pooled output that is larger than 1x1. The Hugging Face implementation does not support this. The original TensorFlow checkpoints include quantized models. We do not support these models as they include additional "FakeQuantization" operations to unquantize the weights. 
It's common to extract the output from the expansion layers at indices 10 and 13, as well as the output from the final 1x1 convolution layer, for downstream purposes. Using output_hidden_states=True returns the output from all intermediate layers. There is currently no way to limit this to specific layers. The DeepLabV3+ segmentation head does not use the final convolution layer from the backbone, but this layer gets computed anyway. There is currently no way to tell [MobileNetV2Model] up to which layer it should run. This model was contributed by matthijs. The original code and weights can be found here for the main model and here for DeepLabV3+. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileNetV2. [MobileNetV2ForImageClassification] is supported by this example script and notebook. See also: Image classification task guide Semantic segmentation - Semantic segmentation task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. MobileNetV2Config [[autodoc]] MobileNetV2Config MobileNetV2FeatureExtractor [[autodoc]] MobileNetV2FeatureExtractor - preprocess - post_process_semantic_segmentation MobileNetV2ImageProcessor [[autodoc]] MobileNetV2ImageProcessor - preprocess - post_process_semantic_segmentation MobileNetV2Model [[autodoc]] MobileNetV2Model - forward MobileNetV2ForImageClassification [[autodoc]] MobileNetV2ForImageClassification - forward MobileNetV2ForSemanticSegmentation [[autodoc]] MobileNetV2ForSemanticSegmentation - forward
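A hedged semantic-segmentation sketch using the DeepLabV3+ head mentioned above; the checkpoint name google/deeplabv3_mobilenet_v2_1.0_513 and the image URL are assumptions, and post_process_semantic_segmentation resizes the logits back to the input resolution.

```python
import requests
import torch
from PIL import Image
from transformers import MobileNetV2ImageProcessor, MobileNetV2ForSemanticSegmentation

checkpoint = "google/deeplabv3_mobilenet_v2_1.0_513"  # assumed PASCAL VOC checkpoint name
processor = MobileNetV2ImageProcessor.from_pretrained(checkpoint)
model = MobileNetV2ForSemanticSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# per-pixel class indices at the original image resolution
segmentation = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```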
RegNet Overview The RegNet model was proposed in Designing Network Design Spaces by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár. The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space. The abstract from the paper is the following: In this work, we present a new network design paradigm. Our goal is to help advance the understanding of network design and discover design principles that generalize across settings. Instead of focusing on designing individual network instances, we design network design spaces that parametrize populations of networks. The overall process is analogous to classic manual design of networks, but elevated to the design space level. Using our methodology we explore the structure aspect of network design and arrive at a low-dimensional design space consisting of simple, regular networks that we call RegNet. The core insight of the RegNet parametrization is surprisingly simple: widths and depths of good networks can be explained by a quantized linear function. We analyze the RegNet design space and arrive at interesting findings that do not match the current practice of network design. The RegNet design space provides simple and fast networks that work well across a wide range of flop regimes. Under comparable training settings and flops, the RegNet models outperform the popular EfficientNet models while being up to 5x faster on GPUs. Tips: One can use [AutoImageProcessor] to prepare images for the model. The huge 10B model from Self-supervised Pretraining of Visual Features in the Wild, trained on one billion Instagram images, is available on the hub. This model was contributed by Francesco. The TensorFlow version of the model was contributed by sayakpaul and ariG23498. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with RegNet. [RegNetForImageClassification] is supported by this example script and notebook. See also: Image classification task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. RegNetConfig [[autodoc]] RegNetConfig RegNetModel [[autodoc]] RegNetModel - forward RegNetForImageClassification [[autodoc]] RegNetForImageClassification - forward TFRegNetModel [[autodoc]] TFRegNetModel - call TFRegNetForImageClassification [[autodoc]] TFRegNetForImageClassification - call FlaxRegNetModel [[autodoc]] FlaxRegNetModel - call FlaxRegNetForImageClassification [[autodoc]] FlaxRegNetForImageClassification - call
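A very small inference sketch via the pipeline API; the checkpoint name facebook/regnet-y-040 and the image URL are assumptions, and the image processor is resolved automatically by the pipeline as noted in the tips above.

```python
from transformers import pipeline

# assumed ImageNet classification checkpoint name
classifier = pipeline("image-classification", model="facebook/regnet-y-040")
print(classifier("http://images.cocodataset.org/val2017/000000039769.jpg"))
```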
MobileBERT Overview The MobileBERT model was proposed in MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. It's a bidirectional transformer based on the BERT model, which is compressed and accelerated using several approaches. The abstract from the paper is the following: Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained models with hundreds of millions of parameters. However, these models suffer from heavy model sizes and high latency such that they cannot be deployed to resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model. Like the original BERT, MobileBERT is task-agnostic, that is, it can be generically applied to various downstream NLP tasks via simple fine-tuning. Basically, MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERT_LARGE model. Then, we conduct knowledge transfer from this teacher to MobileBERT. Empirical studies show that MobileBERT is 4.3x smaller and 5.5x faster than BERT_BASE while achieving competitive results on well-known benchmarks. On the natural language inference tasks of GLUE, MobileBERT achieves a GLUE score of 77.7 (0.6 lower than BERT_BASE), and 62 ms latency on a Pixel 4 phone. On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a dev F1 score of 90.0/79.2 (1.5/2.1 higher than BERT_BASE). Tips: MobileBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. MobileBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained with a causal language modeling (CLM) objective are better in that regard. This model was contributed by vshampor. The original code can be found here. 
Documentation resources Text classification task guide Token classification task guide Question answering task guide Masked language modeling task guide Multiple choice task guide MobileBertConfig [[autodoc]] MobileBertConfig MobileBertTokenizer [[autodoc]] MobileBertTokenizer MobileBertTokenizerFast [[autodoc]] MobileBertTokenizerFast MobileBert specific outputs [[autodoc]] models.mobilebert.modeling_mobilebert.MobileBertForPreTrainingOutput [[autodoc]] models.mobilebert.modeling_tf_mobilebert.TFMobileBertForPreTrainingOutput MobileBertModel [[autodoc]] MobileBertModel - forward MobileBertForPreTraining [[autodoc]] MobileBertForPreTraining - forward MobileBertForMaskedLM [[autodoc]] MobileBertForMaskedLM - forward MobileBertForNextSentencePrediction [[autodoc]] MobileBertForNextSentencePrediction - forward MobileBertForSequenceClassification [[autodoc]] MobileBertForSequenceClassification - forward MobileBertForMultipleChoice [[autodoc]] MobileBertForMultipleChoice - forward MobileBertForTokenClassification [[autodoc]] MobileBertForTokenClassification - forward MobileBertForQuestionAnswering [[autodoc]] MobileBertForQuestionAnswering - forward TFMobileBertModel [[autodoc]] TFMobileBertModel - call TFMobileBertForPreTraining [[autodoc]] TFMobileBertForPreTraining - call TFMobileBertForMaskedLM [[autodoc]] TFMobileBertForMaskedLM - call TFMobileBertForNextSentencePrediction [[autodoc]] TFMobileBertForNextSentencePrediction - call TFMobileBertForSequenceClassification [[autodoc]] TFMobileBertForSequenceClassification - call TFMobileBertForMultipleChoice [[autodoc]] TFMobileBertForMultipleChoice - call TFMobileBertForTokenClassification [[autodoc]] TFMobileBertForTokenClassification - call TFMobileBertForQuestionAnswering [[autodoc]] TFMobileBertForQuestionAnswering - call
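Since MobileBERT is trained with the MLM objective described above, a fill-mask pipeline is the quickest sanity check; google/mobilebert-uncased is the standard pre-trained checkpoint on the Hub (still listed here as an assumption).

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="google/mobilebert-uncased")  # assumed checkpoint name
print(fill_mask("MobileBERT runs efficiently on [MASK] devices."))
```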
DETR Overview The DETR model was proposed in End-to-End Object Detection with Transformers by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov and Sergey Zagoruyko. DETR consists of a convolutional backbone followed by an encoder-decoder Transformer which can be trained end-to-end for object detection. It greatly simplifies a lot of the complexity of models like Faster-R-CNN and Mask-R-CNN, which use things like region proposals, non-maximum suppression procedure and anchor generation. Moreover, DETR can also be naturally extended to perform panoptic segmentation, by simply adding a mask head on top of the decoder outputs. The abstract from the paper is the following: We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. The new model is conceptually simple and does not require a specialized library, unlike many other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive baselines. This model was contributed by nielsr. The original code can be found here. Here's a TLDR explaining how [~transformers.DetrForObjectDetection] works: First, an image is sent through a pre-trained convolutional backbone (in the paper, the authors use ResNet-50/ResNet-101). Let's assume we also add a batch dimension. This means that the input to the backbone is a tensor of shape (batch_size, 3, height, width), assuming the image has 3 color channels (RGB). The CNN backbone outputs a new lower-resolution feature map, typically of shape (batch_size, 2048, height/32, width/32). This is then projected to match the hidden dimension of the Transformer of DETR, which is 256 by default, using a nn.Conv2D layer. So now, we have a tensor of shape (batch_size, 256, height/32, width/32). Next, the feature map is flattened and transposed to obtain a tensor of shape (batch_size, seq_len, d_model) = (batch_size, width/32*height/32, 256). So a difference with NLP models is that the sequence length is actually longer than usual, but with a smaller d_model (which in NLP is typically 768 or higher). Next, this is sent through the encoder, outputting encoder_hidden_states of the same shape (you can consider these as image features). Next, so-called object queries are sent through the decoder. This is a tensor of shape (batch_size, num_queries, d_model), with num_queries typically set to 100 and initialized with zeros. These input embeddings are learnt positional encodings that the authors refer to as object queries, and similarly to the encoder, they are added to the input of each attention layer. 
Each object query will look for a particular object in the image. The decoder updates these embeddings through multiple self-attention and encoder-decoder attention layers to output decoder_hidden_states of the same shape: (batch_size, num_queries, d_model). Next, two heads are added on top for object detection: a linear layer for classifying each object query into one of the objects or "no object", and a MLP to predict bounding boxes for each query. The model is trained using a bipartite matching loss: so what we actually do is compare the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to find an optimal one-to-one mapping of each of the N queries to each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. DETR can be naturally extended to perform panoptic segmentation (which unifies semantic segmentation and instance segmentation). [~transformers.DetrForSegmentation] adds a segmentation mask head on top of [~transformers.DetrForObjectDetection]. The mask head can be trained either jointly, or in a two steps process, where one first trains a [~transformers.DetrForObjectDetection] model to detect bounding boxes around both "things" (instances) and "stuff" (background things like trees, roads, sky), then freeze all the weights and train only the mask head for 25 epochs. Experimentally, these two approaches give similar results. Note that predicting boxes is required for the training to be possible, since the Hungarian matching is computed using distances between boxes. Tips: DETR uses so-called object queries to detect objects in an image. The number of queries determines the maximum number of objects that can be detected in a single image, and is set to 100 by default (see parameter num_queries of [~transformers.DetrConfig]). Note that it's good to have some slack (in COCO, the authors used 100, while the maximum number of objects in a COCO image is ~70). The decoder of DETR updates the query embeddings in parallel. This is different from language models like GPT-2, which use autoregressive decoding instead of parallel. Hence, no causal attention mask is used. DETR adds position embeddings to the hidden states at each self-attention and cross-attention layer before projecting to queries and keys. For the position embeddings of the image, one can choose between fixed sinusoidal or learned absolute position embeddings. By default, the parameter position_embedding_type of [~transformers.DetrConfig] is set to "sine". During training, the authors of DETR did find it helpful to use auxiliary losses in the decoder, especially to help the model output the correct number of objects of each class. If you set the parameter auxiliary_loss of [~transformers.DetrConfig] to True, then prediction feedforward neural networks and Hungarian losses are added after each decoder layer (with the FFNs sharing parameters). If you want to train the model in a distributed environment across multiple nodes, then one should update the num_boxes variable in the DetrLoss class of modeling_detr.py. 
When training on multiple nodes, this should be set to the average number of target boxes across all nodes, as can be seen in the original implementation here. [~transformers.DetrForObjectDetection] and [~transformers.DetrForSegmentation] can be initialized with any convolutional backbone available in the timm library. Initializing with a MobileNet backbone, for example, can be done by setting the backbone attribute of [~transformers.DetrConfig] to "tf_mobilenetv3_small_075", and then initializing the model with that config. DETR resizes the input images such that the shortest side is at least a certain number of pixels while the longest is at most 1333 pixels. At training time, scale augmentation is used such that the shortest side is randomly set to at least 480 and at most 800 pixels. At inference time, the shortest side is set to 800. One can use [~transformers.DetrImageProcessor] to prepare images (and optional annotations in COCO format) for the model. Due to this resizing, images in a batch can have different sizes. DETR solves this by padding images up to the largest size in a batch, and by creating a pixel mask that indicates which pixels are real/which are padding. Alternatively, one can also define a custom collate_fn in order to batch images together, using [~transformers.DetrImageProcessor.pad_and_create_pixel_mask]. The size of the images will determine the amount of memory being used, and will thus determine the batch_size. It is advised to use a batch size of 2 per GPU. See this Github thread for more info. There are three ways to instantiate a DETR model (depending on what you prefer): Option 1: Instantiate DETR with pre-trained weights for the entire model from transformers import DetrForObjectDetection model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50") Option 2: Instantiate DETR with randomly initialized weights for the Transformer, but pre-trained weights for the backbone from transformers import DetrConfig, DetrForObjectDetection config = DetrConfig() model = DetrForObjectDetection(config) Option 3: Instantiate DETR with randomly initialized weights for backbone + Transformer config = DetrConfig(use_pretrained_backbone=False) model = DetrForObjectDetection(config) As a summary, consider the following table:

| Task | Object detection | Instance segmentation | Panoptic segmentation |
|------|------------------|-----------------------|-----------------------|
| Description | Predicting bounding boxes and class labels around objects in an image | Predicting masks around objects (i.e. instances) in an image | Predicting masks around both objects (i.e. instances) as well as "stuff" (i.e. background things like trees and roads) in an image |
| Model | [~transformers.DetrForObjectDetection] | [~transformers.DetrForSegmentation] | [~transformers.DetrForSegmentation] |
| Example dataset | COCO detection | COCO detection, COCO panoptic | COCO panoptic |
| Format of annotations to provide to [~transformers.DetrImageProcessor] | {'image_id': int, 'annotations': List[Dict]}, each Dict being a COCO object annotation | {'image_id': int, 'annotations': List[Dict]} (in case of COCO detection) or {'file_name': str, 'image_id': int, 'segments_info': List[Dict]} (in case of COCO panoptic) | {'file_name': str, 'image_id': int, 'segments_info': List[Dict]} and masks_path (path to directory containing PNG files of the masks) |
| Postprocessing (i.e. converting the output of the model to COCO API) | [~transformers.DetrImageProcessor.post_process] | [~transformers.DetrImageProcessor.post_process_segmentation] | [~transformers.DetrImageProcessor.post_process_segmentation], [~transformers.DetrImageProcessor.post_process_panoptic] |
| Evaluators | CocoEvaluator with iou_types="bbox" | CocoEvaluator with iou_types="bbox" or "segm" | CocoEvaluator with iou_types="bbox" or "segm", PanopticEvaluator |

In short, one should prepare the data either in COCO detection or COCO panoptic format, then use [~transformers.DetrImageProcessor] to create pixel_values, pixel_mask and optional labels, which can then be used to train (or fine-tune) a model. For evaluation, one should first convert the outputs of the model using one of the postprocessing methods of [~transformers.DetrImageProcessor]. These can be provided to either CocoEvaluator or PanopticEvaluator, which allow you to calculate metrics like mean Average Precision (mAP) and Panoptic Quality (PQ). The latter objects are implemented in the original repository. See the example notebooks for more info regarding evaluation, and the minimal inference sketch at the end of this page. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETR. All example notebooks illustrating fine-tuning [DetrForObjectDetection] and [DetrForSegmentation] on a custom dataset can be found here. See also: Object detection task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. DETR specific outputs [[autodoc]] models.detr.modeling_detr.DetrModelOutput [[autodoc]] models.detr.modeling_detr.DetrObjectDetectionOutput [[autodoc]] models.detr.modeling_detr.DetrSegmentationOutput DetrConfig [[autodoc]] DetrConfig DetrImageProcessor [[autodoc]] DetrImageProcessor - preprocess - post_process_object_detection - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation DetrFeatureExtractor [[autodoc]] DetrFeatureExtractor - call - post_process_object_detection - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation DetrModel [[autodoc]] DetrModel - forward DetrForObjectDetection [[autodoc]] DetrForObjectDetection - forward DetrForSegmentation [[autodoc]] DetrForSegmentation - forward
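As a complement to the table above, the preparation and postprocessing workflow can be sketched end to end for object detection. This is a minimal, illustrative example; the checkpoint, test image URL and confidence threshold are arbitrary choices, not requirements of the API:

```python
import requests
import torch
from PIL import Image
from transformers import DetrForObjectDetection, DetrImageProcessor

# any RGB image works; this COCO validation image is just an example
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

# creates pixel_values and pixel_mask (padding + mask as described above)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# convert the raw logits/boxes to COCO API format, keeping confident detections only
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 1) for c in box.tolist()])
```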
NLLB-MOE Overview The NLLB model was presented in No Language Left Behind: Scaling Human-Centered Machine Translation by Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. The abstract of the paper is the following: Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system. Tips: M2M100ForConditionalGeneration is the base model for both NLLB and NLLB-MoE. The NLLB-MoE is very similar to the NLLB model, but its feed-forward layer is based on the implementation of SwitchTransformers. The tokenizer is the same as for the NLLB models. This model was contributed by Arthur Zucker. The original code can be found here. Implementation differences with SwitchTransformers The biggest difference is the way the tokens are routed. NLLB-MoE uses a top-2 gate, which means that for each input, only the two experts with the highest predicted probabilities from the gating network are selected, and the remaining experts are ignored. In SwitchTransformers, only the top-1 probabilities are computed, which means that tokens have a lower probability of being forwarded. Moreover, if a token is not routed to any expert, SwitchTransformers still adds its unmodified hidden states (kind of like a residual connection), while they are masked in NLLB's top-2 routing mechanism. Generating with NLLB-MoE The available checkpoints require around 350GB of storage. Make sure to use accelerate if you do not have enough RAM on your machine. While generating the target text, set forced_bos_token_id to the target language id. 
The following example shows how to translate English to French using the facebook/nllb-moe-54b model. Note that we're using the BCP-47 code for French, fra_Latn. See here for the list of all BCP-47 codes in the Flores 200 dataset. from transformers import AutoModelForSeq2SeqLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b") model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-moe-54b") article = "Previously, Ring's CEO, Jamie Siminoff, remarked the company started when his doorbell wasn't audible from his shop in his garage." inputs = tokenizer(article, return_tensors="pt") translated_tokens = model.generate( **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"], max_length=50 ) tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] "Auparavant, le PDG de Ring, Jamie Siminoff, a fait remarquer que la société avait commencé lorsque sa sonnette n'était pas audible depuis son magasin dans son garage." Generating from a language other than English English (eng_Latn) is set as the default language from which to translate. In order to translate from a different language, you should pass its BCP-47 code to the src_lang keyword argument when initializing the tokenizer. See the example below for a translation from Romanian to German: from transformers import AutoModelForSeq2SeqLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b", src_lang="ron_Latn") model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-moe-54b") article = "Şeful ONU spune că nu există o soluţie militară în Siria" inputs = tokenizer(article, return_tensors="pt") translated_tokens = model.generate( **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["deu_Latn"], max_length=30 ) tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] Documentation resources Translation task guide Summarization task guide NllbMoeConfig [[autodoc]] NllbMoeConfig NllbMoeTop2Router [[autodoc]] NllbMoeTop2Router - route_tokens - forward NllbMoeSparseMLP [[autodoc]] NllbMoeSparseMLP - forward NllbMoeModel [[autodoc]] NllbMoeModel - forward NllbMoeForConditionalGeneration [[autodoc]] NllbMoeForConditionalGeneration - forward
Trajectory Transformer This model is in maintenance mode only, so we won't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: pip install -U transformers==4.30.0. Overview The Trajectory Transformer model was proposed in Offline Reinforcement Learning as One Big Sequence Modeling Problem by Michael Janner, Qiyang Li, Sergey Levine. The abstract from the paper is the following: Reinforcement learning (RL) is typically concerned with estimating stationary policies or single-step models, leveraging the Markov property to factorize problems in time. However, we can also view RL as a generic sequence modeling problem, with the goal being to produce a sequence of actions that leads to a sequence of high rewards. Viewed in this way, it is tempting to consider whether high-capacity sequence prediction models that work well in other domains, such as natural-language processing, can also provide effective solutions to the RL problem. To this end, we explore how RL can be tackled with the tools of sequence modeling, using a Transformer architecture to model distributions over trajectories and repurposing beam search as a planning algorithm. Framing RL as sequence modeling problem simplifies a range of design decisions, allowing us to dispense with many of the components common in offline RL algorithms. We demonstrate the flexibility of this approach across long-horizon dynamics prediction, imitation learning, goal-conditioned RL, and offline RL. Further, we show that this approach can be combined with existing model-free algorithms to yield a state-of-the-art planner in sparse-reward, long-horizon tasks. Tips: This Transformer is used for deep reinforcement learning. To use it, you need to create sequences from actions, states and rewards from all previous timesteps. This model will treat all these elements together as one big sequence (a trajectory). This model was contributed by CarlCochet. The original code can be found here. TrajectoryTransformerConfig [[autodoc]] TrajectoryTransformerConfig TrajectoryTransformerModel [[autodoc]] TrajectoryTransformerModel - forward
GPT Neo Overview The GPTNeo model was released in the EleutherAI/gpt-neo repository by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. It is a GPT2 like causal language model trained on the Pile dataset. The architecture is similar to GPT2 except that GPT Neo uses local attention in every other layer with a window size of 256 tokens. This model was contributed by valhalla. Generation The generate() method can be used to generate text using GPT Neo model. thon from transformers import GPTNeoForCausalLM, GPT2Tokenizer model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B") tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B") prompt = ( "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " "previously unexplored valley, in the Andes Mountains. Even more surprising to the " "researchers was the fact that the unicorns spoke perfect English." ) input_ids = tokenizer(prompt, return_tensors="pt").input_ids gen_tokens = model.generate( input_ids, do_sample=True, temperature=0.9, max_length=100, ) gen_text = tokenizer.batch_decode(gen_tokens)[0] Documentation resources Text classification task guide Causal language modeling task guide GPTNeoConfig [[autodoc]] GPTNeoConfig GPTNeoModel [[autodoc]] GPTNeoModel - forward GPTNeoForCausalLM [[autodoc]] GPTNeoForCausalLM - forward GPTNeoForQuestionAnswering [[autodoc]] GPTNeoForQuestionAnswering - forward GPTNeoForSequenceClassification [[autodoc]] GPTNeoForSequenceClassification - forward GPTNeoForTokenClassification [[autodoc]] GPTNeoForTokenClassification - forward FlaxGPTNeoModel [[autodoc]] FlaxGPTNeoModel - call FlaxGPTNeoForCausalLM [[autodoc]] FlaxGPTNeoForCausalLM - call
T5v1.1 Overview T5v1.1 was released in the google-research/text-to-text-transfer-transformer repository by Colin Raffel et al. It's an improved version of the original T5 model. One can directly plug the weights of T5v1.1 into a T5 model, like so: from transformers import T5ForConditionalGeneration model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base") T5 Version 1.1 includes the following improvements compared to the original T5 model: GEGLU activation in the feed-forward hidden layer, rather than ReLU. See this paper. Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning. Pre-trained on C4 only, without mixing in the downstream tasks. No parameter sharing between the embedding and classifier layer. "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different: larger d_model and smaller num_heads and d_ff. Note: T5 Version 1.1 was only pre-trained on C4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model. Since T5v1.1 was pre-trained in an unsupervised fashion, there's no real advantage to using a task prefix during single-task fine-tuning (a minimal fine-tuning sketch is shown at the end of this page). If you are doing multi-task fine-tuning, you should use a prefix. Google has released the following variants: google/t5-v1_1-small google/t5-v1_1-base google/t5-v1_1-large google/t5-v1_1-xl google/t5-v1_1-xxl. One can refer to T5's documentation page for all tips, code examples and notebooks. This model was contributed by patrickvonplaten. The original code can be found here.
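To make the "no task prefix" point above concrete, here is a minimal sketch of a single supervised fine-tuning step. The input/target pair and the choice of the base-sized checkpoint are illustrative only:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-base")
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")

# note: no "summarize:" / "translate ...:" prefix is needed for single-task fine-tuning
inputs = tokenizer(
    "The tower is 324 metres tall, about the same height as an 81-storey building.",
    return_tensors="pt",
)
labels = tokenizer("The tower is about as tall as an 81-storey building.", return_tensors="pt").input_ids

outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # plug into your optimizer or the Trainer from here
```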
LayoutLMv3 Overview The LayoutLMv3 model was proposed in LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. LayoutLMv3 simplifies LayoutLMv2 by using patch embeddings (as in ViT) instead of leveraging a CNN backbone, and pre-trains the model on 3 objectives: masked language modeling (MLM), masked image modeling (MIM) and word-patch alignment (WPA). The abstract from the paper is the following: Self-supervised pre-training techniques have achieved remarkable progress in Document AI. Most multimodal pre-trained models use a masked language modeling objective to learn bidirectional representations on the text modality, but they differ in pre-training objectives for the image modality. This discrepancy adds difficulty to multimodal representation learning. In this paper, we propose LayoutLMv3 to pre-train multimodal Transformers for Document AI with unified text and image masking. Additionally, LayoutLMv3 is pre-trained with a word-patch alignment objective to learn cross-modal alignment by predicting whether the corresponding image patch of a text word is masked. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model for both text-centric and image-centric Document AI tasks. Experimental results show that LayoutLMv3 achieves state-of-the-art performance not only in text-centric tasks, including form understanding, receipt understanding, and document visual question answering, but also in image-centric tasks such as document image classification and document layout analysis. Tips: In terms of data processing, LayoutLMv3 is identical to its predecessor LayoutLMv2, except that: images need to be resized and normalized with channels in regular RGB format. LayoutLMv2 on the other hand normalizes the images internally and expects the channels in BGR format. text is tokenized using byte-pair encoding (BPE), as opposed to WordPiece. Due to these differences in data preprocessing, one can use [LayoutLMv3Processor] which internally combines a [LayoutLMv3ImageProcessor] (for the image modality) and a [LayoutLMv3Tokenizer]/[LayoutLMv3TokenizerFast] (for the text modality) to prepare all data for the model. Regarding usage of [LayoutLMv3Processor], we refer to the usage guide of its predecessor. Demo notebooks for LayoutLMv3 can be found here. Demo scripts can be found here. LayoutLMv3 architecture. Taken from the original paper. This model was contributed by nielsr. The TensorFlow version of this model was added by chriskoo, tokec, and lre. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLMv3. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. LayoutLMv3 is nearly identical to LayoutLMv2, so we've also included LayoutLMv2 resources you can adapt for LayoutLMv3 tasks. For these notebooks, take care to use [LayoutLMv2Processor] instead when preparing data for the model! [LayoutLMv2ForSequenceClassification] is supported by this notebook. Text classification task guide [LayoutLMv3ForTokenClassification] is supported by this example script and notebook. 
A notebook for how to perform inference with [LayoutLMv2ForTokenClassification] and a notebook for how to perform inference when no labels are available with [LayoutLMv2ForTokenClassification]. A notebook for how to finetune [LayoutLMv2ForTokenClassification] with the 🤗 Trainer. Token classification task guide [LayoutLMv2ForQuestionAnswering] is supported by this notebook. Question answering task guide Document question answering - Document question answering task guide LayoutLMv3Config [[autodoc]] LayoutLMv3Config LayoutLMv3FeatureExtractor [[autodoc]] LayoutLMv3FeatureExtractor - call LayoutLMv3ImageProcessor [[autodoc]] LayoutLMv3ImageProcessor - preprocess LayoutLMv3Tokenizer [[autodoc]] LayoutLMv3Tokenizer - call - save_vocabulary LayoutLMv3TokenizerFast [[autodoc]] LayoutLMv3TokenizerFast - call LayoutLMv3Processor [[autodoc]] LayoutLMv3Processor - call LayoutLMv3Model [[autodoc]] LayoutLMv3Model - forward LayoutLMv3ForSequenceClassification [[autodoc]] LayoutLMv3ForSequenceClassification - forward LayoutLMv3ForTokenClassification [[autodoc]] LayoutLMv3ForTokenClassification - forward LayoutLMv3ForQuestionAnswering [[autodoc]] LayoutLMv3ForQuestionAnswering - forward TFLayoutLMv3Model [[autodoc]] TFLayoutLMv3Model - call TFLayoutLMv3ForSequenceClassification [[autodoc]] TFLayoutLMv3ForSequenceClassification - call TFLayoutLMv3ForTokenClassification [[autodoc]] TFLayoutLMv3ForTokenClassification - call TFLayoutLMv3ForQuestionAnswering [[autodoc]] TFLayoutLMv3ForQuestionAnswering - call
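To complement the resources above, here is a minimal sketch of preparing inputs with [LayoutLMv3Processor] when you supply your own OCR results, following the tips about RGB images and normalized boxes. The checkpoint, the number of labels and the dummy document/words/boxes are illustrative assumptions:

```python
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

# apply_ocr=False because we provide our own words and (0-1000 normalized) boxes
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = LayoutLMv3ForTokenClassification.from_pretrained("microsoft/layoutlmv3-base", num_labels=2)

image = Image.open("document.png").convert("RGB")  # hypothetical document image
words = ["hello", "world"]
boxes = [[10, 20, 110, 60], [120, 20, 230, 60]]
word_labels = [0, 1]

encoding = processor(image, words, boxes=boxes, word_labels=word_labels, return_tensors="pt")
outputs = model(**encoding)
loss, logits = outputs.loss, outputs.logits  # token-level loss and per-token label scores
```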
LXMERT Overview The LXMERT model was proposed in LXMERT: Learning Cross-Modality Encoder Representations from Transformers by Hao Tan & Mohit Bansal. It is a series of bidirectional transformer encoders (one for the vision modality, one for the language modality, and then one to fuse both modalities) pretrained using a combination of masked language modeling, visual-language text alignment, ROI-feature regression, masked visual-attribute modeling, masked visual-object modeling, and visual-question answering objectives. The pretraining consists of multiple multi-modal datasets: MSCOCO, Visual-Genome + Visual-Genome Question Answering, VQA 2.0, and GQA. The abstract from the paper is the following: Vision-and-language reasoning requires an understanding of visual concepts, language semantics, and, most importantly, the alignment and relationships between these two modalities. We thus propose the LXMERT (Learning Cross-Modality Encoder Representations from Transformers) framework to learn these vision-and-language connections. In LXMERT, we build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. Next, to endow our model with the capability of connecting vision and language semantics, we pre-train the model with large amounts of image-and-sentence pairs, via five diverse representative pretraining tasks: masked language modeling, masked object prediction (feature regression and label classification), cross-modality matching, and image question answering. These tasks help in learning both intra-modality and cross-modality relationships. After fine-tuning from our pretrained parameters, our model achieves the state-of-the-art results on two visual question answering datasets (i.e., VQA and GQA). We also show the generalizability of our pretrained cross-modality model by adapting it to a challenging visual-reasoning task, NLVR, and improve the previous best result by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies to prove that both our novel model components and pretraining strategies significantly contribute to our strong results; and also present several attention visualizations for the different encoders Tips: Bounding boxes are not necessary to be used in the visual feature embeddings, any kind of visual-spacial features will work. Both the language hidden states and the visual hidden states that LXMERT outputs are passed through the cross-modality layer, so they contain information from both modalities. To access a modality that only attends to itself, select the vision/language hidden states from the first input in the tuple. The bidirectional cross-modality encoder attention only returns attention values when the language modality is used as the input and the vision modality is used as the context vector. Further, while the cross-modality encoder contains self-attention for each respective modality and cross-attention, only the cross attention is returned and both self attention outputs are disregarded. This model was contributed by eltoto1219. The original code can be found here. 
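The tips above can be made concrete with a small forward-pass sketch. LXMERT expects pre-extracted region-of-interest features and normalized box coordinates from an external detector (e.g. a Faster R-CNN); the random tensors below only stand in for such detector outputs, and the checkpoint name is the commonly used unc-nlp release:

```python
import torch
from transformers import LxmertModel, LxmertTokenizer

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

inputs = tokenizer("How many cats are in the picture?", return_tensors="pt")

# stand-ins for detector outputs: 36 region features (2048-d) and their normalized boxes
visual_feats = torch.randn(1, 36, 2048)
visual_pos = torch.rand(1, 36, 4)

outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
language_hidden = outputs.language_output  # text tokens after the cross-modality layers
vision_hidden = outputs.vision_output      # region features after the cross-modality layers
pooled = outputs.pooled_output             # [CLS]-style cross-modal summary
```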
Documentation resources Question answering task guide LxmertConfig [[autodoc]] LxmertConfig LxmertTokenizer [[autodoc]] LxmertTokenizer LxmertTokenizerFast [[autodoc]] LxmertTokenizerFast Lxmert specific outputs [[autodoc]] models.lxmert.modeling_lxmert.LxmertModelOutput [[autodoc]] models.lxmert.modeling_lxmert.LxmertForPreTrainingOutput [[autodoc]] models.lxmert.modeling_lxmert.LxmertForQuestionAnsweringOutput [[autodoc]] models.lxmert.modeling_tf_lxmert.TFLxmertModelOutput [[autodoc]] models.lxmert.modeling_tf_lxmert.TFLxmertForPreTrainingOutput LxmertModel [[autodoc]] LxmertModel - forward LxmertForPreTraining [[autodoc]] LxmertForPreTraining - forward LxmertForQuestionAnswering [[autodoc]] LxmertForQuestionAnswering - forward TFLxmertModel [[autodoc]] TFLxmertModel - call TFLxmertForPreTraining [[autodoc]] TFLxmertForPreTraining - call
LayoutLMV2 Overview The LayoutLMV2 model was proposed in LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. LayoutLMV2 improves LayoutLM to obtain state-of-the-art results across several document image understanding benchmarks: information extraction from scanned documents: the FUNSD dataset (a collection of 199 annotated forms comprising more than 30,000 words), the CORD dataset (a collection of 800 receipts for training, 100 for validation and 100 for testing), the SROIE dataset (a collection of 626 receipts for training and 347 receipts for testing) and the Kleister-NDA dataset (a collection of non-disclosure agreements from the EDGAR database, including 254 documents for training, 83 documents for validation, and 203 documents for testing). document image classification: the RVL-CDIP dataset (a collection of 400,000 images belonging to one of 16 classes). document visual question answering: the DocVQA dataset (a collection of 50,000 questions defined on 12,000+ document images). The abstract from the paper is the following: Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents. In this paper, we present LayoutLMv2 by pre-training text, layout and image in a multi-modal framework, where new model architectures and pre-training tasks are leveraged. Specifically, LayoutLMv2 not only uses the existing masked visual-language modeling task but also the new text-image alignment and text-image matching tasks in the pre-training stage, where cross-modality interaction is better learned. Meanwhile, it also integrates a spatial-aware self-attention mechanism into the Transformer architecture, so that the model can fully understand the relative positional relationship among different text blocks. Experiment results show that LayoutLMv2 outperforms strong baselines and achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks, including FUNSD (0.7895 -> 0.8420), CORD (0.9493 -> 0.9601), SROIE (0.9524 -> 0.9781), Kleister-NDA (0.834 -> 0.852), RVL-CDIP (0.9443 -> 0.9564), and DocVQA (0.7295 -> 0.8672). The pre-trained LayoutLMv2 model is publicly available at this https URL. LayoutLMv2 depends on detectron2, torchvision and tesseract. Run the following to install them: python -m pip install 'git+https://github.com/facebookresearch/detectron2.git' python -m pip install torchvision tesseract (If you are developing for LayoutLMv2, note that passing the doctests also requires the installation of these packages.) Tips: The main difference between LayoutLMv1 and LayoutLMv2 is that the latter incorporates visual embeddings during pre-training (while LayoutLMv1 only adds visual embeddings during fine-tuning). LayoutLMv2 adds both a relative 1D attention bias as well as a spatial 2D attention bias to the attention scores in the self-attention layers. Details can be found on page 5 of the paper. Demo notebooks on how to use the LayoutLMv2 model on RVL-CDIP, FUNSD, DocVQA, CORD can be found here. LayoutLMv2 uses Facebook AI's Detectron2 package for its visual backbone. See this link for installation instructions. In addition to input_ids, [~LayoutLMv2Model.forward] expects 2 additional inputs, namely image and bbox. 
The image input corresponds to the original document image in which the text tokens occur. The model expects each document image to be of size 224x224. This means that if you have a batch of document images, image should be a tensor of shape (batch_size, 3, 224, 224). This can be either a torch.Tensor or a Detectron2.structures.ImageList. You don't need to normalize the channels, as this is done by the model. Note that the visual backbone expects BGR channels instead of RGB, as all models in Detectron2 are pre-trained using the BGR format. The bbox input contains the bounding boxes (i.e. 2D positions) of the input text tokens. This is identical to [LayoutLMModel]. These can be obtained using an external OCR engine such as Google's Tesseract (there's a Python wrapper available). Each bounding box should be in (x0, y0, x1, y1) format, where (x0, y0) corresponds to the position of the upper left corner of the bounding box, and (x1, y1) represents the position of the lower right corner. Note that one first needs to normalize the bounding boxes to be on a 0-1000 scale. To normalize, you can use the following function: def normalize_bbox(bbox, width, height): return [ int(1000 * (bbox[0] / width)), int(1000 * (bbox[1] / height)), int(1000 * (bbox[2] / width)), int(1000 * (bbox[3] / height)), ] Here, width and height correspond to the width and height of the original document in which the token occurs (before resizing the image). Those can be obtained using the Python Imaging Library (PIL), for example as follows: from PIL import Image image = Image.open( "name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)." ) width, height = image.size However, this model includes a brand new [~transformers.LayoutLMv2Processor] which can be used to directly prepare data for the model (including applying OCR under the hood). More information can be found in the "Usage" section below. Internally, [~transformers.LayoutLMv2Model] will send the image input through its visual backbone to obtain a lower-resolution feature map, whose shape is equal to the image_feature_pool_shape attribute of [~transformers.LayoutLMv2Config]. This feature map is then flattened to obtain a sequence of image tokens. As the size of the feature map is 7x7 by default, one obtains 49 image tokens. These are then concatenated with the text tokens, and sent through the Transformer encoder. This means that the last hidden states of the model will have a length of 512 + 49 = 561, if you pad the text tokens up to the max length. More generally, the last hidden states will have a length of seq_length + config.image_feature_pool_shape[0] * config.image_feature_pool_shape[1]. When calling [~transformers.LayoutLMv2Model.from_pretrained], a warning will be printed with a long list of parameter names that are not initialized. This is not a problem, as these parameters are batch normalization statistics, which will be populated when fine-tuning on a custom dataset. If you want to train the model in a distributed environment, make sure to call [synchronize_batch_norm] on the model in order to properly synchronize the batch normalization layers of the visual backbone. In addition, there's LayoutXLM, which is a multilingual version of LayoutLMv2. More information can be found on LayoutXLM's documentation page. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLMv2. 
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A notebook on how to finetune LayoutLMv2 for text-classification on RVL-CDIP dataset. See also: Text classification task guide A notebook on how to finetune LayoutLMv2 for question-answering on DocVQA dataset. See also: Question answering task guide See also: Document question answering task guide A notebook on how to finetune LayoutLMv2 for token-classification on CORD dataset. A notebook on how to finetune LayoutLMv2 for token-classification on FUNSD dataset. See also: Token classification task guide Usage: LayoutLMv2Processor The easiest way to prepare data for the model is to use [LayoutLMv2Processor], which internally combines a image processor ([LayoutLMv2ImageProcessor]) and a tokenizer ([LayoutLMv2Tokenizer] or [LayoutLMv2TokenizerFast]). The image processor handles the image modality, while the tokenizer handles the text modality. A processor combines both, which is ideal for a multi-modal model like LayoutLMv2. Note that you can still use both separately, if you only want to handle one modality. thon from transformers import LayoutLMv2ImageProcessor, LayoutLMv2TokenizerFast, LayoutLMv2Processor image_processor = LayoutLMv2ImageProcessor() # apply_ocr is set to True by default tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased") processor = LayoutLMv2Processor(image_processor, tokenizer) In short, one can provide a document image (and possibly additional data) to [LayoutLMv2Processor], and it will create the inputs expected by the model. Internally, the processor first uses [LayoutLMv2ImageProcessor] to apply OCR on the image to get a list of words and normalized bounding boxes, as well to resize the image to a given size in order to get the image input. The words and normalized bounding boxes are then provided to [LayoutLMv2Tokenizer] or [LayoutLMv2TokenizerFast], which converts them to token-level input_ids, attention_mask, token_type_ids, bbox. Optionally, one can provide word labels to the processor, which are turned into token-level labels. [LayoutLMv2Processor] uses PyTesseract, a Python wrapper around Google's Tesseract OCR engine, under the hood. Note that you can still use your own OCR engine of choice, and provide the words and normalized boxes yourself. This requires initializing [LayoutLMv2ImageProcessor] with apply_ocr set to False. In total, there are 5 use cases that are supported by the processor. Below, we list them all. Note that each of these use cases work for both batched and non-batched inputs (we illustrate them for non-batched inputs). Use case 1: document image classification (training, inference) + token classification (inference), apply_ocr = True This is the simplest case, in which the processor (actually the image processor) will perform OCR on the image to get the words and normalized bounding boxes. thon from transformers import LayoutLMv2Processor from PIL import Image processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased") image = Image.open( "name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)." 
).convert("RGB") encoding = processor( image, return_tensors="pt" ) # you can also add all tokenizer parameters here such as padding, truncation print(encoding.keys()) dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image']) Use case 2: document image classification (training, inference) + token classification (inference), apply_ocr=False In case one wants to do OCR themselves, one can initialize the image processor with apply_ocr set to False. In that case, one should provide the words and corresponding (normalized) bounding boxes themselves to the processor. thon from transformers import LayoutLMv2Processor from PIL import Image processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr") image = Image.open( "name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)." ).convert("RGB") words = ["hello", "world"] boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes encoding = processor(image, words, boxes=boxes, return_tensors="pt") print(encoding.keys()) dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image']) Use case 3: token classification (training), apply_ocr=False For token classification tasks (such as FUNSD, CORD, SROIE, Kleister-NDA), one can also provide the corresponding word labels in order to train a model. The processor will then convert these into token-level labels. By default, it will only label the first wordpiece of a word, and label the remaining wordpieces with -100, which is the ignore_index of PyTorch's CrossEntropyLoss. In case you want all wordpieces of a word to be labeled, you can initialize the tokenizer with only_label_first_subword set to False. thon from transformers import LayoutLMv2Processor from PIL import Image processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr") image = Image.open( "name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)." ).convert("RGB") words = ["hello", "world"] boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes word_labels = [1, 2] encoding = processor(image, words, boxes=boxes, word_labels=word_labels, return_tensors="pt") print(encoding.keys()) dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'labels', 'image']) Use case 4: visual question answering (inference), apply_ocr=True For visual question answering tasks (such as DocVQA), you can provide a question to the processor. By default, the processor will apply OCR on the image, and create [CLS] question tokens [SEP] word tokens [SEP]. thon from transformers import LayoutLMv2Processor from PIL import Image processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased") image = Image.open( "name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)." ).convert("RGB") question = "What's his name?" encoding = processor(image, question, return_tensors="pt") print(encoding.keys()) dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image']) Use case 5: visual question answering (inference), apply_ocr=False For visual question answering tasks (such as DocVQA), you can provide a question to the processor. If you want to perform OCR yourself, you can provide your own words and (normalized) bounding boxes to the processor. 
thon from transformers import LayoutLMv2Processor from PIL import Image processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr") image = Image.open( "name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)." ).convert("RGB") question = "What's his name?" words = ["hello", "world"] boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes encoding = processor(image, question, words, boxes=boxes, return_tensors="pt") print(encoding.keys()) dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image']) LayoutLMv2Config [[autodoc]] LayoutLMv2Config LayoutLMv2FeatureExtractor [[autodoc]] LayoutLMv2FeatureExtractor - call LayoutLMv2ImageProcessor [[autodoc]] LayoutLMv2ImageProcessor - preprocess LayoutLMv2Tokenizer [[autodoc]] LayoutLMv2Tokenizer - call - save_vocabulary LayoutLMv2TokenizerFast [[autodoc]] LayoutLMv2TokenizerFast - call LayoutLMv2Processor [[autodoc]] LayoutLMv2Processor - call LayoutLMv2Model [[autodoc]] LayoutLMv2Model - forward LayoutLMv2ForSequenceClassification [[autodoc]] LayoutLMv2ForSequenceClassification LayoutLMv2ForTokenClassification [[autodoc]] LayoutLMv2ForTokenClassification LayoutLMv2ForQuestionAnswering [[autodoc]] LayoutLMv2ForQuestionAnswering
GIT Overview The GIT model was proposed in GIT: A Generative Image-to-text Transformer for Vision and Language by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang. GIT is a decoder-only Transformer that leverages CLIP's vision encoder to condition the model on vision inputs besides text. The model obtains state-of-the-art results on image captioning and visual question answering benchmarks. The abstract from the paper is the following: In this paper, we design and train a Generative Image-to-text Transformer, GIT, to unify vision-language tasks such as image/video captioning and question answering. While generative models provide a consistent network architecture between pre-training and fine-tuning, existing work typically contains complex structures (uni/multi-modal encoder/decoder) and depends on external modules such as object detectors/taggers and optical character recognition (OCR). In GIT, we simplify the architecture as one image encoder and one text decoder under a single language modeling task. We also scale up the pre-training data and the model size to boost the model performance. Without bells and whistles, our GIT establishes new state of the arts on 12 challenging benchmarks with a large margin. For instance, our model surpasses the human performance for the first time on TextCaps (138.2 vs. 125.5 in CIDEr). Furthermore, we present a new scheme of generation-based image classification and scene text recognition, achieving decent performance on standard benchmarks. Tips: GIT is implemented in a very similar way to GPT-2, the only difference being that the model is also conditioned on pixel_values. One can use [GitProcessor] to prepare images for the model, and the generate method for autoregressive generation. GIT architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GIT. Demo notebooks regarding inference + fine-tuning GIT on custom data can be found here. See also: Causal language modeling task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. GitVisionConfig [[autodoc]] GitVisionConfig GitVisionModel [[autodoc]] GitVisionModel - forward GitConfig [[autodoc]] GitConfig - all GitProcessor [[autodoc]] GitProcessor - call GitModel [[autodoc]] GitModel - forward GitForCausalLM [[autodoc]] GitForCausalLM - forward
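The tip above (condition the decoder on pixel_values, then call generate) can be sketched as a short image-captioning example. The checkpoint and image URL are illustrative choices; other GIT checkpoints follow the same pattern:

```python
import requests
from PIL import Image
from transformers import AutoProcessor, GitForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = GitForCausalLM.from_pretrained("microsoft/git-base-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# the processor's image side produces pixel_values; its tokenizer side is used for decoding
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```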
FNet Overview The FNet model was proposed in FNet: Mixing Tokens with Fourier Transforms by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. The model replaces the self-attention layer in a BERT model with a fourier transform which returns only the real parts of the transform. The model is significantly faster than the BERT model because it has fewer parameters and is more memory efficient. The model achieves about 92-97% accuracy of BERT counterparts on GLUE benchmark, and trains much faster than the BERT model. The abstract from the paper is the following: We show that Transformer encoder architectures can be sped up, with limited accuracy costs, by replacing the self-attention sublayers with simple linear transformations that "mix" input tokens. These linear mixers, along with standard nonlinearities in feed-forward layers, prove competent at modeling semantic relationships in several text classification tasks. Most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder with a standard, unparameterized Fourier Transform achieves 92-97% of the accuracy of BERT counterparts on the GLUE benchmark, but trains 80% faster on GPUs and 70% faster on TPUs at standard 512 input lengths. At longer input lengths, our FNet model is significantly faster: when compared to the "efficient" Transformers on the Long Range Arena benchmark, FNet matches the accuracy of the most accurate models, while outpacing the fastest models across all sequence lengths on GPUs (and across relatively shorter lengths on TPUs). Finally, FNet has a light memory footprint and is particularly efficient at smaller model sizes; for a fixed speed and accuracy budget, small FNet models outperform Transformer counterparts. Tips on usage: The model was trained without an attention mask as it is based on Fourier Transform. The model was trained with maximum sequence length 512 which includes pad tokens. Hence, it is highly recommended to use the same maximum sequence length for fine-tuning and inference. This model was contributed by gchhablani. The original code can be found here. Documentation resources Text classification task guide Token classification task guide Question answering task guide Masked language modeling task guide Multiple choice task guide FNetConfig [[autodoc]] FNetConfig FNetTokenizer [[autodoc]] FNetTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary FNetTokenizerFast [[autodoc]] FNetTokenizerFast FNetModel [[autodoc]] FNetModel - forward FNetForPreTraining [[autodoc]] FNetForPreTraining - forward FNetForMaskedLM [[autodoc]] FNetForMaskedLM - forward FNetForNextSentencePrediction [[autodoc]] FNetForNextSentencePrediction - forward FNetForSequenceClassification [[autodoc]] FNetForSequenceClassification - forward FNetForMultipleChoice [[autodoc]] FNetForMultipleChoice - forward FNetForTokenClassification [[autodoc]] FNetForTokenClassification - forward FNetForQuestionAnswering [[autodoc]] FNetForQuestionAnswering - forward
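As a quick usage sketch for the tips above (no attention mask is needed, since token mixing is done with Fourier transforms), here is a masked-language-modeling example; the checkpoint and the input sentence are illustrative assumptions:

```python
import torch
from transformers import FNetForMaskedLM, FNetTokenizer

tokenizer = FNetTokenizer.from_pretrained("google/fnet-base")
model = FNetForMaskedLM.from_pretrained("google/fnet-base")

input_ids = tokenizer("The capital of France is [MASK].", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(input_ids=input_ids).logits

# predict the token at the [MASK] position
mask_index = (input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
print(tokenizer.decode(logits[0, mask_index].argmax(-1)))
```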
ByT5 Overview The ByT5 model was presented in ByT5: Towards a token-free future with pre-trained byte-to-byte models by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. The abstract from the paper is the following: Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments. This model was contributed by patrickvonplaten. The original code can be found here. ByT5's architecture is based on the T5v1.1 model, so one can refer to T5v1.1's documentation page. They only differ in how inputs should be prepared for the model, see the code examples below. Since ByT5 was pre-trained unsupervisedly, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix. Example ByT5 works on raw UTF-8 bytes, so it can be used without a tokenizer: thon from transformers import T5ForConditionalGeneration import torch model = T5ForConditionalGeneration.from_pretrained("google/byt5-small") num_special_tokens = 3 Model has 3 special tokens which take up the input ids 0,1,2 of ByT5. => Need to shift utf-8 character encodings by 3 before passing ids to model. input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + num_special_tokens labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + num_special_tokens loss = model(input_ids, labels=labels).loss loss.item() 2.66 For batched inference and training it is however recommended to make use of the tokenizer: thon from transformers import T5ForConditionalGeneration, AutoTokenizer model = T5ForConditionalGeneration.from_pretrained("google/byt5-small") tokenizer = AutoTokenizer.from_pretrained("google/byt5-small") model_inputs = tokenizer( ["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt" ) labels_dict = tokenizer( ["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt" ) labels = labels_dict.input_ids loss = model(**model_inputs, labels=labels).loss loss.item() 17.9 Similar to T5, ByT5 was trained on the span-mask denoising task. 
However, since the model works directly on characters, the pretraining task is a bit different. Let's corrupt some characters of the input sentence "The dog chases a ball in the park." and ask ByT5 to predict them for us. thon from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import torch tokenizer = AutoTokenizer.from_pretrained("google/byt5-base") model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-base") input_ids_prompt = "The dog chases a ball in the park." input_ids = tokenizer(input_ids_prompt).input_ids Note that we cannot add "{extra_id_}" to the string directly as the Byte tokenizer would incorrectly merge the tokens For ByT5, we need to work directly on the character level Contrary to T5, ByT5 does not use sentinel tokens for masking, but instead uses final utf character ids. UTF-8 is represented by 8 bits and ByT5 has 3 special tokens. => There are 2**8+2 = 259 input ids and mask tokens count down from index 258. => mask to "The dog [258]a ball [257]park." input_ids = torch.tensor([input_ids[:8] + [258] + input_ids[14:21] + [257] + input_ids[28:]]) input_ids tensor([[ 87, 107, 104, 35, 103, 114, 106, 35, 258, 35, 100, 35, 101, 100, 111, 111, 257, 35, 115, 100, 117, 110, 49, 1]]) ByT5 produces only one char at a time so we need to produce many more output characters here -> set max_length=100. output_ids = model.generate(input_ids, max_length=100)[0].tolist() output_ids [0, 258, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 257, 35, 108, 113, 35, 119, 107, 104, 35, 103, 108, 118, 102, 114, 256, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49, 35, 87, 107, 104, 35, 103, 114, 106, 35, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 35, 100, 35, 101, 100, 111, 111, 35, 108, 113, 255, 35, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49] ^- Note how 258 descends to 257, 256, 255 Now we need to split on the sentinel tokens, let's write a short loop for this output_ids_list = [] start_token = 0 sentinel_token = 258 while sentinel_token in output_ids: split_idx = output_ids.index(sentinel_token) output_ids_list.append(output_ids[start_token:split_idx]) start_token = split_idx sentinel_token -= 1 output_ids_list.append(output_ids[start_token:]) output_string = tokenizer.batch_decode(output_ids_list) output_string ['', 'is the one who does', ' in the disco', 'in the park. The dog is the one who does a ball in', ' in the park.'] ByT5Tokenizer [[autodoc]] ByT5Tokenizer See [ByT5Tokenizer] for all details.
ResNet Overview The ResNet model was proposed in Deep Residual Learning for Image Recognition by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun. Our implementation follows the small changes made by Nvidia: we apply stride=2 for downsampling in the bottleneck's 3x3 conv and not in the first 1x1. This is generally known as "ResNet v1.5". ResNet introduced residual connections, which make it possible to train networks with a previously unseen number of layers (up to 1000). ResNet won the 2015 ILSVRC & COCO competition, an important milestone in deep computer vision. The abstract from the paper is the following: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. Tips: One can use [AutoImageProcessor] to prepare images for the model. The figure below illustrates the architecture of ResNet. Taken from the original paper. This model was contributed by Francesco. The TensorFlow version of this model was added by amyeroberts. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ResNet. [ResNetForImageClassification] is supported by this example script and notebook. See also: Image classification task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ResNetConfig [[autodoc]] ResNetConfig ResNetModel [[autodoc]] ResNetModel - forward ResNetForImageClassification [[autodoc]] ResNetForImageClassification - forward TFResNetModel [[autodoc]] TFResNetModel - call TFResNetForImageClassification [[autodoc]] TFResNetForImageClassification - call FlaxResNetModel [[autodoc]] FlaxResNetModel - call FlaxResNetForImageClassification [[autodoc]] FlaxResNetForImageClassification - call
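The [AutoImageProcessor] tip above can be turned into a short end-to-end classification sketch. The checkpoint and test image URL are illustrative; any ResNet checkpoint on the Hub works the same way:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ResNetForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # any RGB image
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
model = ResNetForImageClassification.from_pretrained("microsoft/resnet-50")

inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# this checkpoint is trained on ImageNet-1k, so the labels are ImageNet classes
print(model.config.id2label[logits.argmax(-1).item()])
```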
FLAN-UL2 Overview Flan-UL2 is an encoder-decoder model based on the T5 architecture. It uses the same configuration as the UL2 model released earlier last year. It was fine-tuned using the "Flan" prompt tuning and dataset collection. Similar to Flan-T5, one can directly use FLAN-UL2 weights without finetuning the model. According to the original blog, these are the notable improvements: The original UL2 model was only trained with a receptive field of 512, which made it non-ideal for N-shot prompting where N is large. The Flan-UL2 checkpoint uses a receptive field of 2048, which makes it more usable for few-shot in-context learning. The original UL2 model also had mode switch tokens that were rather mandatory to get good performance. However, they were a little cumbersome as this often requires some changes during inference or finetuning. In this update/change, we continue training UL2 20B for an additional 100k steps (with small batch) to forget “mode tokens” before applying Flan instruction tuning. This Flan-UL2 checkpoint does not require mode tokens anymore. Google has released the following variants: One can refer to T5's documentation page for all tips, code examples and notebooks, as well as to the FLAN-T5 model card for more details regarding training and evaluation of the model. The original checkpoints can be found here. Running on low resource devices The model is pretty heavy (~40GB in half precision), so if you just want to run the model, make sure you load it in 8bit and use device_map="auto" to avoid OOM issues! python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-ul2", load_in_8bit=True, device_map="auto") tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2") inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt") outputs = model.generate(**inputs) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['In a large skillet, brown the ground beef and onion over medium heat. Add the garlic'] Inference The inference protocol is exactly the same as for any T5 model; please have a look at T5's documentation page for more details.
RoCBert Overview The RoCBert model was proposed in RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining by Hui Su, Weiwei Shi, Xiaoyu Shen, Xiao Zhou, Tuo Ji, Jiarui Fang, Jie Zhou. It's a pretrained Chinese language model that is robust under various forms of adversarial attacks. The abstract from the paper is the following: Large-scale pretrained language models have achieved SOTA results on NLP tasks. However, they have been shown vulnerable to adversarial attacks especially for logographic languages like Chinese. In this work, we propose ROCBERT: a pretrained Chinese Bert that is robust to various forms of adversarial attacks like word perturbation, synonyms, typos, etc. It is pretrained with the contrastive learning objective which maximizes the label consistency under different synthesized adversarial examples. The model takes as input multimodal information including the semantic, phonetic and visual features. We show all these features are important to the model robustness since the attack can be performed in all the three forms. Across 5 Chinese NLU tasks, ROCBERT outperforms strong baselines under three blackbox adversarial algorithms without sacrificing the performance on clean testset. It also performs the best in the toxic content detection task under human-made attacks. This model was contributed by weiweishi. Documentation resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide RoCBertConfig [[autodoc]] RoCBertConfig - all RoCBertTokenizer [[autodoc]] RoCBertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary RoCBertModel [[autodoc]] RoCBertModel - forward RoCBertForPreTraining [[autodoc]] RoCBertForPreTraining - forward RoCBertForCausalLM [[autodoc]] RoCBertForCausalLM - forward RoCBertForMaskedLM [[autodoc]] RoCBertForMaskedLM - forward RoCBertForSequenceClassification [[autodoc]] transformers.RoCBertForSequenceClassification - forward RoCBertForMultipleChoice [[autodoc]] transformers.RoCBertForMultipleChoice - forward RoCBertForTokenClassification [[autodoc]] transformers.RoCBertForTokenClassification - forward RoCBertForQuestionAnswering [[autodoc]] RoCBertForQuestionAnswering - forward
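Since RoCBert follows the familiar BERT masked-language-modeling API (with extra shape and pronunciation features produced by its tokenizer), a short fill-mask sketch may help connect the task guides above to code. The weiweishi/roc-bert-base-zh checkpoint name is an assumption based on the contributor mentioned above; adjust it to the checkpoint you actually use.

```python
from transformers import AutoTokenizer, RoCBertForMaskedLM
import torch

# "weiweishi/roc-bert-base-zh" is assumed to be the released pretrained checkpoint.
tokenizer = AutoTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
model = RoCBertForMaskedLM.from_pretrained("weiweishi/roc-bert-base-zh")

# The tokenizer also returns input_shape_ids and input_pronunciation_ids,
# which the model consumes alongside the usual input_ids.
inputs = tokenizer(f"你好，我是一个{tokenizer.mask_token}模型。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the most likely token for the masked position.
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(-1)
print(tokenizer.decode(predicted_id))
```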
BART DISCLAIMER: If you see something strange, file a Github Issue and assign @patrickvonplaten Overview The Bart model was proposed in BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract, Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT). The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. Tips: BART is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. It is a sequence-to-sequence model with an encoder and a decoder. The encoder is fed a corrupted version of the tokens, the decoder is fed the original tokens (but has a mask to hide future words, like a regular transformer decoder). A composition of the following transformations is applied to the encoder's input during pretraining: mask random tokens (like in BERT) delete random tokens mask a span of k tokens with a single mask token (a span of 0 tokens is an insertion of a mask token) permute sentences rotate the document to make it start at a specific token This model was contributed by sshleifer. The authors' code can be found here. Examples Examples and scripts for fine-tuning BART and other models for sequence to sequence tasks can be found in examples/pytorch/summarization/. An example of how to train [BartForConditionalGeneration] with a Hugging Face datasets object can be found in this forum discussion. Distilled checkpoints are described in this paper. Implementation Notes Bart doesn't use token_type_ids for sequence classification. Use [BartTokenizer] or [~BartTokenizer.encode] to get the proper splitting. The forward pass of [BartModel] will create the decoder_input_ids if they are not passed. This is different from some other modeling APIs. A typical use case of this feature is mask filling. Model predictions are intended to be identical to the original implementation when forced_bos_token_id=0. This only works, however, if the string you pass to [fairseq.encode] starts with a space. [~generation.GenerationMixin.generate] should be used for conditional generation tasks like summarization, see the example in that method's docstring. Models that load the facebook/bart-large-cnn weights will not have a mask_token_id, or be able to perform mask-filling tasks. Mask Filling The facebook/bart-base and facebook/bart-large checkpoints can be used to fill multi-token masks.
thon from transformers import BartForConditionalGeneration, BartTokenizer model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0) tok = BartTokenizer.from_pretrained("facebook/bart-large") example_english_phrase = "UN Chief Says There Is No in Syria" batch = tok(example_english_phrase, return_tensors="pt") generated_ids = model.generate(batch["input_ids"]) assert tok.batch_decode(generated_ids, skip_special_tokens=True) == [ "UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria" ] Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BART. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A blog post on Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker. A notebook on how to finetune BART for summarization with fastai using blurr. 🌎 A notebook on how to finetune BART for summarization in two languages with Trainer class. 🌎 [BartForConditionalGeneration] is supported by this example script and notebook. [TFBartForConditionalGeneration] is supported by this example script and notebook. [FlaxBartForConditionalGeneration] is supported by this example script. Summarization chapter of the 🤗 Hugging Face course. Summarization task guide [BartForConditionalGeneration] is supported by this example script and notebook. [TFBartForConditionalGeneration] is supported by this example script and notebook. [FlaxBartForConditionalGeneration] is supported by this example script and notebook. Masked language modeling chapter of the 🤗 Hugging Face Course. Masked language modeling task guide A notebook on how to finetune mBART using Seq2SeqTrainer for Hindi to English translation. 🌎 [BartForConditionalGeneration] is supported by this example script and notebook. [TFBartForConditionalGeneration] is supported by this example script and notebook. Translation task guide See also: - Text classification task guide - Question answering task guide - Causal language modeling task guide BartConfig [[autodoc]] BartConfig - all BartTokenizer [[autodoc]] BartTokenizer - all BartTokenizerFast [[autodoc]] BartTokenizerFast - all BartModel [[autodoc]] BartModel - forward BartForConditionalGeneration [[autodoc]] BartForConditionalGeneration - forward BartForSequenceClassification [[autodoc]] BartForSequenceClassification - forward BartForQuestionAnswering [[autodoc]] BartForQuestionAnswering - forward BartForCausalLM [[autodoc]] BartForCausalLM - forward TFBartModel [[autodoc]] TFBartModel - call TFBartForConditionalGeneration [[autodoc]] TFBartForConditionalGeneration - call TFBartForSequenceClassification [[autodoc]] TFBartForSequenceClassification - call FlaxBartModel [[autodoc]] FlaxBartModel - call - encode - decode FlaxBartForConditionalGeneration [[autodoc]] FlaxBartForConditionalGeneration - call - encode - decode FlaxBartForSequenceClassification [[autodoc]] FlaxBartForSequenceClassification - call - encode - decode FlaxBartForQuestionAnswering [[autodoc]] FlaxBartForQuestionAnswering - call - encode - decode FlaxBartForCausalLM [[autodoc]] FlaxBartForCausalLM - call
UL2 Overview The UL2 model was presented in Unifying Language Learning Paradigms by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler. The abstract from the paper is the following: Existing pre-trained models are generally geared towards a particular class of problems. To date, there seems to be still no consensus on what the right architecture and pre-training setup should be. This paper presents a unified framework for pre-training models that are universally effective across datasets and setups. We begin by disentangling architectural archetypes with pre-training objectives -- two concepts that are commonly conflated. Next, we present a generalized and unified perspective for self-supervision in NLP and show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective. We then propose Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. We furthermore introduce a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes. We conduct extensive ablative experiments to compare multiple pre-training objectives and find that our method pushes the Pareto-frontier by outperforming T5 and/or GPT-like models across multiple diverse setups. Finally, by scaling our model up to 20B parameters, we achieve SOTA performance on 50 well-established supervised NLP tasks ranging from language generation (with automated and human evaluation), language understanding, text classification, question answering, commonsense reasoning, long text reasoning, structured knowledge grounding and information retrieval. Our model also achieves strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization. Tips: UL2 is an encoder-decoder model pre-trained on a mixture of denoising functions as well as fine-tuned on an array of downstream tasks. UL2 has the same architecture as T5v1.1 but uses the Gated-SiLU activation function instead of Gated-GELU. The authors release checkpoints of one architecture which can be seen here. The original code can be found here. This model was contributed by DanielHesslow.
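The released checkpoint is large (20B parameters), so a practical loading sketch may be useful. This is only a sketch under assumptions: it assumes the checkpoint is published as google/ul2, that Accelerate is installed for device_map="auto", and that the [S2S]/[NLU]/[NLG] mode prefixes described on the checkpoint's model card are the right way to select a denoising mode.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

# "google/ul2" is assumed to be the released 20B checkpoint. Half precision and
# automatic device placement (requires accelerate) keep memory usage manageable.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/ul2", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("google/ul2")

# "[S2S]" is assumed to select the sequential-denoising mode, as reported on the
# model card; "<extra_id_0>" is the usual T5-style sentinel to fill in.
prompt = "[S2S] Mr. Dursley was the director of a firm called <extra_id_0>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```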
ESM Overview This page provides code and pre-trained weights for Transformer protein language models from Meta AI's Fundamental AI Research Team, providing the state-of-the-art ESMFold and ESM-2, and the previously released ESM-1b and ESM-1v. Transformer protein language models were introduced in the paper Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. The first version of this paper was preprinted in 2019. ESM-2 outperforms all tested single-sequence protein language models across a range of structure prediction tasks, and enables atomic resolution structure prediction. It was released with the paper Language models of protein sequences at the scale of evolution enable accurate structure prediction by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido and Alexander Rives. Also introduced in this paper was ESMFold. It uses an ESM-2 stem with a head that can predict folded protein structures with state-of-the-art accuracy. Unlike AlphaFold2, it relies on the token embeddings from the large pre-trained protein language model stem and does not perform a multiple sequence alignment (MSA) step at inference time, which means that ESMFold checkpoints are fully "standalone" - they do not require a database of known protein sequences and structures with associated external query tools to make predictions, and are much faster as a result. The abstract from "Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences" is In the field of artificial intelligence, a combination of scale in data and model capacity enabled by unsupervised learning has led to major advances in representation learning and statistical generation. In the life sciences, the anticipated growth of sequencing promises unprecedented data on natural sequence diversity. Protein language modeling at the scale of evolution is a logical step toward predictive and generative artificial intelligence for biology. To this end, we use unsupervised learning to train a deep contextual language model on 86 billion amino acids across 250 million protein sequences spanning evolutionary diversity. The resulting model contains information about biological properties in its representations. The representations are learned from sequence data alone. The learned representation space has a multiscale organization reflecting structure from the level of biochemical properties of amino acids to remote homology of proteins. Information about secondary and tertiary structure is encoded in the representations and can be identified by linear projections. Representation learning produces features that generalize across a range of applications, enabling state-of-the-art supervised prediction of mutational effect and secondary structure and improving state-of-the-art features for long-range contact prediction. The abstract from "Language models of protein sequences at the scale of evolution enable accurate structure prediction" is Large language models have recently been shown to develop emergent capabilities with scale, going beyond simple pattern matching to perform higher level reasoning and generate lifelike images and text. 
While language models trained on protein sequences have been studied at a smaller scale, little is known about what they learn about biology as they are scaled up. In this work we train models up to 15 billion parameters, the largest language models of proteins to be evaluated to date. We find that as models are scaled they learn information enabling the prediction of the three-dimensional structure of a protein at the resolution of individual atoms. We present ESMFold for high accuracy end-to-end atomic level structure prediction directly from the individual sequence of a protein. ESMFold has similar accuracy to AlphaFold2 and RoseTTAFold for sequences with low perplexity that are well understood by the language model. ESMFold inference is an order of magnitude faster than AlphaFold2, enabling exploration of the structural space of metagenomic proteins in practical timescales. Tips: ESM models are trained with a masked language modeling (MLM) objective. The original code can be found here and was developed by the Fundamental AI Research team at Meta AI. ESM-1b, ESM-1v and ESM-2 were contributed to huggingface by jasonliu and Matt. ESMFold was contributed to huggingface by Matt and Sylvain, with a big thank you to Nikita Smetanin, Roshan Rao and Tom Sercu for their help throughout the process! The HuggingFace port of ESMFold uses portions of the openfold library. The openfold library is licensed under the Apache License 2.0. Documentation resources Text classification task guide Token classification task guide Masked language modeling task guide EsmConfig [[autodoc]] EsmConfig - all EsmTokenizer [[autodoc]] EsmTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary EsmModel [[autodoc]] EsmModel - forward EsmForMaskedLM [[autodoc]] EsmForMaskedLM - forward EsmForSequenceClassification [[autodoc]] EsmForSequenceClassification - forward EsmForTokenClassification [[autodoc]] EsmForTokenClassification - forward EsmForProteinFolding [[autodoc]] EsmForProteinFolding - forward TFEsmModel [[autodoc]] TFEsmModel - call TFEsmForMaskedLM [[autodoc]] TFEsmForMaskedLM - call TFEsmForSequenceClassification [[autodoc]] TFEsmForSequenceClassification - call TFEsmForTokenClassification [[autodoc]] TFEsmForTokenClassification - call
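To connect the masked language modeling objective mentioned in the tips to code, here is a minimal fill-mask sketch on a protein sequence. The facebook/esm2_t6_8M_UR50D checkpoint name (a small ESM-2 variant) and the example sequence are assumptions for illustration; larger ESM-2 checkpoints follow the same API.

```python
from transformers import AutoTokenizer, EsmForMaskedLM
import torch

# "facebook/esm2_t6_8M_UR50D" is assumed here as a small ESM-2 checkpoint.
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = EsmForMaskedLM.from_pretrained("facebook/esm2_t6_8M_UR50D")

# Mask one residue of an (illustrative) protein sequence and ask the model to restore it.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
masked = sequence[:10] + tokenizer.mask_token + sequence[11:]
inputs = tokenizer(masked, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Decode the most likely amino acid at the masked position.
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(-1)
print(tokenizer.decode(predicted_id))
```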
Neighborhood Attention Transformer Overview NAT was proposed in Neighborhood Attention Transformer by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. It is a hierarchical vision transformer based on Neighborhood Attention, a sliding-window self attention pattern. The abstract from the paper is the following: *We present Neighborhood Attention (NA), the first efficient and scalable sliding-window attention mechanism for vision. NA is a pixel-wise operation, localizing self attention (SA) to the nearest neighboring pixels, and therefore enjoys a linear time and space complexity compared to the quadratic complexity of SA. The sliding-window pattern allows NA's receptive field to grow without needing extra pixel shifts, and preserves translational equivariance, unlike Swin Transformer's Window Self Attention (WSA). We develop NATTEN (Neighborhood Attention Extension), a Python package with efficient C++ and CUDA kernels, which allows NA to run up to 40% faster than Swin's WSA while using up to 25% less memory. We further present Neighborhood Attention Transformer (NAT), a new hierarchical transformer design based on NA that boosts image classification and downstream vision performance. Experimental results on NAT are competitive; NAT-Tiny reaches 83.2% top-1 accuracy on ImageNet, 51.4% mAP on MS-COCO and 48.4% mIoU on ADE20K, which is 1.9% ImageNet accuracy, 1.0% COCO mAP, and 2.6% ADE20K mIoU improvement over a Swin model with similar size. * Tips: - One can use the [AutoImageProcessor] API to prepare images for the model. - NAT can be used as a backbone. When output_hidden_states = True, it will output both hidden_states and reshaped_hidden_states. The reshaped_hidden_states have a shape of (batch, num_channels, height, width) rather than (batch_size, height, width, num_channels). Notes: - NAT depends on NATTEN's implementation of Neighborhood Attention. You can install it with pre-built wheels for Linux by referring to shi-labs.com/natten, or build on your system by running pip install natten. Note that the latter will likely take time to compile. NATTEN does not support Windows devices yet. - Patch size of 4 is only supported at the moment. Neighborhood Attention compared to other attention patterns. Taken from the original paper. This model was contributed by Ali Hassani. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with NAT. [NatForImageClassification] is supported by this example script and notebook. See also: Image classification task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. NatConfig [[autodoc]] NatConfig NatModel [[autodoc]] NatModel - forward NatForImageClassification [[autodoc]] NatForImageClassification - forward
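The backbone behaviour described in the tips (reshaped_hidden_states returned in channels-first layout) can be seen with a short sketch. This assumes the natten package is installed and that shi-labs/nat-mini-in1k-224 is one of the released checkpoints; any NAT checkpoint should behave the same way.

```python
from transformers import AutoImageProcessor, NatModel
from PIL import Image
import requests
import torch

# Placeholder image; any RGB image works.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# "shi-labs/nat-mini-in1k-224" is an assumed checkpoint name; NAT needs `natten` installed.
processor = AutoImageProcessor.from_pretrained("shi-labs/nat-mini-in1k-224")
model = NatModel.from_pretrained("shi-labs/nat-mini-in1k-224")

inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states are (batch, height, width, num_channels); the reshaped variant is
# channels-first, which is convenient for downstream convolutional heads.
for feature_map in outputs.reshaped_hidden_states:
    print(feature_map.shape)  # (batch, num_channels, height, width)
```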
PoolFormer Overview The PoolFormer model was proposed in MetaFormer is Actually What You Need for Vision by Sea AI Labs. Instead of designing complicated token mixer to achieve SOTA performance, the target of this work is to demonstrate the competence of transformer models largely stem from the general architecture MetaFormer. The abstract from the paper is the following: Transformers have shown great potential in computer vision tasks. A common belief is their attention-based token mixer module contributes most to their competence. However, recent works show the attention-based module in transformers can be replaced by spatial MLPs and the resulted models still perform quite well. Based on this observation, we hypothesize that the general architecture of the transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed as PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design. The figure below illustrates the architecture of PoolFormer. Taken from the original paper. Tips: PoolFormer has a hierarchical architecture, where instead of Attention, a simple Average Pooling layer is present. All checkpoints of the model can be found on the hub. One can use [PoolFormerImageProcessor] to prepare images for the model. As most models, PoolFormer comes in different sizes, the details of which can be found in the table below. | Model variant | Depths | Hidden sizes | Params (M) | ImageNet-1k Top 1 | | :---------------: | ------------- | ------------------- | :------------: | :-------------------: | | s12 | [2, 2, 6, 2] | [64, 128, 320, 512] | 12 | 77.2 | | s24 | [4, 4, 12, 4] | [64, 128, 320, 512] | 21 | 80.3 | | s36 | [6, 6, 18, 6] | [64, 128, 320, 512] | 31 | 81.4 | | m36 | [6, 6, 18, 6] | [96, 192, 384, 768] | 56 | 82.1 | | m48 | [8, 8, 24, 8] | [96, 192, 384, 768] | 73 | 82.5 | This model was contributed by heytanay. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with PoolFormer. [PoolFormerForImageClassification] is supported by this example script and notebook. See also: Image classification task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
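For completeness, here is a minimal sketch of preparing an image with [PoolFormerImageProcessor] and classifying it with one of the variants listed in the table above. The sail/poolformer_s12 checkpoint name and the image URL are assumptions for illustration.

```python
from transformers import PoolFormerImageProcessor, PoolFormerForImageClassification
from PIL import Image
import requests
import torch

# Placeholder image; any RGB image works.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# "sail/poolformer_s12" is assumed to be the smallest released checkpoint.
processor = PoolFormerImageProcessor.from_pretrained("sail/poolformer_s12")
model = PoolFormerForImageClassification.from_pretrained("sail/poolformer_s12")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Print the predicted ImageNet-1k label.
print(model.config.id2label[logits.argmax(-1).item()])
```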
PoolFormerConfig [[autodoc]] PoolFormerConfig PoolFormerFeatureExtractor [[autodoc]] PoolFormerFeatureExtractor - call PoolFormerImageProcessor [[autodoc]] PoolFormerImageProcessor - preprocess PoolFormerModel [[autodoc]] PoolFormerModel - forward PoolFormerForImageClassification [[autodoc]] PoolFormerForImageClassification - forward
UMT5 Overview The UMT5 model was proposed in UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant. The abstract from the paper is the following: Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling. Tips: UMT5 was only pre-trained on mC4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model. Since umT5 was pre-trained in an unsupervised manner, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix. Google has released the following variants: google/umt5-small google/umt5-base google/umt5-xl google/umt5-xxl. This model was contributed by agemagician and stefan-it. The original code can be found here. One can refer to T5's documentation page for more tips, code examples and notebooks. Differences with mT5? UmT5 is based on mT5, with a non-shared relative positional bias that is computed for each layer. This means that the model sets has_relative_bias for each layer. The conversion script is also different because the model was saved in t5x's latest checkpointing format. Sample usage python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("google/umt5-small") tokenizer = AutoTokenizer.from_pretrained("google/umt5-small") inputs = tokenizer( "A walks into a bar and orders a with pinch of .", return_tensors="pt", ) outputs = model.generate(**inputs) print(tokenizer.batch_decode(outputs)) ['nyone who drink a alcohol A A. This I'] UMT5Config [[autodoc]] UMT5Config UMT5Model [[autodoc]] UMT5Model - forward UMT5ForConditionalGeneration [[autodoc]] UMT5ForConditionalGeneration - forward UMT5EncoderModel [[autodoc]] UMT5EncoderModel - forward UMT5ForQuestionAnswering [[autodoc]] UMT5ForQuestionAnswering - forward
M2M100 Overview The M2M100 model was proposed in Beyond English-Centric Multilingual Machine Translation by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. The abstract from the paper is the following: Existing work in translation demonstrated the potential of massively multilingual machine translation by training a single model able to translate between any pair of languages. However, much of this work is English-Centric by training only on data which was translated from or to English. While this is supported by large sources of training data, it does not reflect translation needs worldwide. In this work, we create a true Many-to-Many multilingual translation model that can translate directly between any pair of 100 languages. We build and open source a training dataset that covers thousands of language directions with supervised data, created through large-scale mining. Then, we explore how to effectively increase model capacity through a combination of dense scaling and language-specific sparse parameters to create high quality models. Our focus on non-English-Centric models brings gains of more than 10 BLEU when directly translating between non-English directions while performing competitively to the best single systems of WMT. We open-source our scripts so that others may reproduce the data, evaluation, and final M2M-100 model. This model was contributed by valhalla. Training and Generation M2M100 is a multilingual encoder-decoder (seq-to-seq) model primarily intended for translation tasks. As the model is multilingual it expects the sequences in a certain format: A special language id token is used as prefix in both the source and target text. The source text format is [lang_code] X [eos], where lang_code is source language id for source text and target language id for target text, with X being the source or target text. The [M2M100Tokenizer] depends on sentencepiece so be sure to install it before running the examples. To install sentencepiece run pip install sentencepiece. Supervised Training thon from transformers import M2M100Config, M2M100ForConditionalGeneration, M2M100Tokenizer model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M") tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="fr") src_text = "Life is like a box of chocolates." tgt_text = "La vie est comme une boîte de chocolat." model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt") loss = model(**model_inputs).loss # forward pass Generation M2M100 uses the eos_token_id as the decoder_start_token_id for generation with the target language id being forced as the first generated token. To force the target language id as the first generated token, pass the forced_bos_token_id parameter to the generate method. The following example shows how to translate between Hindi to French and Chinese to English using the facebook/m2m100_418M checkpoint. 
thon from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।" chinese_text = "生活就像一盒巧克力。" model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M") tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M") translate Hindi to French tokenizer.src_lang = "hi" encoded_hi = tokenizer(hi_text, return_tensors="pt") generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr")) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) "La vie est comme une boîte de chocolat." translate Chinese to English tokenizer.src_lang = "zh" encoded_zh = tokenizer(chinese_text, return_tensors="pt") generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) "Life is like a box of chocolate." Documentation resources Translation task guide Summarization task guide M2M100Config [[autodoc]] M2M100Config M2M100Tokenizer [[autodoc]] M2M100Tokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary M2M100Model [[autodoc]] M2M100Model - forward M2M100ForConditionalGeneration [[autodoc]] M2M100ForConditionalGeneration - forward
SegFormer Overview The SegFormer model was proposed in SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. The model consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on image segmentation benchmarks such as ADE20K and Cityscapes. The abstract from the paper is the following: We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perception (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, and thus combining both local attention and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C. The figure below illustrates the architecture of SegFormer. Taken from the original paper. This model was contributed by nielsr. The TensorFlow version of the model was contributed by sayakpaul. The original code can be found here. Tips: SegFormer consists of a hierarchical Transformer encoder, and a lightweight all-MLP decoder head. [SegformerModel] is the hierarchical Transformer encoder (which in the paper is also referred to as Mix Transformer or MiT). [SegformerForSemanticSegmentation] adds the all-MLP decoder head on top to perform semantic segmentation of images. In addition, there's [SegformerForImageClassification] which can be used to - you guessed it - classify images. The authors of SegFormer first pre-trained the Transformer encoder on ImageNet-1k to classify images. Next, they throw away the classification head, and replace it by the all-MLP decode head. Next, they fine-tune the model altogether on ADE20K, Cityscapes and COCO-stuff, which are important benchmarks for semantic segmentation. All checkpoints can be found on the hub. The quickest way to get started with SegFormer is by checking the example notebooks (which showcase both inference and fine-tuning on custom data). One can also check out the blog post introducing SegFormer and illustrating how it can be fine-tuned on custom data. TensorFlow users should refer to this repository that shows off-the-shelf inference and fine-tuning. One can also check out this interactive demo on Hugging Face Spaces to try out a SegFormer model on custom images. SegFormer works on any input size, as it pads the input to be divisible by config.patch_sizes. One can use [SegformerImageProcessor] to prepare images and corresponding segmentation maps for the model. 
Note that this image processor is fairly basic and does not include all data augmentations used in the original paper. The original preprocessing pipelines (for the ADE20k dataset for instance) can be found here. The most important preprocessing step is that images and segmentation maps are randomly cropped and padded to the same size, such as 512x512 or 640x640, after which they are normalized. One additional thing to keep in mind is that one can initialize [SegformerImageProcessor] with reduce_labels set to True or False. In some datasets (like ADE20k), the 0 index is used in the annotated segmentation maps for background. However, ADE20k doesn't include the "background" class in its 150 labels. Therefore, reduce_labels is used to reduce all labels by 1, and to make sure no loss is computed for the background class (i.e. it replaces 0 in the annotated maps by 255, which is the ignore_index of the loss function used by [SegformerForSemanticSegmentation]). However, other datasets use the 0 index as background class and include this class as part of all labels. In that case, reduce_labels should be set to False, as loss should also be computed for the background class. As most models, SegFormer comes in different sizes, the details of which can be found in the table below (taken from Table 7 of the original paper). | Model variant | Depths | Hidden sizes | Decoder hidden size | Params (M) | ImageNet-1k Top 1 | | :---------------: | ------------- | ------------------- | :---------------------: | :------------: | :-------------------: | | MiT-b0 | [2, 2, 2, 2] | [32, 64, 160, 256] | 256 | 3.7 | 70.5 | | MiT-b1 | [2, 2, 2, 2] | [64, 128, 320, 512] | 256 | 14.0 | 78.7 | | MiT-b2 | [3, 4, 6, 3] | [64, 128, 320, 512] | 768 | 25.4 | 81.6 | | MiT-b3 | [3, 4, 18, 3] | [64, 128, 320, 512] | 768 | 45.2 | 83.1 | | MiT-b4 | [3, 8, 27, 3] | [64, 128, 320, 512] | 768 | 62.6 | 83.6 | | MiT-b5 | [3, 6, 40, 3] | [64, 128, 320, 512] | 768 | 82.0 | 83.8 | Note that MiT in the above table refers to the Mix Transformer encoder backbone introduced in SegFormer. For SegFormer's results on the segmentation datasets like ADE20k, refer to the paper. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SegFormer. [SegformerForImageClassification] is supported by this example script and notebook. Image classification task guide Semantic segmentation: [SegformerForSemanticSegmentation] is supported by this example script. A blog on fine-tuning SegFormer on a custom dataset can be found here. More demo notebooks on SegFormer (both inference + fine-tuning on a custom dataset) can be found here. [TFSegformerForSemanticSegmentation] is supported by this example notebook. Semantic segmentation task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
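To tie the preprocessing discussion above together, here is a minimal semantic segmentation sketch showing reduce_labels and the post-processing step. The nvidia/segformer-b0-finetuned-ade-512-512 checkpoint name and the image URL are assumptions for illustration; any ADE20k-finetuned SegFormer checkpoint should work the same way.

```python
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
from PIL import Image
import requests
import torch

# reduce_labels=True matches the ADE20k convention described above (0 = background).
checkpoint = "nvidia/segformer-b0-finetuned-ade-512-512"  # assumed checkpoint name
processor = SegformerImageProcessor.from_pretrained(checkpoint, reduce_labels=True)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

# Placeholder image; any RGB image works.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Logits come out at 1/4 of the input resolution; the image processor can
# upsample them back to the original image size.
segmentation = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(segmentation.shape)  # (height, width) map of predicted class indices
```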
SegformerConfig [[autodoc]] SegformerConfig SegformerFeatureExtractor [[autodoc]] SegformerFeatureExtractor - call - post_process_semantic_segmentation SegformerImageProcessor [[autodoc]] SegformerImageProcessor - preprocess - post_process_semantic_segmentation SegformerModel [[autodoc]] SegformerModel - forward SegformerDecodeHead [[autodoc]] SegformerDecodeHead - forward SegformerForImageClassification [[autodoc]] SegformerForImageClassification - forward SegformerForSemanticSegmentation [[autodoc]] SegformerForSemanticSegmentation - forward TFSegformerDecodeHead [[autodoc]] TFSegformerDecodeHead - call TFSegformerModel [[autodoc]] TFSegformerModel - call TFSegformerForImageClassification [[autodoc]] TFSegformerForImageClassification - call TFSegformerForSemanticSegmentation [[autodoc]] TFSegformerForSemanticSegmentation - call
MRA Overview The MRA model was proposed in Multi Resolution Analysis (MRA) for Approximate Self-Attention by Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, and Vikas Singh. The abstract from the paper is the following: Transformers have emerged as a preferred model for many tasks in natural language processing and vision. Recent efforts on training and deploying Transformers more efficiently have identified many strategies to approximate the self-attention matrix, a key module in a Transformer architecture. Effective ideas include various prespecified sparsity patterns, low-rank basis expansions and combinations thereof. In this paper, we revisit classical Multiresolution Analysis (MRA) concepts such as Wavelets, whose potential value in this setting remains underexplored thus far. We show that simple approximations based on empirical feedback and design choices informed by modern hardware and implementation challenges, eventually yield a MRA-based approach for self-attention with an excellent performance profile across most criteria of interest. We undertake an extensive set of experiments and demonstrate that this multi-resolution scheme outperforms most efficient self-attention proposals and is favorable for both short and long sequences. Code is available at https://github.com/mlpen/mra-attention. This model was contributed by novice03. The original code can be found here. MraConfig [[autodoc]] MraConfig MraModel [[autodoc]] MraModel - forward MraForMaskedLM [[autodoc]] MraForMaskedLM - forward MraForSequenceClassification [[autodoc]] MraForSequenceClassification - forward MraForMultipleChoice [[autodoc]] MraForMultipleChoice - forward MraForTokenClassification [[autodoc]] MraForTokenClassification - forward MraForQuestionAnswering [[autodoc]] MraForQuestionAnswering - forward
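Because MRA only changes how self-attention is approximated, the model is used like a BERT-style encoder. The sketch below extracts token features; the uw-madison/mra-base-512-4 checkpoint name is an assumption based on the contributor's converted checkpoints, and padding to the full 512-token window is a conservative choice so the block-wise approximation sees aligned lengths (not necessarily required).

```python
from transformers import AutoTokenizer, MraModel
import torch

# "uw-madison/mra-base-512-4" is an assumed checkpoint name.
tokenizer = AutoTokenizer.from_pretrained("uw-madison/mra-base-512-4")
model = MraModel.from_pretrained("uw-madison/mra-base-512-4")

# Pad to the checkpoint's 512-token window (a conservative, block-aligned choice).
inputs = tokenizer(
    "MRA approximates self-attention with multiresolution analysis.",
    padding="max_length",
    max_length=512,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```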
SwiftFormer Overview The SwiftFormer model was proposed in SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications by Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan. The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called 'SwiftFormer' is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2. The abstract from the paper is the following: Self-attention has become a defacto choice for capturing global context in various vision applications. However, its quadratic computational complexity with respect to image resolution limits its use in real-time applications, especially for deployment on resource-constrained mobile devices. Although hybrid approaches have been proposed to combine the advantages of convolutions and self-attention for a better speed-accuracy trade-off, the expensive matrix multiplication operations in self-attention remain a bottleneck. In this work, we introduce a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations with linear element-wise multiplications. Our design shows that the key-value interaction can be replaced with a linear layer without sacrificing any accuracy. Unlike previous state-of-the-art methods, our efficient formulation of self-attention enables its usage at all stages of the network. Using our proposed efficient additive attention, we build a series of models called "SwiftFormer" which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Our small variant achieves 78.5% top-1 ImageNet-1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2x faster compared to MobileViT-v2. Tips: - One can use the [ViTImageProcessor] API to prepare images for the model. This model was contributed by shehan97. The original code can be found here. SwiftFormerConfig [[autodoc]] SwiftFormerConfig SwiftFormerModel [[autodoc]] SwiftFormerModel - forward SwiftFormerForImageClassification [[autodoc]] SwiftFormerForImageClassification - forward
ImageGPT Overview The ImageGPT model was proposed in Generative Pretraining from Pixels by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. ImageGPT (iGPT) is a GPT-2-like model trained to predict the next pixel value, allowing for both unconditional and conditional image generation. The abstract from the paper is the following: Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images. We train a sequence Transformer to auto-regressively predict pixels, without incorporating knowledge of the 2D input structure. Despite training on low-resolution ImageNet without labels, we find that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and low-data classification. On CIFAR-10, we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide ResNet, and 99.0% accuracy with full fine-tuning, matching the top supervised pre-trained models. We are also competitive with self-supervised benchmarks on ImageNet when substituting pixels for a VQVAE encoding, achieving 69.0% top-1 accuracy on a linear probe of our features. Summary of the approach. Taken from the original paper. This model was contributed by nielsr, based on this issue. The original code can be found here. Tips: ImageGPT is almost exactly the same as GPT-2, with the exception that a different activation function is used (namely "quick gelu"), and the layer normalization layers don't mean center the inputs. ImageGPT also doesn't have tied input- and output embeddings. As the time- and memory requirements of the attention mechanism of Transformers scales quadratically in the sequence length, the authors pre-trained ImageGPT on smaller input resolutions, such as 32x32 and 64x64. However, feeding a sequence of 32x32x3=3072 tokens from 0..255 into a Transformer is still prohibitively large. Therefore, the authors applied k-means clustering to the (R,G,B) pixel values with k=512. This way, we only have a 32*32 = 1024-long sequence, but now of integers in the range 0..511. So we are shrinking the sequence length at the cost of a bigger embedding matrix. In other words, the vocabulary size of ImageGPT is 512, + 1 for a special "start of sentence" (SOS) token, used at the beginning of every sequence. One can use [ImageGPTImageProcessor] to prepare images for the model. Despite being pre-trained entirely unsupervised (i.e. without the use of any labels), ImageGPT produces fairly performant image features useful for downstream tasks, such as image classification. The authors showed that the features in the middle of the network are the most performant, and can be used as-is to train a linear model (such as a sklearn logistic regression model for example). This is also referred to as "linear probing". Features can be easily obtained by first forwarding the image through the model, then specifying output_hidden_states=True, and then average-pool the hidden states at whatever layer you like. Alternatively, one can further fine-tune the entire model on a downstream dataset, similar to BERT. For this, you can use [ImageGPTForImageClassification]. ImageGPT comes in different sizes: there's ImageGPT-small, ImageGPT-medium and ImageGPT-large. The authors did also train an XL variant, which they didn't release. 
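Before turning to the exact model sizes, here is a minimal sketch of the linear-probing recipe described above: forward an image, request the hidden states, and average-pool a middle layer to get one feature vector per image. The openai/imagegpt-small checkpoint name and the image URL are assumptions for illustration.

```python
from transformers import AutoImageProcessor, ImageGPTModel
from PIL import Image
import requests
import torch

# Placeholder image; any RGB image works.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# "openai/imagegpt-small" is assumed to be one of the released checkpoints. The
# processor maps pixels to the 512 color-cluster ids expected by the model.
processor = AutoImageProcessor.from_pretrained("openai/imagegpt-small")
model = ImageGPTModel.from_pretrained("openai/imagegpt-small")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Average-pool a middle layer over the sequence dimension; the resulting vectors
# can be fed to e.g. a scikit-learn logistic regression for linear probing.
middle_layer = len(outputs.hidden_states) // 2
features = outputs.hidden_states[middle_layer].mean(dim=1)
print(features.shape)  # (batch_size, hidden_size)
```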
For the exact depths, hidden sizes and parameter counts of ImageGPT-small, ImageGPT-medium and ImageGPT-large, refer to the original paper or to the configuration files of the corresponding checkpoints on the hub. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ImageGPT. Demo notebooks for ImageGPT can be found here. [ImageGPTForImageClassification] is supported by this example script and notebook. See also: Image classification task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ImageGPTConfig [[autodoc]] ImageGPTConfig ImageGPTFeatureExtractor [[autodoc]] ImageGPTFeatureExtractor - __call__ ImageGPTImageProcessor [[autodoc]] ImageGPTImageProcessor - preprocess ImageGPTModel [[autodoc]] ImageGPTModel - forward ImageGPTForCausalImageModeling [[autodoc]] ImageGPTForCausalImageModeling - forward ImageGPTForImageClassification [[autodoc]] ImageGPTForImageClassification - forward
Deformable DETR Overview The Deformable DETR model was proposed in Deformable DETR: Deformable Transformers for End-to-End Object Detection by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai. Deformable DETR mitigates the slow convergence issues and limited feature spatial resolution of the original DETR by leveraging a new deformable attention module which only attends to a small set of key sampling points around a reference. The abstract from the paper is the following: DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we proposed Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10 times less training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach. Tips: One can use [DeformableDetrImageProcessor] to prepare images (and optional targets) for the model. Training Deformable DETR is equivalent to training the original DETR model. See the resources section below for demo notebooks. Deformable DETR architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Deformable DETR. Demo notebooks regarding inference + fine-tuning on a custom dataset for [DeformableDetrForObjectDetection] can be found here. See also: Object detection task guide. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. DeformableDetrImageProcessor [[autodoc]] DeformableDetrImageProcessor - preprocess - post_process_object_detection DeformableDetrFeatureExtractor [[autodoc]] DeformableDetrFeatureExtractor - call - post_process_object_detection DeformableDetrConfig [[autodoc]] DeformableDetrConfig DeformableDetrModel [[autodoc]] DeformableDetrModel - forward DeformableDetrForObjectDetection [[autodoc]] DeformableDetrForObjectDetection - forward
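As a concrete starting point for the object detection workflow above, here is a minimal inference sketch. The SenseTime/deformable-detr checkpoint name, the image URL and the 0.5 score threshold are assumptions for illustration.

```python
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection
from PIL import Image
import requests
import torch

# Placeholder image; any RGB image works.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# "SenseTime/deformable-detr" is assumed to be the converted base checkpoint.
processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into thresholded detections in (x_min, y_min, x_max, y_max).
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score.item():.2f} at {box.tolist()}")
```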
HerBERT Overview The HerBERT model was proposed in KLEJ: Comprehensive Benchmark for Polish Language Understanding by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, and Ireneusz Gawlik. It is a BERT-based Language Model trained on Polish Corpora using only MLM objective with dynamic masking of whole words. The abstract from the paper is the following: In recent years, a series of Transformer-based models unlocked major improvements in general natural language understanding (NLU) tasks. Such a fast pace of research would not be possible without general NLU benchmarks, which allow for a fair comparison of the proposed methods. However, such benchmarks are available only for a handful of languages. To alleviate this issue, we introduce a comprehensive multi-task benchmark for the Polish language understanding, accompanied by an online leaderboard. It consists of a diverse set of tasks, adopted from existing datasets for named entity recognition, question-answering, textual entailment, and others. We also introduce a new sentiment analysis task for the e-commerce domain, named Allegro Reviews (AR). To ensure a common evaluation scheme and promote models that generalize to different NLU tasks, the benchmark includes datasets from varying domains and applications. Additionally, we release HerBERT, a Transformer-based model trained specifically for the Polish language, which has the best average performance and obtains the best results for three out of nine tasks. Finally, we provide an extensive evaluation, including several standard baselines and recently proposed, multilingual Transformer-based models. Examples of use: thon from transformers import HerbertTokenizer, RobertaModel tokenizer = HerbertTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1") model = RobertaModel.from_pretrained("allegro/herbert-klej-cased-v1") encoded_input = tokenizer.encode("Kto ma lepszą sztukę, ma lepszy rząd – to jasne.", return_tensors="pt") outputs = model(encoded_input) HerBERT can also be loaded using AutoTokenizer and AutoModel: import torch from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1") model = AutoModel.from_pretrained("allegro/herbert-klej-cased-v1") This model was contributed by rmroczkowski. The original code can be found here. HerbertTokenizer [[autodoc]] HerbertTokenizer HerbertTokenizerFast [[autodoc]] HerbertTokenizerFast
TAPEX This model is in maintenance mode only, so we won't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: pip install -U transformers==4.30.0. Overview The TAPEX model was proposed in TAPEX: Table Pre-training via Learning a Neural SQL Executor by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. TAPEX pre-trains a BART model to solve synthetic SQL queries, after which it can be fine-tuned to answer natural language questions related to tabular data, as well as performing table fact checking. TAPEX has been fine-tuned on several datasets: - SQA (Sequential Question Answering by Microsoft) - WTQ (Wiki Table Questions by Stanford University) - WikiSQL (by Salesforce) - TabFact (by USCB NLP Lab). The abstract from the paper is the following: Recent progress in language model pre-training has achieved a great success via leveraging large-scale unstructured textual data. However, it is still a challenge to apply pre-training on structured tabular data due to the absence of large-scale high-quality tabular data. In this paper, we propose TAPEX to show that table pre-training can be achieved by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries and their execution outputs. TAPEX addresses the data scarcity challenge via guiding the language model to mimic a SQL executor on the diverse, large-scale and high-quality synthetic corpus. We evaluate TAPEX on four benchmark datasets. Experimental results demonstrate that TAPEX outperforms previous table pre-training approaches by a large margin and achieves new state-of-the-art results on all of them. This includes improvements on the weakly-supervised WikiSQL denotation accuracy to 89.5% (+2.3%), the WikiTableQuestions denotation accuracy to 57.5% (+4.8%), the SQA denotation accuracy to 74.5% (+3.5%), and the TabFact accuracy to 84.2% (+3.2%). To our knowledge, this is the first work to exploit table pre-training via synthetic executable programs and to achieve new state-of-the-art results on various downstream tasks. Tips: TAPEX is a generative (seq2seq) model. One can directly plug in the weights of TAPEX into a BART model. TAPEX has checkpoints on the hub that are either pre-trained only, or fine-tuned on WTQ, SQA, WikiSQL and TabFact. Sentences + tables are presented to the model as sentence + " " + linearized table. The linearized table has the following format: col: col1 | col2 | col 3 row 1 : val1 | val2 | val3 row 2 : . TAPEX has its own tokenizer, that allows to prepare all data for the model easily. One can pass Pandas DataFrames and strings to the tokenizer, and it will automatically create the input_ids and attention_mask (as shown in the usage examples below). Usage: inference Below, we illustrate how to use TAPEX for table question answering. As one can see, one can directly plug in the weights of TAPEX into a BART model. We use the Auto API, which will automatically instantiate the appropriate tokenizer ([TapexTokenizer]) and model ([BartForConditionalGeneration]) for us, based on the configuration file of the checkpoint on the hub. 
thon from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import pandas as pd tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-large-finetuned-wtq") model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/tapex-large-finetuned-wtq") prepare table + question data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]} table = pd.DataFrame.from_dict(data) question = "how many movies does Leonardo Di Caprio have?" encoding = tokenizer(table, question, return_tensors="pt") let the model generate an answer autoregressively outputs = model.generate(**encoding) decode back to text predicted_answer = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0] print(predicted_answer) 53 Note that [TapexTokenizer] also supports batched inference. Hence, one can provide a batch of different tables/questions, or a batch of a single table and multiple questions, or a batch of a single query and multiple tables. Let's illustrate this: thon prepare table + question data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]} table = pd.DataFrame.from_dict(data) questions = [ "how many movies does Leonardo Di Caprio have?", "which actor has 69 movies?", "what's the first name of the actor who has 87 movies?", ] encoding = tokenizer(table, questions, padding=True, return_tensors="pt") let the model generate an answer autoregressively outputs = model.generate(**encoding) decode back to text tokenizer.batch_decode(outputs, skip_special_tokens=True) [' 53', ' george clooney', ' brad pitt'] In case one wants to do table verification (i.e. the task of determining whether a given sentence is supported or refuted by the contents of a table), one can instantiate a [BartForSequenceClassification] model. TAPEX has checkpoints on the hub fine-tuned on TabFact, an important benchmark for table fact checking (it achieves 84% accuracy). The code example below again leverages the Auto API. thon from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-large-finetuned-tabfact") model = AutoModelForSequenceClassification.from_pretrained("microsoft/tapex-large-finetuned-tabfact") prepare table + sentence data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]} table = pd.DataFrame.from_dict(data) sentence = "George Clooney has 30 movies" encoding = tokenizer(table, sentence, return_tensors="pt") forward pass outputs = model(**encoding) print prediction predicted_class_idx = outputs.logits[0].argmax(dim=0).item() print(model.config.id2label[predicted_class_idx]) Refused TapexTokenizer [[autodoc]] TapexTokenizer - call - save_vocabulary
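To make the linearized table format described in the tips above concrete, you can decode what [TapexTokenizer] produces for a small table. A minimal sketch, assuming the publicly available microsoft/tapex-base checkpoint (the exact decoded string may vary slightly between library versions):

```python
from transformers import TapexTokenizer
import pandas as pd

tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base")

table = pd.DataFrame({"col1": ["val1", "val4"], "col2": ["val2", "val5"], "col3": ["val3", "val6"]})
question = "what is in col1?"

# the tokenizer flattens the DataFrame into the "col : ... row 1 : ... row 2 : ..." string for us
encoding = tokenizer(table=table, query=question, return_tensors="pt")
print(tokenizer.decode(encoding["input_ids"][0]))
```

Decoding the input ids back to text shows exactly the sentence + linearized table the model sees.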
MobileViT Overview The MobileViT model was proposed in MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer by Sachin Mehta and Mohammad Rastegari. MobileViT introduces a new layer that replaces local processing in convolutions with global processing using transformers. The abstract from the paper is the following: Light-weight convolutional neural networks (CNNs) are the de-facto for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision trans-formers (ViTs) have been adopted. Unlike CNNs, ViTs are heavy-weight. In this paper, we ask the following question: is it possible to combine the strengths of CNNs and ViTs to build a light-weight and low latency network for mobile vision tasks? Towards this end, we introduce MobileViT, a light-weight and general-purpose vision transformer for mobile devices. MobileViT presents a different perspective for the global processing of information with transformers, i.e., transformers as convolutions. Our results show that MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeIT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of parameters. Tips: MobileViT is more like a CNN than a Transformer model. It does not work on sequence data but on batches of images. Unlike ViT, there are no embeddings. The backbone model outputs a feature map. You can follow this tutorial for a lightweight introduction. One can use [MobileViTImageProcessor] to prepare images for the model. Note that if you do your own preprocessing, the pretrained checkpoints expect images to be in BGR pixel order (not RGB). The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). The segmentation model uses a DeepLabV3 head. The available semantic segmentation checkpoints are pre-trained on PASCAL VOC. As the name suggests MobileViT was designed to be performant and efficient on mobile phones. The TensorFlow versions of the MobileViT models are fully compatible with TensorFlow Lite. You can use the following code to convert a MobileViT checkpoint (be it image classification or semantic segmentation) to generate a TensorFlow Lite model: from transformers import TFMobileViTForImageClassification import tensorflow as tf model_ckpt = "apple/mobilevit-xx-small" model = TFMobileViTForImageClassification.from_pretrained(model_ckpt) converter = tf.lite.TFLiteConverter.from_keras_model(model) converter.optimizations = [tf.lite.Optimize.DEFAULT] converter.target_spec.supported_ops = [ tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS, ] tflite_model = converter.convert() tflite_filename = model_ckpt.split("/")[-1] + ".tflite" with open(tflite_filename, "wb") as f: f.write(tflite_model) The resulting model will be just about an MB making it a good fit for mobile applications where resources and network bandwidth can be constrained. This model was contributed by matthijs. The TensorFlow version of the model was contributed by sayakpaul. 
The original code and weights can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileViT. [MobileViTForImageClassification] is supported by this example script and notebook. See also: Image classification task guide Semantic segmentation - Semantic segmentation task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. MobileViTConfig [[autodoc]] MobileViTConfig MobileViTFeatureExtractor [[autodoc]] MobileViTFeatureExtractor - call - post_process_semantic_segmentation MobileViTImageProcessor [[autodoc]] MobileViTImageProcessor - preprocess - post_process_semantic_segmentation MobileViTModel [[autodoc]] MobileViTModel - forward MobileViTForImageClassification [[autodoc]] MobileViTForImageClassification - forward MobileViTForSemanticSegmentation [[autodoc]] MobileViTForSemanticSegmentation - forward TFMobileViTModel [[autodoc]] TFMobileViTModel - call TFMobileViTForImageClassification [[autodoc]] TFMobileViTForImageClassification - call TFMobileViTForSemanticSegmentation [[autodoc]] TFMobileViTForSemanticSegmentation - call
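Complementing the TensorFlow Lite conversion shown above, the PyTorch checkpoints can be used for plain image classification through [MobileViTImageProcessor] and [MobileViTForImageClassification]. A minimal sketch, assuming the apple/mobilevit-small checkpoint and a COCO sample image (both are illustrative choices):

```python
import requests
import torch
from PIL import Image
from transformers import MobileViTImageProcessor, MobileViTForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = MobileViTImageProcessor.from_pretrained("apple/mobilevit-small")
model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-small")

# the image processor handles resizing and the BGR channel order the checkpoints expect
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_idx = logits.argmax(-1).item()
print(model.config.id2label[predicted_class_idx])
```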
SEW-D Overview SEW-D (Squeezed and Efficient Wav2Vec with Disentangled attention) was proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. The abstract from the paper is the following: This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. Tips: SEW-D is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. SEWDForCTC is fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [Wav2Vec2CTCTokenizer]. This model was contributed by anton-l. Documentation resources Audio classification task guide Automatic speech recognition task guide SEWDConfig [[autodoc]] SEWDConfig SEWDModel [[autodoc]] SEWDModel - forward SEWDForCTC [[autodoc]] SEWDForCTC - forward SEWDForSequenceClassification [[autodoc]] SEWDForSequenceClassification - forward
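To illustrate the CTC decoding flow mentioned in the tips, the sketch below runs a waveform through [SEWDForCTC] and decodes the logits with the processor. The checkpoint name (asapp/sew-d-tiny-100k-ft-ls100h) and the random waveform are illustrative assumptions; in practice you would pass real 16 kHz audio.

```python
import torch
from transformers import AutoProcessor, SEWDForCTC

# illustrative CTC fine-tuned checkpoint; any SEW-D CTC checkpoint follows the same pattern
processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")

# stand-in for one second of 16 kHz audio; replace with a real waveform
waveform = torch.randn(16000)

inputs = processor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding: most likely token per frame, then collapse repeats and blanks
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```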
YOLOS Overview The YOLOS model was proposed in You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. YOLOS proposes to just leverage the plain Vision Transformer (ViT) for object detection, inspired by DETR. It turns out that a base-sized encoder-only Transformer can also achieve 42 AP on COCO, similar to DETR and much more complex frameworks such as Faster R-CNN. The abstract from the paper is the following: Can Transformer perform 2D object- and region-level recognition from a pure sequence-to-sequence perspective with minimal knowledge about the 2D spatial structure? To answer this question, we present You Only Look at One Sequence (YOLOS), a series of object detection models based on the vanilla Vision Transformer with the fewest possible modifications, region priors, as well as inductive biases of the target task. We find that YOLOS pre-trained on the mid-sized ImageNet-1k dataset only can already achieve quite competitive performance on the challenging COCO object detection benchmark, e.g., YOLOS-Base directly adopted from BERT-Base architecture can obtain 42.0 box AP on COCO val. We also discuss the impacts as well as limitations of current pre-train schemes and model scaling strategies for Transformer in vision through YOLOS. Tips: One can use [YolosImageProcessor] for preparing images (and optional targets) for the model. Contrary to DETR, YOLOS doesn't require a pixel_mask to be created. YOLOS architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with YOLOS. All example notebooks illustrating inference + fine-tuning [YolosForObjectDetection] on a custom dataset can be found here. See also: Object detection task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. YolosConfig [[autodoc]] YolosConfig YolosImageProcessor [[autodoc]] YolosImageProcessor - preprocess - pad - post_process_object_detection YolosFeatureExtractor [[autodoc]] YolosFeatureExtractor - call - pad - post_process_object_detection YolosModel [[autodoc]] YolosModel - forward YolosForObjectDetection [[autodoc]] YolosForObjectDetection - forward
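A minimal object-detection sketch, assuming the hustvl/yolos-tiny checkpoint and a COCO sample image (both are illustrative choices); note that, as stated in the tips, no pixel_mask is required:

```python
import requests
import torch
from PIL import Image
from transformers import YolosImageProcessor, YolosForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = YolosImageProcessor.from_pretrained("hustvl/yolos-tiny")
model = YolosForObjectDetection.from_pretrained("hustvl/yolos-tiny")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# convert raw outputs to (xmin, ymin, xmax, ymax) boxes at the original resolution, keeping scores > 0.9
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 1) for c in box.tolist()])
```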
PLBart DISCLAIMER: If you see something strange, file a Github Issue and assign @gchhablani. Overview of PLBart The PLBART model was proposed in Unified Pre-training for Program Understanding and Generation by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. This is a BART-like model which can be used to perform code-summarization, code-generation, and code-translation tasks. The pre-trained model plbart-base has been trained using multilingual denoising task on Java, Python and English. According to the abstract Code summarization and generation empower conversion between programming language (PL) and natural language (NL), while code translation avails the migration of legacy code from one PL to another. This paper introduces PLBART, a sequence-to-sequence model capable of performing a broad spectrum of program and language understanding and generation tasks. PLBART is pre-trained on an extensive collection of Java and Python functions and associated NL text via denoising autoencoding. Experiments on code summarization in the English language, code generation, and code translation in seven programming languages show that PLBART outperforms or rivals state-of-the-art models. Moreover, experiments on discriminative tasks, e.g., program repair, clone detection, and vulnerable code detection, demonstrate PLBART's effectiveness in program understanding. Furthermore, analysis reveals that PLBART learns program syntax, style (e.g., identifier naming convention), logical flow (e.g., if block inside an else block is equivalent to else if block) that are crucial to program semantics and thus excels even with limited annotations. This model was contributed by gchhablani. The Authors' code can be found here. Training of PLBart PLBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for code-to-text, text-to-code, code-to-code tasks. As the model is multilingual it expects the sequences in a different format. A special language id token is added in both the source and target text. The source text format is X [eos, src_lang_code] where X is the source text. The target text format is [tgt_lang_code] X [eos]. bos is never used. However, for fine-tuning, in some cases no language token is provided in cases where a single language is used. Please refer to the paper to learn more about this. In cases where the language code is needed, the regular [~PLBartTokenizer.__call__] will encode source text format when you pass texts as the first argument or with the keyword argument text, and will encode target text format if it's passed with the text_target keyword argument. Supervised training thon from transformers import PLBartForConditionalGeneration, PLBartTokenizer tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-base", src_lang="en_XX", tgt_lang="python") example_python_phrase = "def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])" expected_translation_english = "Returns the maximum value of a b c." inputs = tokenizer(example_python_phrase, text_target=expected_translation_english, return_tensors="pt") model(**inputs) Generation While generating the target text set the decoder_start_token_id to the target language id. The following example shows how to translate Python to English using the uclanlp/plbart-python-en_XX model. 
thon from transformers import PLBartForConditionalGeneration, PLBartTokenizer tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-python-en_XX", src_lang="python", tgt_lang="en_XX") example_python_phrase = "def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])" inputs = tokenizer(example_python_phrase, return_tensors="pt") model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-python-en_XX") translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"]) tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] "Returns the maximum value of a b c." Documentation resources Text classification task guide Causal language modeling task guide Translation task guide Summarization task guide PLBartConfig [[autodoc]] PLBartConfig PLBartTokenizer [[autodoc]] PLBartTokenizer - build_inputs_with_special_tokens PLBartModel [[autodoc]] PLBartModel - forward PLBartForConditionalGeneration [[autodoc]] PLBartForConditionalGeneration - forward PLBartForSequenceClassification [[autodoc]] PLBartForSequenceClassification - forward PLBartForCausalLM [[autodoc]] PLBartForCausalLM - forward
BERT Overview The BERT model was proposed in BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. It's a bidirectional transformer pretrained using a combination of masked language modeling objective and next sentence prediction on a large corpus comprising the Toronto Book Corpus and Wikipedia. The abstract from the paper is the following: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement). Tips: BERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. BERT was trained with the masked language modeling (MLM) and next sentence prediction (NSP) objectives. It is efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Corrupts the inputs by using random masking, more precisely, during pretraining, a given percentage of tokens (usually 15%) is masked by: a special mask token with probability 0.8 a random token different from the one masked with probability 0.1 the same token with probability 0.1 The model must predict the original sentence, but has a second objective: inputs are two sentences A and B (with a separation token in between). With probability 50%, the sentences are consecutive in the corpus, in the remaining 50% they are not related. The model has to predict if the sentences are consecutive or not. This model was contributed by thomwolf. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A blog post on BERT Text Classification in a different language. A notebook for Finetuning BERT (and friends) for multi-label text classification. A notebook on how to Finetune BERT for multi-label classification using PyTorch. 🌎 A notebook on how to warm-start an EncoderDecoder model with BERT for summarization. [BertForSequenceClassification] is supported by this example script and notebook. [TFBertForSequenceClassification] is supported by this example script and notebook. [FlaxBertForSequenceClassification] is supported by this example script and notebook. Text classification task guide A blog post on how to use Hugging Face Transformers with Keras: Fine-tune a non-English BERT for Named Entity Recognition. 
A notebook for Finetuning BERT for named-entity recognition using only the first wordpiece of each word in the word label during tokenization. To propagate the label of the word to all wordpieces, see this version of the notebook instead. [BertForTokenClassification] is supported by this example script and notebook. [TFBertForTokenClassification] is supported by this example script and notebook. [FlaxBertForTokenClassification] is supported by this example script. Token classification chapter of the 🤗 Hugging Face Course. Token classification task guide [BertForMaskedLM] is supported by this example script and notebook. [TFBertForMaskedLM] is supported by this example script and notebook. [FlaxBertForMaskedLM] is supported by this example script and notebook. Masked language modeling chapter of the 🤗 Hugging Face Course. Masked language modeling task guide [BertForQuestionAnswering] is supported by this example script and notebook. [TFBertForQuestionAnswering] is supported by this example script and notebook. [FlaxBertForQuestionAnswering] is supported by this example script. Question answering chapter of the 🤗 Hugging Face Course. Question answering task guide Multiple choice - [BertForMultipleChoice] is supported by this example script and notebook. - [TFBertForMultipleChoice] is supported by this example script and notebook. - Multiple choice task guide ⚡️ Inference - A blog post on how to Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia. - A blog post on how to Accelerate BERT inference with DeepSpeed-Inference on GPUs. ⚙️ Pretraining - A blog post on Pre-Training BERT with Hugging Face Transformers and Habana Gaudi. 🚀 Deploy - A blog post on how to Convert Transformers to ONNX with Hugging Face Optimum. - A blog post on how to Setup Deep Learning environment for Hugging Face Transformers with Habana Gaudi on AWS. - A blog post on Autoscaling BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module. - A blog post on Serverless BERT with HuggingFace, AWS Lambda, and Docker. - A blog post on Hugging Face Transformers BERT fine-tuning using Amazon SageMaker and Training Compiler. - A blog post on Task-specific knowledge distillation for BERT using Transformers & Amazon SageMaker. 
BertConfig [[autodoc]] BertConfig - all BertTokenizer [[autodoc]] BertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary BertTokenizerFast [[autodoc]] BertTokenizerFast TFBertTokenizer [[autodoc]] TFBertTokenizer Bert specific outputs [[autodoc]] models.bert.modeling_bert.BertForPreTrainingOutput [[autodoc]] models.bert.modeling_tf_bert.TFBertForPreTrainingOutput [[autodoc]] models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput BertModel [[autodoc]] BertModel - forward BertForPreTraining [[autodoc]] BertForPreTraining - forward BertLMHeadModel [[autodoc]] BertLMHeadModel - forward BertForMaskedLM [[autodoc]] BertForMaskedLM - forward BertForNextSentencePrediction [[autodoc]] BertForNextSentencePrediction - forward BertForSequenceClassification [[autodoc]] BertForSequenceClassification - forward BertForMultipleChoice [[autodoc]] BertForMultipleChoice - forward BertForTokenClassification [[autodoc]] BertForTokenClassification - forward BertForQuestionAnswering [[autodoc]] BertForQuestionAnswering - forward TFBertModel [[autodoc]] TFBertModel - call TFBertForPreTraining [[autodoc]] TFBertForPreTraining - call TFBertModelLMHeadModel [[autodoc]] TFBertLMHeadModel - call TFBertForMaskedLM [[autodoc]] TFBertForMaskedLM - call TFBertForNextSentencePrediction [[autodoc]] TFBertForNextSentencePrediction - call TFBertForSequenceClassification [[autodoc]] TFBertForSequenceClassification - call TFBertForMultipleChoice [[autodoc]] TFBertForMultipleChoice - call TFBertForTokenClassification [[autodoc]] TFBertForTokenClassification - call TFBertForQuestionAnswering [[autodoc]] TFBertForQuestionAnswering - call FlaxBertModel [[autodoc]] FlaxBertModel - call FlaxBertForPreTraining [[autodoc]] FlaxBertForPreTraining - call FlaxBertForCausalLM [[autodoc]] FlaxBertForCausalLM - call FlaxBertForMaskedLM [[autodoc]] FlaxBertForMaskedLM - call FlaxBertForNextSentencePrediction [[autodoc]] FlaxBertForNextSentencePrediction - call FlaxBertForSequenceClassification [[autodoc]] FlaxBertForSequenceClassification - call FlaxBertForMultipleChoice [[autodoc]] FlaxBertForMultipleChoice - call FlaxBertForTokenClassification [[autodoc]] FlaxBertForTokenClassification - call FlaxBertForQuestionAnswering [[autodoc]] FlaxBertForQuestionAnswering - call
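The masked language modeling objective described in the tips above can be exercised directly with the fill-mask pipeline, which wraps [BertForMaskedLM] and its tokenizer. A minimal sketch, assuming the bert-base-uncased checkpoint:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# the model predicts the most likely tokens for the [MASK] position
for prediction in unmasker("Paris is the [MASK] of France."):
    print(prediction["token_str"], round(prediction["score"], 3))
```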
MGP-STR Overview The MGP-STR model was proposed in Multi-Granularity Prediction for Scene Text Recognition by Peng Wang, Cheng Da, and Cong Yao. MGP-STR is a conceptually simple yet powerful vision Scene Text Recognition (STR) model, which is built upon the Vision Transformer (ViT). To integrate linguistic knowledge, Multi-Granularity Prediction (MGP) strategy is proposed to inject information from the language modality into the model in an implicit way. The abstract from the paper is the following: Scene text recognition (STR) has been an active research topic in computer vision for years. To tackle this challenging problem, numerous innovative methods have been successively proposed and incorporating linguistic knowledge into STR models has recently become a prominent trend. In this work, we first draw inspiration from the recent progress in Vision Transformer (ViT) to construct a conceptually simple yet powerful vision STR model, which is built upon ViT and outperforms previous state-of-the-art models for scene text recognition, including both pure vision models and language-augmented methods. To integrate linguistic knowledge, we further propose a Multi-Granularity Prediction strategy to inject information from the language modality into the model in an implicit way, i.e. , subword representations (BPE and WordPiece) widely-used in NLP are introduced into the output space, in addition to the conventional character level representation, while no independent language model (LM) is adopted. The resultant algorithm (termed MGP-STR) is able to push the performance envelop of STR to an even higher level. Specifically, it achieves an average recognition accuracy of 93.35% on standard benchmarks. MGP-STR architecture. Taken from the original paper. Tips: MGP-STR is trained on two synthetic datasets MJSynth (MJ) and SynthText(http://www.robots.ox.ac.uk/~vgg/data/scenetext/) (ST) without fine-tuning on other datasets. It achieves state-of-the-art results on six standard Latin scene text benchmarks, including 3 regular text datasets (IC13, SVT, IIIT) and 3 irregular ones (IC15, SVTP, CUTE). This model was contributed by yuekun. The original code can be found here. Inference [MgpstrModel] accepts images as input and generates three types of predictions, which represent textual information at different granularities. The three types of predictions are fused to give the final prediction result. The [ViTImageProcessor] class is responsible for preprocessing the input image and [MgpstrTokenizer] decodes the generated character tokens to the target string. The [MgpstrProcessor] wraps [ViTImageProcessor] and [MgpstrTokenizer] into a single instance to both extract the input features and decode the predicted token ids. 
Step-by-step Optical Character Recognition (OCR)
```py
from transformers import MgpstrProcessor, MgpstrForSceneTextRecognition
import requests
from PIL import Image

processor = MgpstrProcessor.from_pretrained('alibaba-damo/mgp-str-base')
model = MgpstrForSceneTextRecognition.from_pretrained('alibaba-damo/mgp-str-base')

# load image from the IIIT-5k dataset
url = "https://i.postimg.cc/ZKwLg2Gw/367-14.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

pixel_values = processor(images=image, return_tensors="pt").pixel_values
outputs = model(pixel_values)

generated_text = processor.batch_decode(outputs.logits)['generated_text']
```
MgpstrConfig [[autodoc]] MgpstrConfig MgpstrTokenizer [[autodoc]] MgpstrTokenizer - save_vocabulary MgpstrProcessor [[autodoc]] MgpstrProcessor - call - batch_decode MgpstrModel [[autodoc]] MgpstrModel - forward MgpstrForSceneTextRecognition [[autodoc]] MgpstrForSceneTextRecognition - forward
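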
QDQBERT Overview The QDQBERT model can be referenced in Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. The abstract from the paper is the following: Quantization techniques can reduce the size of Deep Neural Networks and improve inference latency and throughput by taking advantage of high throughput integer instructions. In this paper we review the mathematical aspects of quantization parameters and evaluate their choices on a wide range of neural network models for different application domains, including vision, speech, and language. We focus on quantization techniques that are amenable to acceleration by processors with high-throughput integer math pipelines. We also present a workflow for 8-bit quantization that is able to maintain accuracy within 1% of the floating-point baseline on all networks studied, including models that are more difficult to quantize, such as MobileNets and BERT-large. Tips: QDQBERT model adds fake quantization operations (pair of QuantizeLinear/DequantizeLinear ops) to (i) linear layer inputs and weights, (ii) matmul inputs, (iii) residual add inputs, in BERT model. QDQBERT requires the dependency of Pytorch Quantization Toolkit. To install pip install pytorch-quantization --extra-index-url https://pypi.ngc.nvidia.com QDQBERT model can be loaded from any checkpoint of HuggingFace BERT model (for example bert-base-uncased), and perform Quantization Aware Training/Post Training Quantization. A complete example of using QDQBERT model to perform Quatization Aware Training and Post Training Quantization for SQUAD task can be found at transformers/examples/research_projects/quantization-qdqbert/. This model was contributed by shangz. Set default quantizers QDQBERT model adds fake quantization operations (pair of QuantizeLinear/DequantizeLinear ops) to BERT by TensorQuantizer in Pytorch Quantization Toolkit. TensorQuantizer is the module for quantizing tensors, with QuantDescriptor defining how the tensor should be quantized. Refer to Pytorch Quantization Toolkit userguide for more details. Before creating QDQBERT model, one has to set the default QuantDescriptor defining default tensor quantizers. Example: thon import pytorch_quantization.nn as quant_nn from pytorch_quantization.tensor_quant import QuantDescriptor The default tensor quantizer is set to use Max calibration method input_desc = QuantDescriptor(num_bits=8, calib_method="max") The default tensor quantizer is set to be per-channel quantization for weights weight_desc = QuantDescriptor(num_bits=8, axis=((0,))) quant_nn.QuantLinear.set_default_quant_desc_input(input_desc) quant_nn.QuantLinear.set_default_quant_desc_weight(weight_desc) Calibration Calibration is the terminology of passing data samples to the quantizer and deciding the best scaling factors for tensors. 
After setting up the tensor quantizers, one can use the following example to calibrate the model: thon Find the TensorQuantizer and enable calibration for name, module in model.named_modules(): if name.endswith("_input_quantizer"): module.enable_calib() module.disable_quant() # Use full precision data to calibrate Feeding data samples model(x) Finalize calibration for name, module in model.named_modules(): if name.endswith("_input_quantizer"): module.load_calib_amax() module.enable_quant() If running on GPU, it needs to call .cuda() again because new tensors will be created by calibration process model.cuda() Keep running the quantized model Export to ONNX The goal of exporting to ONNX is to deploy inference by TensorRT. Fake quantization will be broken into a pair of QuantizeLinear/DequantizeLinear ONNX ops. After setting static member of TensorQuantizer to use Pytorch’s own fake quantization functions, fake quantized model can be exported to ONNX, follow the instructions in torch.onnx. Example: thon from pytorch_quantization.nn import TensorQuantizer TensorQuantizer.use_fb_fake_quant = True Load the calibrated model ONNX export torch.onnx.export() Documentation resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide QDQBertConfig [[autodoc]] QDQBertConfig QDQBertModel [[autodoc]] QDQBertModel - forward QDQBertLMHeadModel [[autodoc]] QDQBertLMHeadModel - forward QDQBertForMaskedLM [[autodoc]] QDQBertForMaskedLM - forward QDQBertForSequenceClassification [[autodoc]] QDQBertForSequenceClassification - forward QDQBertForNextSentencePrediction [[autodoc]] QDQBertForNextSentencePrediction - forward QDQBertForMultipleChoice [[autodoc]] QDQBertForMultipleChoice - forward QDQBertForTokenClassification [[autodoc]] QDQBertForTokenClassification - forward QDQBertForQuestionAnswering [[autodoc]] QDQBertForQuestionAnswering - forward
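The ONNX export step above is only outlined; a more complete sketch is shown below. It assumes a QDQBERT question-answering model that has already been calibrated and saved to a hypothetical local directory calibrated-qdqbert-squad; adapt input shapes, output names and opset to your own setup.

```python
import torch
from pytorch_quantization.nn import TensorQuantizer
from transformers import QDQBertForQuestionAnswering

# switch the fake-quantization nodes to PyTorch's own functions so that they are
# exported as pairs of QuantizeLinear/DequantizeLinear ONNX ops
TensorQuantizer.use_fb_fake_quant = True

# "calibrated-qdqbert-squad" is a placeholder for your own calibrated checkpoint
model = QDQBertForQuestionAnswering.from_pretrained("calibrated-qdqbert-squad")
model.config.return_dict = False  # export plain tuples instead of ModelOutput objects
model.eval()

dummy_input_ids = torch.ones(1, 128, dtype=torch.long)
dummy_attention_mask = torch.ones(1, 128, dtype=torch.long)

torch.onnx.export(
    model,
    (dummy_input_ids, dummy_attention_mask),
    "qdqbert.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["start_logits", "end_logits"],
    opset_version=13,
    do_constant_folding=True,
)
```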
TimeSformer Overview The TimeSformer model was proposed in TimeSformer: Is Space-Time Attention All You Need for Video Understanding? by Facebook Research. This work is a milestone in the action-recognition field, being the first video transformer, and it inspired many subsequent transformer-based video understanding and classification papers. The abstract from the paper is the following: We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named "TimeSformer," adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that "divided attention," where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically new design, TimeSformer achieves state-of-the-art results on several action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Finally, compared to 3D convolutional networks, our model is faster to train, it can achieve dramatically higher test efficiency (at a small drop in accuracy), and it can also be applied to much longer video clips (over one minute long). Code and models are available at: this https URL. Tips: There are many pretrained variants. Select your pretrained model based on the dataset it was trained on. Moreover, the number of input frames per clip changes with the model size, so keep this parameter in mind when selecting your pretrained model. This model was contributed by fcakyon. The original code can be found here. Documentation resources Video classification task guide TimesformerConfig [[autodoc]] TimesformerConfig TimesformerModel [[autodoc]] TimesformerModel - forward TimesformerForVideoClassification [[autodoc]] TimesformerForVideoClassification - forward
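A minimal video-classification sketch, assuming the facebook/timesformer-base-finetuned-k400 checkpoint and using random frames as a stand-in for a real clip (this checkpoint expects 8 frames):

```python
import numpy as np
import torch
from transformers import AutoImageProcessor, TimesformerForVideoClassification

image_processor = AutoImageProcessor.from_pretrained("facebook/timesformer-base-finetuned-k400")
model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-base-finetuned-k400")

# stand-in for a clip of 8 RGB frames; replace with frames sampled from a real video
video = list(np.random.randint(0, 256, (8, 3, 224, 224), dtype=np.uint8))

inputs = image_processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_idx = logits.argmax(-1).item()
print(model.config.id2label[predicted_class_idx])
```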
XLM-RoBERTa-XL Overview The XLM-RoBERTa-XL model was proposed in Larger-Scale Transformers for Multilingual Masked Language Modeling by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. The abstract from the paper is the following: Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models dubbed XLM-R XL and XLM-R XXL outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests pretrained models with larger capacity may obtain both strong performance on high-resource languages while greatly improving low-resource languages. We make our code and models publicly available. Tips: XLM-RoBERTa-XL is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does not require lang tensors to understand which language is used, and should be able to determine the correct language from the input ids. This model was contributed by Soonhwan-Kwon and stefan-it. The original code can be found here. Documentation resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide XLMRobertaXLConfig [[autodoc]] XLMRobertaXLConfig XLMRobertaXLModel [[autodoc]] XLMRobertaXLModel - forward XLMRobertaXLForCausalLM [[autodoc]] XLMRobertaXLForCausalLM - forward XLMRobertaXLForMaskedLM [[autodoc]] XLMRobertaXLForMaskedLM - forward XLMRobertaXLForSequenceClassification [[autodoc]] XLMRobertaXLForSequenceClassification - forward XLMRobertaXLForMultipleChoice [[autodoc]] XLMRobertaXLForMultipleChoice - forward XLMRobertaXLForTokenClassification [[autodoc]] XLMRobertaXLForTokenClassification - forward XLMRobertaXLForQuestionAnswering [[autodoc]] XLMRobertaXLForQuestionAnswering - forward
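Because the model infers the language from the input ids alone, the same masked-language-modeling call works for any covered language. A minimal sketch, assuming the facebook/xlm-roberta-xl checkpoint (note that this is a 3.5B-parameter model, so loading it needs a correspondingly large amount of memory):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="facebook/xlm-roberta-xl")

# no lang tensors: the same call handles different languages
print(unmasker("Paris est la <mask> de la France.")[0]["token_str"])
print(unmasker("Berlin is the <mask> of Germany.")[0]["token_str"])
```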
RoBERTa Overview The RoBERTa model was proposed in RoBERTa: A Robustly Optimized BERT Pretraining Approach by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. It is based on Google's BERT model released in 2018. It builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining objective and training with much larger mini-batches and learning rates. The abstract from the paper is the following: Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code. Tips: This implementation is the same as [BertModel] with a tiny embeddings tweak as well as a setup for Roberta pretrained models. RoBERTa has the same architecture as BERT, but uses a byte-level BPE as a tokenizer (same as GPT-2) and uses a different pretraining scheme. RoBERTa doesn't have token_type_ids, you don't need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or </s>) Same as BERT with better pretraining tricks: dynamic masking: tokens are masked differently at each epoch, whereas BERT does it once and for all together to reach 512 tokens (so the sentences are in an order than may span several documents) train with larger batches use BPE with bytes as a subunit and not characters (because of unicode characters) CamemBERT is a wrapper around RoBERTa. Refer to this page for usage examples. This model was contributed by julien-c. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A blog on Getting Started with Sentiment Analysis on Twitter using RoBERTa and the Inference API. A blog on Opinion Classification with Kili and Hugging Face AutoTrain using RoBERTa. A notebook on how to finetune RoBERTa for sentiment analysis. 🌎 [RobertaForSequenceClassification] is supported by this example script and notebook. [TFRobertaForSequenceClassification] is supported by this example script and notebook. [FlaxRobertaForSequenceClassification] is supported by this example script and notebook. Text classification task guide [RobertaForTokenClassification] is supported by this example script and notebook. [TFRobertaForTokenClassification] is supported by this example script and notebook. [FlaxRobertaForTokenClassification] is supported by this example script. Token classification chapter of the 🤗 Hugging Face Course. 
Token classification task guide A blog on How to train a new language model from scratch using Transformers and Tokenizers with RoBERTa. [RobertaForMaskedLM] is supported by this example script and notebook. [TFRobertaForMaskedLM] is supported by this example script and notebook. [FlaxRobertaForMaskedLM] is supported by this example script and notebook. Masked language modeling chapter of the 🤗 Hugging Face Course. Masked language modeling task guide A blog on Accelerated Inference with Optimum and Transformers Pipelines with RoBERTa for question answering. [RobertaForQuestionAnswering] is supported by this example script and notebook. [TFRobertaForQuestionAnswering] is supported by this example script and notebook. [FlaxRobertaForQuestionAnswering] is supported by this example script. Question answering chapter of the 🤗 Hugging Face Course. Question answering task guide Multiple choice - [RobertaForMultipleChoice] is supported by this example script and notebook. - [TFRobertaForMultipleChoice] is supported by this example script and notebook. - Multiple choice task guide RobertaConfig [[autodoc]] RobertaConfig RobertaTokenizer [[autodoc]] RobertaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary RobertaTokenizerFast [[autodoc]] RobertaTokenizerFast - build_inputs_with_special_tokens RobertaModel [[autodoc]] RobertaModel - forward RobertaForCausalLM [[autodoc]] RobertaForCausalLM - forward RobertaForMaskedLM [[autodoc]] RobertaForMaskedLM - forward RobertaForSequenceClassification [[autodoc]] RobertaForSequenceClassification - forward RobertaForMultipleChoice [[autodoc]] RobertaForMultipleChoice - forward RobertaForTokenClassification [[autodoc]] RobertaForTokenClassification - forward RobertaForQuestionAnswering [[autodoc]] RobertaForQuestionAnswering - forward TFRobertaModel [[autodoc]] TFRobertaModel - call TFRobertaForCausalLM [[autodoc]] TFRobertaForCausalLM - call TFRobertaForMaskedLM [[autodoc]] TFRobertaForMaskedLM - call TFRobertaForSequenceClassification [[autodoc]] TFRobertaForSequenceClassification - call TFRobertaForMultipleChoice [[autodoc]] TFRobertaForMultipleChoice - call TFRobertaForTokenClassification [[autodoc]] TFRobertaForTokenClassification - call TFRobertaForQuestionAnswering [[autodoc]] TFRobertaForQuestionAnswering - call FlaxRobertaModel [[autodoc]] FlaxRobertaModel - call FlaxRobertaForCausalLM [[autodoc]] FlaxRobertaForCausalLM - call FlaxRobertaForMaskedLM [[autodoc]] FlaxRobertaForMaskedLM - call FlaxRobertaForSequenceClassification [[autodoc]] FlaxRobertaForSequenceClassification - call FlaxRobertaForMultipleChoice [[autodoc]] FlaxRobertaForMultipleChoice - call FlaxRobertaForTokenClassification [[autodoc]] FlaxRobertaForTokenClassification - call FlaxRobertaForQuestionAnswering [[autodoc]] FlaxRobertaForQuestionAnswering - call
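The segment-handling tip above (no token_type_ids; segments separated with the </s> token) can be verified directly from the tokenizer output. A minimal sketch, assuming the roberta-base checkpoint:

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

# passing two segments: RoBERTa inserts "</s></s>" between them instead of using token_type_ids
encoding = tokenizer("This is the first segment.", "And this is the second one.")
print(tokenizer.decode(encoding["input_ids"]))
print("token_type_ids" in encoding)  # False: the model does not use segment ids
```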
BridgeTower Overview The BridgeTower model was proposed in BridgeTower: Building Bridges Between Encoders in Vision-Language Representative Learning by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. The goal of this model is to build a bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder thus achieving remarkable performance on various downstream tasks with almost negligible additional performance and computational costs. This paper has been accepted to the AAAI'23 conference. The abstract from the paper is the following: Vision-Language (VL) models with the TWO-TOWER architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BRIDGETOWER, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the crossmodal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BRIDGETOWER achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BRIDGETOWER achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BRIDGETOWER achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets. BridgeTower architecture. Taken from the original paper. Usage BridgeTower consists of a visual encoder, a textual encoder and cross-modal encoder with multiple lightweight bridge layers. The goal of this approach was to build a bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder. In principle, one can apply any visual, textual or cross-modal encoder in the proposed architecture. The [BridgeTowerProcessor] wraps [RobertaTokenizer] and [BridgeTowerImageProcessor] into a single instance to both encode the text and prepare the images respectively. The following example shows how to run contrastive learning using [BridgeTowerProcessor] and [BridgeTowerForContrastiveLearning]. 
thon from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning import requests from PIL import Image url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"] processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc") model = BridgeTowerForContrastiveLearning.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc") forward pass scores = dict() for text in texts: # prepare inputs encoding = processor(image, text, return_tensors="pt") outputs = model(**encoding) scores[text] = outputs The following example shows how to run image-text retrieval using [BridgeTowerProcessor] and [BridgeTowerForImageAndTextRetrieval]. thon from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval import requests from PIL import Image url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"] processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") forward pass scores = dict() for text in texts: # prepare inputs encoding = processor(image, text, return_tensors="pt") outputs = model(**encoding) scores[text] = outputs.logits[0, 1].item() The following example shows how to run masked language modeling using [BridgeTowerProcessor] and [BridgeTowerForMaskedLM]. thon from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000360943.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") text = "a looking out of the window" processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") prepare inputs encoding = processor(image, text, return_tensors="pt") forward pass outputs = model(**encoding) results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist()) print(results) .a cat looking out of the window. This model was contributed by Anahita Bhiwandiwalla, Tiep Le and Shaoyen Tseng. The original code can be found here. Tips: This implementation of BridgeTower uses [RobertaTokenizer] to generate text embeddings and OpenAI's CLIP/ViT model to compute visual embeddings. Checkpoints for pre-trained bridgeTower-base and bridgetower masked language modeling and image text matching are released. Please refer to Table 5 for BridgeTower's performance on Image Retrieval and other down stream tasks. The PyTorch version of this model is only available in torch 1.10 and higher. BridgeTowerConfig [[autodoc]] BridgeTowerConfig BridgeTowerTextConfig [[autodoc]] BridgeTowerTextConfig BridgeTowerVisionConfig [[autodoc]] BridgeTowerVisionConfig BridgeTowerImageProcessor [[autodoc]] BridgeTowerImageProcessor - preprocess BridgeTowerProcessor [[autodoc]] BridgeTowerProcessor - call BridgeTowerModel [[autodoc]] BridgeTowerModel - forward BridgeTowerForContrastiveLearning [[autodoc]] BridgeTowerForContrastiveLearning - forward BridgeTowerForMaskedLM [[autodoc]] BridgeTowerForMaskedLM - forward BridgeTowerForImageAndTextRetrieval [[autodoc]] BridgeTowerForImageAndTextRetrieval - forward
OneFormer Overview The OneFormer model was proposed in OneFormer: One Transformer to Rule Universal Image Segmentation by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. OneFormer is a universal image segmentation framework that can be trained on a single panoptic dataset to perform semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference. The abstract from the paper is the following: Universal Image Segmentation is not a new concept. Past attempts to unify image segmentation in the last decades include scene parsing, panoptic segmentation, and, more recently, new panoptic architectures. However, such panoptic architectures do not truly unify image segmentation because they need to be trained individually on the semantic, instance, or panoptic segmentation to achieve the best performance. Ideally, a truly universal framework should be trained only once and achieve SOTA performance across all three image segmentation tasks. To that end, we propose OneFormer, a universal image segmentation framework that unifies segmentation with a multi-task train-once design. We first propose a task-conditioned joint training strategy that enables training on ground truths of each domain (semantic, instance, and panoptic segmentation) within a single multi-task training process. Secondly, we introduce a task token to condition our model on the task at hand, making our model task-dynamic to support multi-task training and inference. Thirdly, we propose using a query-text contrastive loss during training to establish better inter-task and inter-class distinctions. Notably, our single OneFormer model outperforms specialized Mask2Former models across all three segmentation tasks on ADE20k, CityScapes, and COCO, despite the latter being trained on each of the three tasks individually with three times the resources. With new ConvNeXt and DiNAT backbones, we observe even more performance improvement. We believe OneFormer is a significant step towards making image segmentation more universal and accessible. Tips: - OneFormer requires two inputs during inference: image and task token. - During training, OneFormer only uses panoptic annotations. - If you want to train the model in a distributed environment across multiple nodes, then one should update the get_num_masks function inside in the OneFormerLoss class of modeling_oneformer.py. When training on multiple nodes, this should be set to the average number of target masks across all nodes, as can be seen in the original implementation here. - One can use [OneFormerProcessor] to prepare input images and task inputs for the model and optional targets for the model. [OneformerProcessor] wraps [OneFormerImageProcessor] and [CLIPTokenizer] into a single instance to both prepare the images and encode the task inputs. - To get the final segmentation, depending on the task, you can call [~OneFormerProcessor.post_process_semantic_segmentation] or [~OneFormerImageProcessor.post_process_instance_segmentation] or [~OneFormerImageProcessor.post_process_panoptic_segmentation]. All three tasks can be solved using [OneFormerForUniversalSegmentation] output, panoptic segmentation accepts an optional label_ids_to_fuse argument to fuse instances of the target object/s (e.g. sky) together. The figure below illustrates the architecture of OneFormer. Taken from the original paper. 
This model was contributed by Jitesh Jain. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OneFormer. Demo notebooks regarding inference + fine-tuning on custom data can be found here. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. OneFormer specific outputs [[autodoc]] models.oneformer.modeling_oneformer.OneFormerModelOutput [[autodoc]] models.oneformer.modeling_oneformer.OneFormerForUniversalSegmentationOutput OneFormerConfig [[autodoc]] OneFormerConfig OneFormerImageProcessor [[autodoc]] OneFormerImageProcessor - preprocess - encode_inputs - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation OneFormerProcessor [[autodoc]] OneFormerProcessor OneFormerModel [[autodoc]] OneFormerModel - forward OneFormerForUniversalSegmentation [[autodoc]] OneFormerForUniversalSegmentation - forward
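A minimal semantic-segmentation sketch, assuming the shi-labs/oneformer_ade20k_swin_tiny checkpoint and a COCO sample image (both are illustrative choices); note the task input that conditions the model, as described above:

```python
import requests
import torch
from PIL import Image
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")

# the task token ("semantic", "instance" or "panoptic") is passed alongside the image
inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# one label id per pixel, resized back to the original image size
semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
print(semantic_map.shape)
```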
MaskFormer This is a recently introduced model so the API hasn't been tested extensively. There may be some bugs or slight breaking changes to fix it in the future. If you see something strange, file a Github Issue. Overview The MaskFormer model was proposed in Per-Pixel Classification is Not All You Need for Semantic Segmentation by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. MaskFormer addresses semantic segmentation with a mask classification paradigm instead of performing classic pixel-level classification. The abstract from the paper is the following: Modern approaches typically formulate semantic segmentation as a per-pixel classification task, while instance-level segmentation is handled with an alternative mask classification. Our key insight: mask classification is sufficiently general to solve both semantic- and instance-level segmentation tasks in a unified manner using the exact same model, loss, and training procedure. Following this observation, we propose MaskFormer, a simple mask classification model which predicts a set of binary masks, each associated with a single global class label prediction. Overall, the proposed mask classification-based method simplifies the landscape of effective approaches to semantic and panoptic segmentation tasks and shows excellent empirical results. In particular, we observe that MaskFormer outperforms per-pixel classification baselines when the number of classes is large. Our mask classification-based method outperforms both current state-of-the-art semantic (55.6 mIoU on ADE20K) and panoptic segmentation (52.7 PQ on COCO) models. Tips: - MaskFormer's Transformer decoder is identical to the decoder of DETR. During training, the authors of DETR did find it helpful to use auxiliary losses in the decoder, especially to help the model output the correct number of objects of each class. If you set the parameter use_auxilary_loss of [MaskFormerConfig] to True, then prediction feedforward neural networks and Hungarian losses are added after each decoder layer (with the FFNs sharing parameters). - If you want to train the model in a distributed environment across multiple nodes, then one should update the get_num_masks function inside in the MaskFormerLoss class of modeling_maskformer.py. When training on multiple nodes, this should be set to the average number of target masks across all nodes, as can be seen in the original implementation here. - One can use [MaskFormerImageProcessor] to prepare images for the model and optional targets for the model. - To get the final segmentation, depending on the task, you can call [~MaskFormerImageProcessor.post_process_semantic_segmentation] or [~MaskFormerImageProcessor.post_process_panoptic_segmentation]. Both tasks can be solved using [MaskFormerForInstanceSegmentation] output, panoptic segmentation accepts an optional label_ids_to_fuse argument to fuse instances of the target object/s (e.g. sky) together. The figure below illustrates the architecture of MaskFormer. Taken from the original paper. This model was contributed by francesco. The original code can be found here. Resources All notebooks that illustrate inference as well as fine-tuning on custom data with MaskFormer can be found here. 
MaskFormer specific outputs [[autodoc]] models.maskformer.modeling_maskformer.MaskFormerModelOutput [[autodoc]] models.maskformer.modeling_maskformer.MaskFormerForInstanceSegmentationOutput MaskFormerConfig [[autodoc]] MaskFormerConfig MaskFormerImageProcessor [[autodoc]] MaskFormerImageProcessor - preprocess - encode_inputs - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation MaskFormerFeatureExtractor [[autodoc]] MaskFormerFeatureExtractor - call - encode_inputs - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation MaskFormerModel [[autodoc]] MaskFormerModel - forward MaskFormerForInstanceSegmentation [[autodoc]] MaskFormerForInstanceSegmentation - forward
Encoder Decoder Models Overview The [EncoderDecoderModel] can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder. The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. After such an [EncoderDecoderModel] has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information). An application of this architecture could be to leverage two pretrained [BertModel] instances as the encoder and decoder for a summarization model, as was shown in Text Summarization with Pretrained Encoders by Yang Liu and Mirella Lapata. Randomly initializing EncoderDecoderModel from model configurations. [EncoderDecoderModel] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [BertModel] configuration for the encoder and the default [BertForCausalLM] configuration for the decoder. python from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel config_encoder = BertConfig() config_decoder = BertConfig() config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) model = EncoderDecoderModel(config=config) Initialising EncoderDecoderModel from a pretrained encoder and a pretrained decoder. [EncoderDecoderModel] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained auto-encoding model, e.g. BERT, can serve as the encoder, while the decoder can be a pretrained auto-encoding model (e.g. BERT), a pretrained causal language model (e.g. GPT2), or the pretrained decoder part of a sequence-to-sequence model (e.g. the decoder of BART). Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized. Initializing [EncoderDecoderModel] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in the Warm-starting-encoder-decoder blog post. To do so, the EncoderDecoderModel class provides an [EncoderDecoderModel.from_encoder_decoder_pretrained] method. python from transformers import EncoderDecoderModel, BertTokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased") Loading an existing EncoderDecoderModel checkpoint and performing inference. To load fine-tuned checkpoints of the EncoderDecoderModel class, [EncoderDecoderModel] provides the from_pretrained() method just like any other model architecture in Transformers. To perform inference, one uses the [generate] method, which allows you to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling.
thon from transformers import AutoTokenizer, EncoderDecoderModel load a fine-tuned seq2seq model and corresponding tokenizer model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail") tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail") let's perform inference on a long piece of text ARTICLE_TO_SUMMARIZE = ( "PG&E stated it scheduled the blackouts in response to forecasts for high winds " "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were " "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow." ) input_ids = tokenizer(ARTICLE_TO_SUMMARIZE, return_tensors="pt").input_ids autoregressively generate summary (uses greedy decoding by default) generated_ids = model.generate(input_ids) generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(generated_text) nearly 800 thousand customers were affected by the shutoffs. the aim is to reduce the risk of wildfires. nearly 800, 000 customers were expected to be affected by high winds amid dry conditions. pg & e said it scheduled the blackouts to last through at least midday tomorrow. Loading a PyTorch checkpoint into TFEncoderDecoderModel. [TFEncoderDecoderModel.from_pretrained] currently doesn't support initializing the model from a pytorch checkpoint. Passing from_pt=True to this method will throw an exception. If there are only pytorch checkpoints for a particular encoder-decoder model, a workaround is: thon a workaround to load from pytorch checkpoint from transformers import EncoderDecoderModel, TFEncoderDecoderModel _model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16") _model.encoder.save_pretrained("./encoder") _model.decoder.save_pretrained("./decoder") model = TFEncoderDecoderModel.from_encoder_decoder_pretrained( "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True ) This is only for copying some specific attributes of this particular model. model.config = _model.config Training Once the model is created, it can be fine-tuned similar to BART, T5 or any other encoder-decoder model. As you can see, only 2 inputs are required for the model in order to compute a loss: input_ids (which are the input_ids of the encoded input sequence) and labels (which are the input_ids of the encoded target sequence). thon from transformers import BertTokenizer, EncoderDecoderModel tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased") model.config.decoder_start_token_id = tokenizer.cls_token_id model.config.pad_token_id = tokenizer.pad_token_id input_ids = tokenizer( "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side.During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. 
Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft).Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.", return_tensors="pt", ).input_ids labels = tokenizer( "the eiffel tower surpassed the washington monument to become the tallest structure in the world. it was the first structure to reach a height of 300 metres in paris in 1930. it is now taller than the chrysler building by 5. 2 metres ( 17 ft ) and is the second tallest free - standing structure in paris.", return_tensors="pt", ).input_ids the forward function automatically creates the correct decoder_input_ids loss = model(input_ids=input_ids, labels=labels).loss Detailed colab for training. This model was contributed by thomwolf. This model's TensorFlow and Flax versions were contributed by ydshieh. EncoderDecoderConfig [[autodoc]] EncoderDecoderConfig EncoderDecoderModel [[autodoc]] EncoderDecoderModel - forward - from_encoder_decoder_pretrained TFEncoderDecoderModel [[autodoc]] TFEncoderDecoderModel - call - from_encoder_decoder_pretrained FlaxEncoderDecoderModel [[autodoc]] FlaxEncoderDecoderModel - call - from_encoder_decoder_pretrained
mLUKE Overview The mLUKE model was proposed in mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. It's a multilingual extension of the LUKE model trained on the basis of XLM-RoBERTa. It is based on XLM-RoBERTa and adds entity embeddings, which helps improve performance on various downstream tasks involving reasoning about entities such as named entity recognition, extractive question answering, relation classification, cloze-style knowledge completion. The abstract from the paper is the following: Recent studies have shown that multilingual pretrained language models can be effectively improved with cross-lingual alignment information from Wikipedia entities. However, existing methods only exploit entity information in pretraining and do not explicitly use entities in downstream tasks. In this study, we explore the effectiveness of leveraging entity representations for downstream cross-lingual tasks. We train a multilingual language model with 24 languages with entity representations and show the model consistently outperforms word-based pretrained models in various cross-lingual transfer tasks. We also analyze the model and the key insight is that incorporating entity representations into the input allows us to extract more language-agnostic features. We also evaluate the model with a multilingual cloze prompt task with the mLAMA dataset. We show that entity-based prompt elicits correct factual knowledge more likely than using only word representations. One can directly plug in the weights of mLUKE into a LUKE model, like so: thon from transformers import LukeModel model = LukeModel.from_pretrained("studio-ousia/mluke-base") Note that mLUKE has its own tokenizer, [MLukeTokenizer]. You can initialize it as follows: thon from transformers import MLukeTokenizer tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base") As mLUKE's architecture is equivalent to that of LUKE, one can refer to LUKE's documentation page for all tips, code examples and notebooks. This model was contributed by ryo0634. The original code can be found here. MLukeTokenizer [[autodoc]] MLukeTokenizer - call - save_vocabulary
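Since the main addition over XLM-RoBERTa is the entity input, a small illustrative example of encoding entity spans with [MLukeTokenizer] and reading out the entity representations might look as follows; the sentence and character spans are made up for illustration.

```python
import torch
from transformers import MLukeTokenizer, LukeModel

tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
model = LukeModel.from_pretrained("studio-ousia/mluke-base")

text = "Beyoncé lives in Los Angeles."
# character-level spans of the entity mentions in `text` (example values)
entity_spans = [(0, 7), (17, 28)]

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)         # contextualized word token representations
print(outputs.entity_last_hidden_state.shape)  # one representation per entity span
```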
CamemBERT Overview The CamemBERT model was proposed in CamemBERT: a Tasty French Language Model by Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot. It is based on Facebook's RoBERTa model released in 2019 and was trained on 138GB of French text. The abstract from the paper is the following: Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages. This makes practical use of such models --in all languages except English-- very limited. Aiming to address this issue for French, we release CamemBERT, a French version of the Bi-directional Encoders for Transformers (BERT). We measure the performance of CamemBERT compared to multilingual models in multiple downstream tasks, namely part-of-speech tagging, dependency parsing, named-entity recognition, and natural language inference. CamemBERT improves the state of the art for most of the tasks considered. We release the pretrained model for CamemBERT hoping to foster research and downstream applications for French NLP. Tips: This implementation is the same as RoBERTa. Refer to the documentation of RoBERTa for usage examples as well as the information relative to the inputs and outputs. This model was contributed by camembert. The original code can be found here. Documentation resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide CamembertConfig [[autodoc]] CamembertConfig CamembertTokenizer [[autodoc]] CamembertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary CamembertTokenizerFast [[autodoc]] CamembertTokenizerFast CamembertModel [[autodoc]] CamembertModel CamembertForCausalLM [[autodoc]] CamembertForCausalLM CamembertForMaskedLM [[autodoc]] CamembertForMaskedLM CamembertForSequenceClassification [[autodoc]] CamembertForSequenceClassification CamembertForMultipleChoice [[autodoc]] CamembertForMultipleChoice CamembertForTokenClassification [[autodoc]] CamembertForTokenClassification CamembertForQuestionAnswering [[autodoc]] CamembertForQuestionAnswering TFCamembertModel [[autodoc]] TFCamembertModel TFCamembertForCausalLM [[autodoc]] TFCamembertForCausalLM TFCamembertForMaskedLM [[autodoc]] TFCamembertForMaskedLM TFCamembertForSequenceClassification [[autodoc]] TFCamembertForSequenceClassification TFCamembertForMultipleChoice [[autodoc]] TFCamembertForMultipleChoice TFCamembertForTokenClassification [[autodoc]] TFCamembertForTokenClassification TFCamembertForQuestionAnswering [[autodoc]] TFCamembertForQuestionAnswering
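As a quick usage sketch (the camembert-base checkpoint and the example sentence are just illustrative), filling a masked token can be done with the fill-mask pipeline:

```python
from transformers import pipeline

# "camembert-base" is used here as an example checkpoint with a masked-LM head
camembert_fill_mask = pipeline("fill-mask", model="camembert-base")

# CamemBERT uses "<mask>" as its mask token
results = camembert_fill_mask("Le camembert est <mask> :)")
for result in results:
    print(result["token_str"], round(result["score"], 3))
```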
BARThez Overview The BARThez model was proposed in BARThez: a Skilled Pretrained French Sequence-to-Sequence Model by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis on 23 Oct, 2020. The abstract of the paper: Inductive transfer learning, enabled by self-supervised learning, have taken the entire Natural Language Processing (NLP) field by storm, with models such as BERT and BART setting new state of the art on countless natural language understanding tasks. While there are some notable exceptions, most of the available models and research have been conducted for the English language. In this work, we introduce BARThez, the first BART model for the French language (to the best of our knowledge). BARThez was pretrained on a very large monolingual French corpus from past research that we adapted to suit BART's perturbation schemes. Unlike already existing BERT-based French language models such as CamemBERT and FlauBERT, BARThez is particularly well-suited for generative tasks, since not only its encoder but also its decoder is pretrained. In addition to discriminative tasks from the FLUE benchmark, we evaluate BARThez on a novel summarization dataset, OrangeSum, that we release with this paper. We also continue the pretraining of an already pretrained multilingual BART on BARThez's corpus, and we show that the resulting model, which we call mBARTHez, provides a significant boost over vanilla BARThez, and is on par with or outperforms CamemBERT and FlauBERT. This model was contributed by moussakam. The Authors' code can be found here. Examples BARThez can be fine-tuned on sequence-to-sequence tasks in a similar way as BART, check: examples/pytorch/summarization/. BarthezTokenizer [[autodoc]] BarthezTokenizer BarthezTokenizerFast [[autodoc]] BarthezTokenizerFast
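For a rough idea of inference on a generative task, the sketch below loads a summarization fine-tune of BARThez; the moussakam/barthez-orangesum-abstract checkpoint name and the article text are assumptions, so substitute your own fine-tuned model and data.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# assumed example checkpoint fine-tuned on OrangeSum; replace with your own model if needed
checkpoint = "moussakam/barthez-orangesum-abstract"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

article = "Citant des conditions météorologiques difficiles, la compagnie a annoncé ..."  # any French article
inputs = tokenizer(article, return_tensors="pt", truncation=True)

summary_ids = model.generate(**inputs, num_beams=4, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```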
CLIPSeg Overview The CLIPSeg model was proposed in Image Segmentation Using Text and Image Prompts by Timo Lüddecke and Alexander Ecker. CLIPSeg adds a minimal decoder on top of a frozen CLIP model for zero- and one-shot image segmentation. The abstract from the paper is the following: Image segmentation is usually addressed by training a model for a fixed set of object classes. Incorporating additional classes or more complex queries later is expensive as it requires re-training the model on a dataset that encompasses these expressions. Here we propose a system that can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text or an image. This approach enables us to create a unified model (trained once) for three common segmentation tasks, which come with distinct challenges: referring expression segmentation, zero-shot segmentation and one-shot segmentation. We build upon the CLIP model as a backbone which we extend with a transformer-based decoder that enables dense prediction. After training on an extended version of the PhraseCut dataset, our system generates a binary segmentation map for an image based on a free-text prompt or on an additional image expressing the query. We analyze different variants of the latter image-based prompts in detail. This novel hybrid input allows for dynamic adaptation not only to the three segmentation tasks mentioned above, but to any binary segmentation task where a text or image query can be formulated. Finally, we find our system to adapt well to generalized queries involving affordances or properties Tips: [CLIPSegForImageSegmentation] adds a decoder on top of [CLIPSegModel]. The latter is identical to [CLIPModel]. [CLIPSegForImageSegmentation] can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text (provided to the model as input_ids) or an image (provided to the model as conditional_pixel_values). One can also provide custom conditional embeddings (provided to the model as conditional_embeddings). CLIPSeg overview. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIPSeg. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A notebook that illustrates zero-shot image segmentation with CLIPSeg. CLIPSegConfig [[autodoc]] CLIPSegConfig - from_text_vision_configs CLIPSegTextConfig [[autodoc]] CLIPSegTextConfig CLIPSegVisionConfig [[autodoc]] CLIPSegVisionConfig CLIPSegProcessor [[autodoc]] CLIPSegProcessor CLIPSegModel [[autodoc]] CLIPSegModel - forward - get_text_features - get_image_features CLIPSegTextModel [[autodoc]] CLIPSegTextModel - forward CLIPSegVisionModel [[autodoc]] CLIPSegVisionModel - forward CLIPSegForImageSegmentation [[autodoc]] CLIPSegForImageSegmentation - forward
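A minimal sketch of prompt-based segmentation, assuming the CIDAS/clipseg-rd64-refined checkpoint and placeholder prompts and image path:

```python
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

checkpoint = "CIDAS/clipseg-rd64-refined"  # assumed released checkpoint
processor = CLIPSegProcessor.from_pretrained(checkpoint)
model = CLIPSegForImageSegmentation.from_pretrained(checkpoint)

image = Image.open("kitchen.jpg")  # placeholder path
prompts = ["a cup", "a knife", "the floor"]  # free-text queries, one per desired mask

# one (text, image) pair per prompt: repeat the same image for every text query
inputs = processor(text=prompts, images=[image] * len(prompts), padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.logits has one low-resolution mask per prompt; sigmoid gives per-pixel scores
masks = outputs.logits.sigmoid()
print(masks.shape)  # (num_prompts, height, width)
```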
T5 Overview The T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. The abstract from the paper is the following: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pretraining objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. Tips: T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format. T5 works well on a variety of tasks out-of-the-box by prepending a different prefix to the input corresponding to each task, e.g., for translation: translate English to German: , for summarization: summarize: . The pretraining includes both supervised and self-supervised training. Supervised training is conducted on downstream tasks provided by the GLUE and SuperGLUE benchmarks (converting them into text-to-text tasks as explained above). Self-supervised training uses corrupted tokens, by randomly removing 15% of the tokens and replacing them with individual sentinel tokens (if several consecutive tokens are marked for removal, the whole group is replaced with a single sentinel token). The input of the encoder is the corrupted sentence, the input of the decoder is the original sentence and the target is then the dropped out tokens delimited by their sentinel tokens. T5 uses relative scalar embeddings. Encoder input padding can be done on the left and on the right. See the training, inference and scripts sections below for all details regarding usage. T5 comes in different sizes: t5-small, t5-base, t5-large, t5-3b and t5-11b. Based on the original T5 model, Google has released some follow-up works: T5v1.1: T5v1.1 is an improved version of T5 with some architectural tweaks, and is pre-trained on C4 only without mixing in the supervised tasks. Refer to the documentation of T5v1.1 which can be found here. mT5: mT5 is a multilingual T5 model. It is pre-trained on the mC4 corpus, which includes 101 languages. Refer to the documentation of mT5 which can be found here. byT5: byT5 is a T5 model pre-trained on byte sequences rather than SentencePiece subword token sequences. Refer to the documentation of byT5 which can be found here. UL2: UL2 is a T5-like model pretrained on various denoising objectives. Flan-T5: Flan is a pretraining method based on prompting. The Flan-T5 models are T5 models trained on the Flan collection of datasets, which includes: taskmaster2, djaym7/wiki_dialog, deepmind/code_contests, lambada, gsm8k, aqua_rat, esnli, quasc and qed.
Flan-UL2: the UL2 model finetuned using the "Flan" prompt tuning and dataset collection. UMT5: UMT5 is a multilingual T5 model trained on an improved and refreshed mC4 multilingual corpus, 29 trillion characters across 107 languages, using a new sampling method, UniMax. Refer to the documentation of mT5 which can be found here. All checkpoints can be found on the hub. This model was contributed by thomwolf. The original code can be found here. Training T5 is an encoder-decoder model and converts all NLP problems into a text-to-text format. It is trained using teacher forcing. This means that for training, we always need an input sequence and a corresponding target sequence. The input sequence is fed to the model using input_ids. The target sequence is shifted to the right, i.e., prepended by a start-sequence token and fed to the decoder using the decoder_input_ids. In teacher-forcing style, the target sequence is then appended with the EOS token and corresponds to the labels. The PAD token is hereby used as the start-sequence token. T5 can be trained / fine-tuned both in a supervised and unsupervised fashion. One can use [T5ForConditionalGeneration] (or the TensorFlow/Flax variant), which includes the language modeling head on top of the decoder. Unsupervised denoising training In this setup, spans of the input sequence are masked by so-called sentinel tokens (a.k.a. unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. Each sentinel token represents a unique mask token for this sentence and should start with <extra_id_0>, <extra_id_1>, up to <extra_id_99>. As a default, 100 sentinel tokens are available in [T5Tokenizer]. For instance, the sentence "The cute dog walks in the park" with the masks put on "cute dog" and "the" should be processed as follows: python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("t5-small") model = T5ForConditionalGeneration.from_pretrained("t5-small") input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids the forward function automatically creates the correct decoder_input_ids loss = model(input_ids=input_ids, labels=labels).loss loss.item() 3.7837 If you're interested in pre-training T5 on a new corpus, check out the run_t5_mlm_flax.py script in the Examples directory. Supervised training In this setup, the input sequence and output sequence are a standard sequence-to-sequence input-output mapping. Suppose that we want to fine-tune the model for translation for example, and we have a training example: the input sequence "The house is wonderful."
and output sequence "Das Haus ist wunderbar.", then they should be prepared for the model as follows: thon from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("t5-small") model = T5ForConditionalGeneration.from_pretrained("t5-small") input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids the forward function automatically creates the correct decoder_input_ids loss = model(input_ids=input_ids, labels=labels).loss loss.item() 0.2542 As you can see, only 2 inputs are required for the model in order to compute a loss: input_ids (which are the input_ids of the encoded input sequence) and labels (which are the input_ids of the encoded target sequence). The model will automatically create the decoder_input_ids based on the labels, by shifting them one position to the right and prepending the config.decoder_start_token_id, which for T5 is equal to 0 (i.e. the id of the pad token). Also note the task prefix: we prepend the input sequence with 'translate English to German: ' before encoding it. This will help in improving the performance, as this task prefix was used during T5's pre-training. However, the example above only shows a single training example. In practice, one trains deep learning models in batches. This entails that we must pad/truncate examples to the same length. For encoder-decoder models, one typically defines a max_source_length and max_target_length, which determine the maximum length of the input and output sequences respectively (otherwise they are truncated). These should be carefully set depending on the task. In addition, we must make sure that padding token id's of the labels are not taken into account by the loss function. In PyTorch and Tensorflow, this can be done by replacing them with -100, which is the ignore_index of the CrossEntropyLoss. In Flax, one can use the decoder_attention_mask to ignore padded tokens from the loss (see the Flax summarization script for details). We also pass attention_mask as additional input to the model, which makes sure that padding tokens of the inputs are ignored. The code example below illustrates all of this. 
python from transformers import T5Tokenizer, T5ForConditionalGeneration import torch tokenizer = T5Tokenizer.from_pretrained("t5-small") model = T5ForConditionalGeneration.from_pretrained("t5-small") the following 2 hyperparameters are task-specific max_source_length = 512 max_target_length = 128 Suppose we have the following 2 training examples: input_sequence_1 = "Welcome to NYC" output_sequence_1 = "Bienvenue à NYC" input_sequence_2 = "HuggingFace is a company" output_sequence_2 = "HuggingFace est une entreprise" encode the inputs task_prefix = "translate English to French: " input_sequences = [input_sequence_1, input_sequence_2] encoding = tokenizer( [task_prefix + sequence for sequence in input_sequences], padding="longest", max_length=max_source_length, truncation=True, return_tensors="pt", ) input_ids, attention_mask = encoding.input_ids, encoding.attention_mask encode the targets target_encoding = tokenizer( [output_sequence_1, output_sequence_2], padding="longest", max_length=max_target_length, truncation=True, return_tensors="pt", ) labels = target_encoding.input_ids replace padding token id's of the labels by -100 so it's ignored by the loss labels[labels == tokenizer.pad_token_id] = -100 forward pass loss = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels).loss loss.item() 0.188 Additional training tips: T5 models need a slightly higher learning rate than the default one set in the Trainer when using the AdamW optimizer. Typically, 1e-4 and 3e-4 work well for most problems (classification, summarization, translation, question answering, question generation). Note that T5 was pre-trained using the AdaFactor optimizer. According to this forum post, task prefixes matter when (1) doing multi-task training and (2) your task is similar or related to one of the supervised tasks used in T5's pre-training mixture (see Appendix D of the paper for the task prefixes used). If training on TPU, it is recommended to pad all examples of the dataset to the same length or make use of pad_to_multiple_of to have a small number of predefined bucket sizes to fit all examples in. Dynamically padding batches to the longest example is not recommended on TPU, as it triggers a recompilation for every batch shape encountered during training, which significantly slows down training. Inference At inference time, it is recommended to use [~generation.GenerationMixin.generate]. This method takes care of encoding the input and feeding the encoded hidden states via cross-attention layers to the decoder and auto-regressively generates the decoder output. Check out this blog post to know all the details about generating text with Transformers. There's also this blog post which explains how generation works in general in encoder-decoder models. python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("t5-small") model = T5ForConditionalGeneration.from_pretrained("t5-small") input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) Das Haus ist wunderbar. Note that T5 uses the pad_token_id as the decoder_start_token_id, so when doing generation without using [~generation.GenerationMixin.generate], make sure you start it with the pad_token_id. The example above only shows a single example.
You can also do batched inference, like so: python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("t5-small") model = T5ForConditionalGeneration.from_pretrained("t5-small") task_prefix = "translate English to German: " use different length sentences to test batching sentences = ["The house is wonderful.", "I like to work in NYC."] inputs = tokenizer([task_prefix + sentence for sentence in sentences], return_tensors="pt", padding=True) output_sequences = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], do_sample=False, # disable sampling to test if batching affects output ) print(tokenizer.batch_decode(output_sequences, skip_special_tokens=True)) ['Das Haus ist wunderbar.', 'Ich arbeite gerne in NYC.'] Because T5 has been trained with the span-mask denoising objective, it can be used to predict the sentinel (masked-out) tokens during inference. The predicted tokens will then be placed between the sentinel tokens. python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("t5-small") model = T5ForConditionalGeneration.from_pretrained("t5-small") input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids sequence_ids = model.generate(input_ids) sequences = tokenizer.batch_decode(sequence_ids) sequences ['<pad><extra_id_0> park offers<extra_id_1> the<extra_id_2> park.</s>'] Performance If you'd like faster training and inference performance, install apex and then the model will automatically use apex.normalization.FusedRMSNorm instead of T5LayerNorm. The former uses an optimized fused kernel which is several times faster than the latter. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with T5. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A notebook for how to finetune T5 for classification and multiple choice. A notebook for how to finetune T5 for sentiment span extraction. 🌎 A notebook for how to finetune T5 for named entity recognition. 🌎 A notebook for Finetuning CodeT5 for generating docstrings from Ruby code. A notebook to Finetune T5-base-dutch to perform Dutch abstractive summarization on a TPU. A notebook for how to finetune T5 for summarization in PyTorch and track experiments with WandB. 🌎 A blog post on Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker. [T5ForConditionalGeneration] is supported by this example script and notebook. [TFT5ForConditionalGeneration] is supported by this example script and notebook. [FlaxT5ForConditionalGeneration] is supported by this example script. Summarization chapter of the 🤗 Hugging Face course. Summarization task guide [FlaxT5ForConditionalGeneration] is supported by this example script for training T5 with a span-masked language model objective. The script also shows how to train a T5 tokenizer. [FlaxT5ForConditionalGeneration] is also supported by this notebook. [T5ForConditionalGeneration] is supported by this example script and notebook. [TFT5ForConditionalGeneration] is supported by this example script and notebook. Translation task guide A notebook on how to finetune T5 for question answering with TensorFlow 2. 🌎 A notebook on how to finetune T5 for question answering on a TPU.
🚀 Deploy - A blog post on how to deploy T5 11B for inference for less than $500. T5Config [[autodoc]] T5Config T5Tokenizer [[autodoc]] T5Tokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary T5TokenizerFast [[autodoc]] T5TokenizerFast T5Model [[autodoc]] T5Model - forward T5ForConditionalGeneration [[autodoc]] T5ForConditionalGeneration - forward T5EncoderModel [[autodoc]] T5EncoderModel - forward T5ForQuestionAnswering [[autodoc]] T5ForQuestionAnswering - forward TFT5Model [[autodoc]] TFT5Model - call TFT5ForConditionalGeneration [[autodoc]] TFT5ForConditionalGeneration - call TFT5EncoderModel [[autodoc]] TFT5EncoderModel - call FlaxT5Model [[autodoc]] FlaxT5Model - call - encode - decode FlaxT5ForConditionalGeneration [[autodoc]] FlaxT5ForConditionalGeneration - call - encode - decode FlaxT5EncoderModel [[autodoc]] FlaxT5EncoderModel - call
DETA Overview The DETA model was proposed in NMS Strikes Back by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl. DETA (short for Detection Transformers with Assignment) improves Deformable DETR by replacing the one-to-one bipartite Hungarian matching loss with one-to-many label assignments used in traditional detectors with non-maximum suppression (NMS). This leads to significant gains of up to 2.5 mAP. The abstract from the paper is the following: Detection Transformer (DETR) directly transforms queries to unique objects by using one-to-one bipartite matching during training and enables end-to-end object detection. Recently, these models have surpassed traditional detectors on COCO with undeniable elegance. However, they differ from traditional detectors in multiple designs, including model architecture and training schedules, and thus the effectiveness of one-to-one matching is not fully understood. In this work, we conduct a strict comparison between the one-to-one Hungarian matching in DETRs and the one-to-many label assignments in traditional detectors with non-maximum supervision (NMS). Surprisingly, we observe one-to-many assignments with NMS consistently outperform standard one-to-one matching under the same setting, with a significant gain of up to 2.5 mAP. Our detector that trains Deformable-DETR with traditional IoU-based label assignment achieved 50.2 COCO mAP within 12 epochs (1x schedule) with ResNet50 backbone, outperforming all existing traditional or transformer-based detectors in this setting. On multiple datasets, schedules, and architectures, we consistently show bipartite matching is unnecessary for performant detection transformers. Furthermore, we attribute the success of detection transformers to their expressive transformer architecture. Tips: One can use [DetaImageProcessor] to prepare images and optional targets for the model. DETA overview. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETA. Demo notebooks for DETA can be found here. See also: Object detection task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. DetaConfig [[autodoc]] DetaConfig DetaImageProcessor [[autodoc]] DetaImageProcessor - preprocess - post_process_object_detection DetaModel [[autodoc]] DetaModel - forward DetaForObjectDetection [[autodoc]] DetaForObjectDetection - forward
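For orientation, a bare-bones detection loop with the image processor and [DetaForObjectDetection] could look like the sketch below; the jozhang97/deta-swin-large checkpoint and the image path are assumptions to adapt to your setup.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetaForObjectDetection

checkpoint = "jozhang97/deta-swin-large"  # assumed example checkpoint
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = DetaForObjectDetection.from_pretrained(checkpoint)

image = Image.open("street.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# keep detections above a confidence threshold, rescaled to the original image size
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 1) for c in box.tolist()])
```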
Conditional DETR Overview The Conditional DETR model was proposed in Conditional DETR for Fast Training Convergence by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. Conditional DETR presents a conditional cross-attention mechanism for fast DETR training. Conditional DETR converges 6.7× to 10× faster than DETR. The abstract from the paper is the following: The recently-developed DETR approach applies the transformer encoder and decoder architecture to object detection and achieves promising performance. In this paper, we handle the critical issue, slow training convergence, and present a conditional cross-attention mechanism for fast DETR training. Our approach is motivated by that the cross-attention in DETR relies highly on the content embeddings for localizing the four extremities and predicting the box, which increases the need for high-quality content embeddings and thus the training difficulty. Our approach, named conditional DETR, learns a conditional spatial query from the decoder embedding for decoder multi-head cross-attention. The benefit is that through the conditional spatial query, each cross-attention head is able to attend to a band containing a distinct region, e.g., one object extremity or a region inside the object box. This narrows down the spatial range for localizing the distinct regions for object classification and box regression, thus relaxing the dependence on the content embeddings and easing the training. Empirical results show that conditional DETR converges 6.7× faster for the backbones R50 and R101 and 10× faster for stronger backbones DC5-R50 and DC5-R101. Code is available at https://github.com/Atten4Vis/ConditionalDETR. Conditional DETR shows much faster convergence compared to the original DETR. Taken from the original paper. This model was contributed by DepuMeng. The original code can be found here. Documentation resources Object detection task guide ConditionalDetrConfig [[autodoc]] ConditionalDetrConfig ConditionalDetrImageProcessor [[autodoc]] ConditionalDetrImageProcessor - preprocess - post_process_object_detection - post_process_instance_segmentation - post_process_semantic_segmentation - post_process_panoptic_segmentation ConditionalDetrFeatureExtractor [[autodoc]] ConditionalDetrFeatureExtractor - call - post_process_object_detection - post_process_instance_segmentation - post_process_semantic_segmentation - post_process_panoptic_segmentation ConditionalDetrModel [[autodoc]] ConditionalDetrModel - forward ConditionalDetrForObjectDetection [[autodoc]] ConditionalDetrForObjectDetection - forward ConditionalDetrForSegmentation [[autodoc]] ConditionalDetrForSegmentation - forward
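Inference follows the same pattern as other DETR-style detectors in the library; a short sketch, assuming the microsoft/conditional-detr-resnet-50 checkpoint and a placeholder image:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, ConditionalDetrForObjectDetection

processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")
model = ConditionalDetrForObjectDetection.from_pretrained("microsoft/conditional-detr-resnet-50")

image = Image.open("street.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# convert raw logits/boxes into thresholded detections in absolute pixel coordinates
target_sizes = torch.tensor([image.size[::-1]])
detections = processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
print(len(detections["boxes"]), "objects detected")
```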
CLAP Overview The CLAP model was proposed in Large Scale Contrastive Language-Audio pretraining with feature fusion and keyword-to-caption augmentation by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov. CLAP (Contrastive Language-Audio Pretraining) is a neural network trained on a variety of (audio, text) pairs. It can be instructed to predict the most relevant text snippet, given an audio sample, without directly optimizing for the task. The CLAP model uses a Swin Transformer to get audio features from a log-Mel spectrogram input, and a RoBERTa model to get text features. Both the text and audio features are then projected into a latent space with identical dimension. The dot product between the projected audio and text features is then used as a similarity score. The abstract from the paper is the following: Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zeroshot setting and is able to obtain performance comparable to models' results in the non-zero-shot setting. This model was contributed by Younes Belkada and Arthur Zucker. The original code can be found here. ClapConfig [[autodoc]] ClapConfig - from_text_audio_configs ClapTextConfig [[autodoc]] ClapTextConfig ClapAudioConfig [[autodoc]] ClapAudioConfig ClapFeatureExtractor [[autodoc]] ClapFeatureExtractor ClapProcessor [[autodoc]] ClapProcessor ClapModel [[autodoc]] ClapModel - forward - get_text_features - get_audio_features ClapTextModel [[autodoc]] ClapTextModel - forward ClapTextModelWithProjection [[autodoc]] ClapTextModelWithProjection - forward ClapAudioModel [[autodoc]] ClapAudioModel - forward ClapAudioModelWithProjection [[autodoc]] ClapAudioModelWithProjection - forward
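The similarity score described above can be used directly for zero-shot audio classification. The sketch below assumes the laion/clap-htsat-unfused checkpoint and uses one second of random noise in place of a real 48 kHz recording:

```python
import numpy as np
import torch
from transformers import ClapModel, ClapProcessor

checkpoint = "laion/clap-htsat-unfused"  # assumed example checkpoint
model = ClapModel.from_pretrained(checkpoint)
processor = ClapProcessor.from_pretrained(checkpoint)

audio = np.random.randn(48_000).astype(np.float32)  # stand-in for a real mono 48 kHz waveform
candidate_texts = ["a dog barking", "a violin playing", "rain falling on a roof"]

inputs = processor(text=candidate_texts, audios=audio, sampling_rate=48_000, padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_audio holds the audio-text similarity scores; softmax turns them into pseudo-probabilities
probs = outputs.logits_per_audio.softmax(dim=-1)
print(dict(zip(candidate_texts, [round(p, 3) for p in probs[0].tolist()])))
```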
DiT Overview DiT was proposed in DiT: Self-supervised Pre-training for Document Image Transformer by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. DiT applies the self-supervised objective of BEiT (BERT pre-training of Image Transformers) to 42 million document images, allowing for state-of-the-art results on tasks including: document image classification: the RVL-CDIP dataset (a collection of 400,000 images belonging to one of 16 classes). document layout analysis: the PubLayNet dataset (a collection of more than 360,000 document images constructed by automatically parsing PubMed XML files). table detection: the ICDAR 2019 cTDaR dataset (a collection of 600 training images and 240 testing images). The abstract from the paper is the following: *Image Transformer has recently achieved significant progress for natural image understanding, either using supervised (ViT, DeiT, etc.) or self-supervised (BEiT, MAE, etc.) pre-training techniques. In this paper, we propose DiT, a self-supervised pre-trained Document Image Transformer model using large-scale unlabeled text images for Document AI tasks, which is essential since no supervised counterparts ever exist due to the lack of human labeled document images. We leverage DiT as the backbone network in a variety of vision-based Document AI tasks, including document image classification, document layout analysis, as well as table detection. Experiment results have illustrated that the self-supervised pre-trained DiT model achieves new state-of-the-art results on these downstream tasks, e.g. document image classification (91.11 → 92.69), document layout analysis (91.0 → 94.9) and table detection (94.23 → 96.55). * Summary of the approach. Taken from the original paper. One can directly use the weights of DiT with the AutoModel API: thon from transformers import AutoModel model = AutoModel.from_pretrained("microsoft/dit-base") This will load the model pre-trained on masked image modeling. Note that this won't include the language modeling head on top, used to predict visual tokens. To include the head, you can load the weights into a BeitForMaskedImageModeling model, like so: thon from transformers import BeitForMaskedImageModeling model = BeitForMaskedImageModeling.from_pretrained("microsoft/dit-base") You can also load a fine-tuned model from the hub, like so: thon from transformers import AutoModelForImageClassification model = AutoModelForImageClassification.from_pretrained("microsoft/dit-base-finetuned-rvlcdip") This particular checkpoint was fine-tuned on RVL-CDIP, an important benchmark for document image classification. A notebook that illustrates inference for document image classification can be found here. As DiT's architecture is equivalent to that of BEiT, one can refer to BEiT's documentation page for all tips, code examples and notebooks. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DiT. [BeitForImageClassification] is supported by this example script and notebook. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
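Putting the fine-tuned classification checkpoint above to work looks like the following sketch; the document image path is a placeholder.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")
model = AutoModelForImageClassification.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")

document = Image.open("scanned_document.png").convert("RGB")  # placeholder path
inputs = processor(images=document, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])  # one of the 16 RVL-CDIP document classes
```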
OpenAI GPT2 Overview OpenAI GPT-2 model was proposed in Language Models are Unsupervised Multitask Learners by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever from OpenAI. It's a causal (unidirectional) transformer pretrained using language modeling on a very large corpus of ~40 GB of text data. The abstract from the paper is the following: GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset[1] of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data. Tips: GPT-2 is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. GPT-2 was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows GPT-2 to generate syntactically coherent text as it can be observed in the run_generation.py example script. The model can take the past_key_values (for PyTorch) or past (for TF) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see past_key_values argument of the [GPT2Model.forward] method, or for TF the past argument of the [TFGPT2Model.call] method for more information on its usage. Enabling the scale_attn_by_inverse_layer_idx and reorder_and_upcast_attn flags will apply the training stability improvements from Mistral (for PyTorch only). Write With Transformer is a webapp created and hosted by Hugging Face showcasing the generative capabilities of several models. GPT-2 is one of them and is available in five different sizes: small, medium, large, xl and a distilled version of the small checkpoint: distilgpt-2. This model was contributed by thomwolf. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A blog on how to Finetune a non-English GPT-2 Model with Hugging Face. A blog on How to generate text: using different decoding methods for language generation with Transformers with GPT-2. A blog on Training CodeParrot 🦜 from Scratch, a large GPT-2 model. A blog on Faster Text Generation with TensorFlow and XLA with GPT-2. A blog on How to train a Language Model with Megatron-LM with a GPT-2 model. A notebook on how to finetune GPT2 to generate lyrics in the style of your favorite artist. 🌎 A notebook on how to finetune GPT2 to generate tweets in the style of your favorite Twitter user. 🌎 Causal language modeling chapter of the 🤗 Hugging Face Course. [GPT2LMHeadModel] is supported by this causal language modeling example script, text generation example script, and notebook. [TFGPT2LMHeadModel] is supported by this causal language modeling example script and notebook. 
[FlaxGPT2LMHeadModel] is supported by this causal language modeling example script and notebook. Text classification task guide Token classification task guide Causal language modeling task guide GPT2Config [[autodoc]] GPT2Config GPT2Tokenizer [[autodoc]] GPT2Tokenizer - save_vocabulary GPT2TokenizerFast [[autodoc]] GPT2TokenizerFast GPT2 specific outputs [[autodoc]] models.gpt2.modeling_gpt2.GPT2DoubleHeadsModelOutput [[autodoc]] models.gpt2.modeling_tf_gpt2.TFGPT2DoubleHeadsModelOutput GPT2Model [[autodoc]] GPT2Model - forward GPT2LMHeadModel [[autodoc]] GPT2LMHeadModel - forward GPT2DoubleHeadsModel [[autodoc]] GPT2DoubleHeadsModel - forward GPT2ForQuestionAnswering [[autodoc]] GPT2ForQuestionAnswering - forward GPT2ForSequenceClassification [[autodoc]] GPT2ForSequenceClassification - forward GPT2ForTokenClassification [[autodoc]] GPT2ForTokenClassification - forward TFGPT2Model [[autodoc]] TFGPT2Model - call TFGPT2LMHeadModel [[autodoc]] TFGPT2LMHeadModel - call TFGPT2DoubleHeadsModel [[autodoc]] TFGPT2DoubleHeadsModel - call TFGPT2ForSequenceClassification [[autodoc]] TFGPT2ForSequenceClassification - call TFSequenceClassifierOutputWithPast [[autodoc]] modeling_tf_outputs.TFSequenceClassifierOutputWithPast TFGPT2Tokenizer [[autodoc]] TFGPT2Tokenizer FlaxGPT2Model [[autodoc]] FlaxGPT2Model - call FlaxGPT2LMHeadModel [[autodoc]] FlaxGPT2LMHeadModel - call
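As a compact illustration of the generation tips above (generate() reuses past_key_values under the hood, so earlier tokens are not re-encoded at every step), here is a minimal sampling example with the small gpt2 checkpoint; the prompt is arbitrary.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("GPT-2 is a model that", return_tensors="pt")

# nucleus sampling; pad_token_id is set explicitly since GPT-2 has no padding token
output_ids = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```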
RetriBERT This model is in maintenance mode only, so we won't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: pip install -U transformers==4.30.0. Overview The RetriBERT model was proposed in the blog post Explain Anything Like I'm Five: A Model for Open Domain Long Form Question Answering. RetriBERT is a small model that uses either a single or pair of BERT encoders with lower-dimension projection for dense semantic indexing of text. This model was contributed by yjernite. Code to train and use the model can be found here. RetriBertConfig [[autodoc]] RetriBertConfig RetriBertTokenizer [[autodoc]] RetriBertTokenizer RetriBertTokenizerFast [[autodoc]] RetriBertTokenizerFast RetriBertModel [[autodoc]] RetriBertModel - forward
XLM-RoBERTa Overview The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. The abstract from the paper is the following: This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-Ris very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make XLM-R code, data, and models publicly available. Tips: XLM-RoBERTa is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does not require lang tensors to understand which language is used, and should be able to determine the correct language from the input ids. Uses RoBERTa tricks on the XLM approach, but does not use the translation language modeling objective. It only uses masked language modeling on sentences coming from one language. This implementation is the same as RoBERTa. Refer to the documentation of RoBERTa for usage examples as well as the information relative to the inputs and outputs. This model was contributed by stefan-it. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with XLM-RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A blog post on how to finetune XLM RoBERTa for multiclass classification with Habana Gaudi on AWS [XLMRobertaForSequenceClassification] is supported by this example script and notebook. [TFXLMRobertaForSequenceClassification] is supported by this example script and notebook. [FlaxXLMRobertaForSequenceClassification] is supported by this example script and notebook. Text classification chapter of the 🤗 Hugging Face Task Guides. Text classification task guide [XLMRobertaForTokenClassification] is supported by this example script and notebook. [TFXLMRobertaForTokenClassification] is supported by this example script and notebook. [FlaxXLMRobertaForTokenClassification] is supported by this example script. Token classification chapter of the 🤗 Hugging Face Course. 
Token classification task guide [XLMRobertaForCausalLM] is supported by this example script and notebook. Causal language modeling chapter of the 🤗 Hugging Face Task Guides. Causal language modeling task guide [XLMRobertaForMaskedLM] is supported by this example script and notebook. [TFXLMRobertaForMaskedLM] is supported by this example script and notebook. [FlaxXLMRobertaForMaskedLM] is supported by this example script and notebook. Masked language modeling chapter of the 🤗 Hugging Face Course. Masked language modeling [XLMRobertaForQuestionAnswering] is supported by this example script and notebook. [TFXLMRobertaForQuestionAnswering] is supported by this example script and notebook. [FlaxXLMRobertaForQuestionAnswering] is supported by this example script. Question answering chapter of the 🤗 Hugging Face Course. Question answering task guide Multiple choice [XLMRobertaForMultipleChoice] is supported by this example script and notebook. [TFXLMRobertaForMultipleChoice] is supported by this example script and notebook. Multiple choice task guide 🚀 Deploy A blog post on how to Deploy Serverless XLM RoBERTa on AWS Lambda. XLMRobertaConfig [[autodoc]] XLMRobertaConfig XLMRobertaTokenizer [[autodoc]] XLMRobertaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary XLMRobertaTokenizerFast [[autodoc]] XLMRobertaTokenizerFast XLMRobertaModel [[autodoc]] XLMRobertaModel - forward XLMRobertaForCausalLM [[autodoc]] XLMRobertaForCausalLM - forward XLMRobertaForMaskedLM [[autodoc]] XLMRobertaForMaskedLM - forward XLMRobertaForSequenceClassification [[autodoc]] XLMRobertaForSequenceClassification - forward XLMRobertaForMultipleChoice [[autodoc]] XLMRobertaForMultipleChoice - forward XLMRobertaForTokenClassification [[autodoc]] XLMRobertaForTokenClassification - forward XLMRobertaForQuestionAnswering [[autodoc]] XLMRobertaForQuestionAnswering - forward TFXLMRobertaModel [[autodoc]] TFXLMRobertaModel - call TFXLMRobertaForCausalLM [[autodoc]] TFXLMRobertaForCausalLM - call TFXLMRobertaForMaskedLM [[autodoc]] TFXLMRobertaForMaskedLM - call TFXLMRobertaForSequenceClassification [[autodoc]] TFXLMRobertaForSequenceClassification - call TFXLMRobertaForMultipleChoice [[autodoc]] TFXLMRobertaForMultipleChoice - call TFXLMRobertaForTokenClassification [[autodoc]] TFXLMRobertaForTokenClassification - call TFXLMRobertaForQuestionAnswering [[autodoc]] TFXLMRobertaForQuestionAnswering - call FlaxXLMRobertaModel [[autodoc]] FlaxXLMRobertaModel - call FlaxXLMRobertaForCausalLM [[autodoc]] FlaxXLMRobertaForCausalLM - call FlaxXLMRobertaForMaskedLM [[autodoc]] FlaxXLMRobertaForMaskedLM - call FlaxXLMRobertaForSequenceClassification [[autodoc]] FlaxXLMRobertaForSequenceClassification - call FlaxXLMRobertaForMultipleChoice [[autodoc]] FlaxXLMRobertaForMultipleChoice - call FlaxXLMRobertaForTokenClassification [[autodoc]] FlaxXLMRobertaForTokenClassification - call FlaxXLMRobertaForQuestionAnswering [[autodoc]] FlaxXLMRobertaForQuestionAnswering - call
ViTMSN Overview The ViTMSN model was proposed in Masked Siamese Networks for Label-Efficient Learning by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. The paper presents a joint-embedding architecture to match the prototypes of masked patches with that of the unmasked patches. With this setup, their method yields excellent performance in the low-shot and extreme low-shot regimes. The abstract from the paper is the following: We propose Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations. Our approach matches the representation of an image view containing randomly masked patches to the representation of the original unmasked image. This self-supervised pre-training strategy is particularly scalable when applied to Vision Transformers since only the unmasked patches are processed by the network. As a result, MSNs improve the scalability of joint-embedding architectures, while producing representations of a high semantic level that perform competitively on low-shot image classification. For instance, on ImageNet-1K, with only 5,000 annotated images, our base MSN model achieves 72.4% top-1 accuracy, and with 1% of ImageNet-1K labels, we achieve 75.7% top-1 accuracy, setting a new state-of-the-art for self-supervised learning on this benchmark. Tips: MSN (masked siamese networks) is a method for self-supervised pre-training of Vision Transformers (ViTs). The pre-training objective is to match the prototypes assigned to the unmasked views of the images to that of the masked views of the same images. The authors have only released pre-trained weights of the backbone (ImageNet-1k pre-training). So, to use that on your own image classification dataset, use the [ViTMSNForImageClassification] class which is initialized from [ViTMSNModel]. Follow this notebook for a detailed tutorial on fine-tuning. MSN is particularly useful in the low-shot and extreme low-shot regimes. Notably, it achieves 75.7% top-1 accuracy with only 1% of ImageNet-1K labels when fine-tuned. MSN architecture. Taken from the original paper. This model was contributed by sayakpaul. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT MSN. [ViTMSNForImageClassification] is supported by this example script and notebook. See also: Image classification task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ViTMSNConfig [[autodoc]] ViTMSNConfig ViTMSNModel [[autodoc]] ViTMSNModel - forward ViTMSNForImageClassification [[autodoc]] ViTMSNForImageClassification - forward
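Since only the backbone weights are released, a common first step is to extract features with [ViTMSNModel] (or to fine-tune [ViTMSNForImageClassification], whose classification head starts out randomly initialized). A rough sketch follows; the facebook/vit-msn-small checkpoint name and the COCO image URL are illustrative assumptions.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTMSNModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("facebook/vit-msn-small")
model = ViTMSNModel.from_pretrained("facebook/vit-msn-small")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# patch-level features from the self-supervised backbone
print(outputs.last_hidden_state.shape)
```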
Time Series Transformer This is a recently introduced model so the API hasn't been tested extensively. There may be some bugs or slight breaking changes to be fixed in the future. If you see something strange, file a Github Issue. Overview The Time Series Transformer model is a vanilla encoder-decoder Transformer for time series forecasting. Tips: Similar to other models in the library, [TimeSeriesTransformerModel] is the raw Transformer without any head on top, and [TimeSeriesTransformerForPrediction] adds a distribution head on top of the former, which can be used for time-series forecasting. Note that this is a so-called probabilistic forecasting model, not a point forecasting model. This means that the model learns a distribution, from which one can sample. The model doesn't directly output values. [TimeSeriesTransformerForPrediction] consists of 2 blocks: an encoder, which takes a context_length of time series values as input (called past_values), and a decoder, which predicts a prediction_length of time series values into the future (called future_values). During training, one needs to provide pairs of past_values and future_values to the model. In addition to the raw past_values and future_values, one typically provides additional features to the model. These can be the following: past_time_features: temporal features which the model will add to past_values. These serve as "positional encodings" for the Transformer encoder. Examples are "day of the month", "month of the year", etc. as scalar values (and then stacked together as a vector). e.g. if a given time-series value was obtained on the 11th of August, then one could have [11, 8] as time feature vector (11 being "day of the month", 8 being "month of the year"). future_time_features: temporal features which the model will add to future_values. These serve as "positional encodings" for the Transformer decoder. Examples are "day of the month", "month of the year", etc. as scalar values (and then stacked together as a vector). e.g. if a given time-series value was obtained on the 11th of August, then one could have [11, 8] as time feature vector (11 being "day of the month", 8 being "month of the year"). static_categorical_features: categorical features which are static over time (i.e., have the same value for all past_values and future_values). An example here is the store ID or region ID that identifies a given time-series. Note that these features need to be known for ALL data points (also those in the future). static_real_features: real-valued features which are static over time (i.e., have the same value for all past_values and future_values). An example here is the image representation of the product for which you have the time-series values (like the ResNet embedding of a "shoe" picture, if your time-series is about the sales of shoes). Note that these features need to be known for ALL data points (also those in the future). The model is trained using "teacher-forcing", similar to how a Transformer is trained for machine translation. This means that, during training, one shifts the future_values one position to the right as input to the decoder, prepended by the last value of past_values. At each time step, the model needs to predict the next target. So the set-up of training is similar to a GPT model for language, except that there's no notion of decoder_start_token_id (we just use the last value of the context as initial input for the decoder). 
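To make the input format above concrete, here is a schematic training step with randomly generated tensors. The configuration values (prediction length, context length, number of time features, lags) are arbitrary toy choices rather than recommendations, and in practice these tensors would be built from a real dataset.

```python
import torch
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerForPrediction

# toy configuration; tune these to your own dataset
config = TimeSeriesTransformerConfig(
    prediction_length=24,
    context_length=48,
    num_time_features=2,  # e.g. day of month + month of year
    num_static_categorical_features=0,
    num_static_real_features=0,
    lags_sequence=[1, 2, 3],
)
model = TimeSeriesTransformerForPrediction(config)

batch_size = 4
# the past window must also cover the largest lag
past_length = config.context_length + max(config.lags_sequence)

past_values = torch.randn(batch_size, past_length)
past_time_features = torch.randn(batch_size, past_length, config.num_time_features)
past_observed_mask = torch.ones(batch_size, past_length)
future_values = torch.randn(batch_size, config.prediction_length)
future_time_features = torch.randn(batch_size, config.prediction_length, config.num_time_features)

# training: providing future_values makes the distribution head return a loss
outputs = model(
    past_values=past_values,
    past_time_features=past_time_features,
    past_observed_mask=past_observed_mask,
    future_values=future_values,
    future_time_features=future_time_features,
)
outputs.loss.backward()
```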
At inference time, we give the final value of the past_values as input to the decoder. Next, we can sample from the model to make a prediction at the next time step, which is then fed to the decoder in order to make the next prediction (also called autoregressive generation). This model was contributed by kashif. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. Check out the Time Series Transformer blog-post in HuggingFace blog: Probabilistic Time Series Forecasting with 🤗 Transformers TimeSeriesTransformerConfig [[autodoc]] TimeSeriesTransformerConfig TimeSeriesTransformerModel [[autodoc]] TimeSeriesTransformerModel - forward TimeSeriesTransformerForPrediction [[autodoc]] TimeSeriesTransformerForPrediction - forward
FLAVA Overview The FLAVA model was proposed in FLAVA: A Foundational Language And Vision Alignment Model by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela and is accepted at CVPR 2022. The paper aims at creating a single unified foundation model which can work across vision, language as well as vision-and-language multimodal tasks. The abstract from the paper is the following: State-of-the-art vision and vision-and-language models rely on large-scale visio-linguistic pretraining for obtaining good performance on a variety of downstream tasks. Generally, such models are often either cross-modal (contrastive) or multi-modal (with earlier fusion) but not both; and they often only target specific modalities or tasks. A promising direction would be to use a single holistic universal model, as a "foundation", that targets all modalities at once -- a true vision and language foundation model should be good at vision tasks, language tasks, and cross- and multi-modal vision and language tasks. We introduce FLAVA as such a model and demonstrate impressive performance on a wide range of 35 tasks spanning these target modalities. This model was contributed by aps. The original code can be found here. FlavaConfig [[autodoc]] FlavaConfig FlavaTextConfig [[autodoc]] FlavaTextConfig FlavaImageConfig [[autodoc]] FlavaImageConfig FlavaMultimodalConfig [[autodoc]] FlavaMultimodalConfig FlavaImageCodebookConfig [[autodoc]] FlavaImageCodebookConfig FlavaProcessor [[autodoc]] FlavaProcessor FlavaFeatureExtractor [[autodoc]] FlavaFeatureExtractor FlavaImageProcessor [[autodoc]] FlavaImageProcessor - preprocess FlavaForPreTraining [[autodoc]] FlavaForPreTraining - forward FlavaModel [[autodoc]] FlavaModel - forward - get_text_features - get_image_features FlavaImageCodebook [[autodoc]] FlavaImageCodebook - forward - get_codebook_indices - get_codebook_probs FlavaTextModel [[autodoc]] FlavaTextModel - forward FlavaImageModel [[autodoc]] FlavaImageModel - forward FlavaMultimodalModel [[autodoc]] FlavaMultimodalModel - forward
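Since FLAVA exposes its unimodal and multimodal encoders through a single [FlavaModel], a joint forward pass can be sketched roughly as follows. The facebook/flava-full checkpoint name and the COCO image URL are assumptions used purely for illustration.

```python
import requests
import torch
from PIL import Image
from transformers import FlavaProcessor, FlavaModel

model = FlavaModel.from_pretrained("facebook/flava-full")
processor = FlavaProcessor.from_pretrained("facebook/flava-full")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["a photo of two cats"], images=[image], return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# unimodal and multimodal sequence embeddings
print(outputs.image_embeddings.shape)
print(outputs.text_embeddings.shape)
print(outputs.multimodal_embeddings.shape)
```

The get_text_features and get_image_features methods listed below return the corresponding unimodal embeddings when only one modality is available.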
GPT-NeoX Overview We introduce GPT-NeoX-20B, a 20 billion parameter autoregressive language model trained on the Pile, whose weights will be made freely and openly available to the public through a permissive license. It is, to the best of our knowledge, the largest dense autoregressive model that has publicly available weights at the time of submission. In this work, we describe GPT-NeoX-20B's architecture and training and evaluate its performance on a range of language-understanding, mathematics, and knowledge-based tasks. We find that GPT-NeoX-20B is a particularly powerful few-shot reasoner and gains far more in performance when evaluated five-shot than similarly sized GPT-3 and FairSeq models. We open-source the training and evaluation code, as well as the model weights, at https://github.com/EleutherAI/gpt-neox. Development of the model was led by Sid Black, Stella Biderman and Eric Hallahan, and the model was trained with the generous support of CoreWeave. GPT-NeoX-20B was trained with fp16, thus it is recommended to initialize the model as follows:

```python
from transformers import GPTNeoXForCausalLM

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b").half().cuda()
```

GPT-NeoX-20B also has a different tokenizer from the one used in GPT-J-6B and GPT-Neo. The new tokenizer allocates additional tokens to whitespace characters, making the model more suitable for certain tasks like code generation. Generation The generate() method can be used to generate text using the GPT-NeoX model.

```python
from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b")

prompt = "GPTNeoX20B is a 20B-parameter autoregressive Transformer model developed by EleutherAI."

input_ids = tokenizer(prompt, return_tensors="pt").input_ids

gen_tokens = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.9,
    max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
```

Documentation resources Causal language modeling task guide GPTNeoXConfig [[autodoc]] GPTNeoXConfig GPTNeoXTokenizerFast [[autodoc]] GPTNeoXTokenizerFast GPTNeoXModel [[autodoc]] GPTNeoXModel - forward GPTNeoXForCausalLM [[autodoc]] GPTNeoXForCausalLM - forward GPTNeoXForQuestionAnswering [[autodoc]] GPTNeoXForQuestionAnswering - forward GPTNeoXForSequenceClassification [[autodoc]] GPTNeoXForSequenceClassification - forward GPTNeoXForTokenClassification [[autodoc]] GPTNeoXForTokenClassification - forward
BARTpho Overview The BARTpho model was proposed in BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. The abstract from the paper is the following: We present BARTpho with two versions -- BARTpho_word and BARTpho_syllable -- the first public large-scale monolingual sequence-to-sequence models pre-trained for Vietnamese. Our BARTpho uses the "large" architecture and pre-training scheme of the sequence-to-sequence denoising model BART, thus especially suitable for generative NLP tasks. Experiments on a downstream task of Vietnamese text summarization show that in both automatic and human evaluations, our BARTpho outperforms the strong baseline mBART and improves the state-of-the-art. We release BARTpho to facilitate future research and applications of generative Vietnamese NLP tasks. Example of use:

```python
import torch
from transformers import AutoModel, AutoTokenizer

bartpho = AutoModel.from_pretrained("vinai/bartpho-syllable")
tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable")

line = "Chúng tôi là những nghiên cứu viên."
input_ids = tokenizer(line, return_tensors="pt")

with torch.no_grad():
    features = bartpho(**input_ids)  # Models outputs are now tuples
```

With TensorFlow 2.0+:

```python
from transformers import TFAutoModel

bartpho = TFAutoModel.from_pretrained("vinai/bartpho-syllable")
input_ids = tokenizer(line, return_tensors="tf")
features = bartpho(**input_ids)
```

Tips: Following mBART, BARTpho uses the "large" architecture of BART with an additional layer-normalization layer on top of both the encoder and decoder. Thus, usage examples in the documentation of BART, when adapting to use with BARTpho, should be adjusted by replacing the BART-specialized classes with the mBART-specialized counterparts. For example:

```python
from transformers import MBartForConditionalGeneration

bartpho = MBartForConditionalGeneration.from_pretrained("vinai/bartpho-syllable")
TXT = "Chúng tôi là <mask> nghiên cứu viên."
input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"]
logits = bartpho(input_ids).logits
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, masked_index].softmax(dim=0)
values, predictions = probs.topk(5)
print(tokenizer.decode(predictions).split())
```

This implementation is only for tokenization: "monolingual_vocab_file" consists of Vietnamese-specialized types extracted from the pre-trained SentencePiece model "vocab_file" that is available from the multilingual XLM-RoBERTa. Other languages, if employing this pre-trained multilingual SentencePiece model "vocab_file" for subword segmentation, can reuse BartphoTokenizer with their own language-specialized "monolingual_vocab_file". This model was contributed by dqnguyen. The original code can be found here. BartphoTokenizer [[autodoc]] BartphoTokenizer
Audio Spectrogram Transformer Overview The Audio Spectrogram Transformer model was proposed in AST: Audio Spectrogram Transformer by Yuan Gong, Yu-An Chung, James Glass. The Audio Spectrogram Transformer applies a Vision Transformer to audio, by turning audio into an image (spectrogram). The model obtains state-of-the-art results for audio classification. The abstract from the paper is the following: In the past decade, convolutional neural networks (CNNs) have been widely adopted as the main building block for end-to-end audio classification models, which aim to learn a direct mapping from audio spectrograms to corresponding labels. To better capture long-range global context, a recent trend is to add a self-attention mechanism on top of the CNN, forming a CNN-attention hybrid model. However, it is unclear whether the reliance on a CNN is necessary, and if neural networks purely based on attention are sufficient to obtain good performance in audio classification. In this paper, we answer the question by introducing the Audio Spectrogram Transformer (AST), the first convolution-free, purely attention-based model for audio classification. We evaluate AST on various audio classification benchmarks, where it achieves new state-of-the-art results of 0.485 mAP on AudioSet, 95.6% accuracy on ESC-50, and 98.1% accuracy on Speech Commands V2. Tips: When fine-tuning the Audio Spectrogram Transformer (AST) on your own dataset, it's recommended to take care of the input normalization (to make sure the input has mean of 0 and std of 0.5). [ASTFeatureExtractor] takes care of this. Note that it uses the AudioSet mean and std by default. You can check ast/src/get_norm_stats.py to see how the authors compute the stats for a downstream dataset. Note that the AST needs a low learning rate (the authors use a 10 times smaller learning rate compared to their CNN model proposed in the PSLA paper) and converges quickly, so please search for a suitable learning rate and learning rate scheduler for your task. Audio Spectrogram Transformer architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with the Audio Spectrogram Transformer. A notebook illustrating inference with AST for audio classification can be found here. [ASTForAudioClassification] is supported by this example script and notebook. See also: Audio classification. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ASTConfig [[autodoc]] ASTConfig ASTFeatureExtractor [[autodoc]] ASTFeatureExtractor - call ASTModel [[autodoc]] ASTModel - forward ASTForAudioClassification [[autodoc]] ASTForAudioClassification - forward
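Putting the tips above together, audio classification with a fine-tuned AST checkpoint can be sketched as follows. The MIT/ast-finetuned-audioset-10-10-0.4593 checkpoint and the test dataset are illustrative choices; any AST checkpoint with a classification head is used the same way.

```python
import torch
from datasets import load_dataset
from transformers import ASTFeatureExtractor, ASTForAudioClassification

feature_extractor = ASTFeatureExtractor.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")
model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")

ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
audio = ds[0]["audio"]

# the feature extractor normalizes with the AudioSet mean/std by default (see the tips above)
inputs = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```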
mT5 Overview The mT5 model was presented in mT5: A massively multilingual pre-trained text-to-text transformer by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. The abstract from the paper is the following: The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We detail the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. We also describe a simple technique to prevent "accidental translation" in the zero-shot setting, where a generative model chooses to (partially) translate its prediction into the wrong language. All of the code and model checkpoints used in this work are publicly available. Note: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model. Since mT5 was pre-trained unsupervisedly, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix. Google has released the following variants: google/mt5-small google/mt5-base google/mt5-large google/mt5-xl google/mt5-xxl. This model was contributed by patrickvonplaten. The original code can be found here. Documentation resources Translation task guide Summarization task guide MT5Config [[autodoc]] MT5Config MT5Tokenizer [[autodoc]] MT5Tokenizer See [T5Tokenizer] for all details. MT5TokenizerFast [[autodoc]] MT5TokenizerFast See [T5TokenizerFast] for all details. MT5Model [[autodoc]] MT5Model MT5ForConditionalGeneration [[autodoc]] MT5ForConditionalGeneration MT5EncoderModel [[autodoc]] MT5EncoderModel MT5ForQuestionAnswering [[autodoc]] MT5ForQuestionAnswering TFMT5Model [[autodoc]] TFMT5Model TFMT5ForConditionalGeneration [[autodoc]] TFMT5ForConditionalGeneration TFMT5EncoderModel [[autodoc]] TFMT5EncoderModel FlaxMT5Model [[autodoc]] FlaxMT5Model FlaxMT5ForConditionalGeneration [[autodoc]] FlaxMT5ForConditionalGeneration FlaxMT5EncoderModel [[autodoc]] FlaxMT5EncoderModel
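Because mT5 has to be fine-tuned before it is useful, a single supervised training step (here a toy translation pair, deliberately without a task prefix) can be sketched roughly as follows; the google/mt5-small checkpoint is one of the released variants listed above.

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

# no task prefix is needed for single-task fine-tuning (see the note above)
source = "Das Haus ist wunderbar."
target = "The house is wonderful."

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(text_target=target, return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss
loss.backward()
```

In a real fine-tuning run this step would sit inside a training loop (or be handled by Trainer) over a full dataset.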
Longformer Overview The Longformer model was presented in Longformer: The Long-Document Transformer by Iz Beltagy, Matthew E. Peters, Arman Cohan. The abstract from the paper is the following: Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop and TriviaQA. Tips: Since the Longformer is based on RoBERTa, it doesn't have token_type_ids. You don't need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or </s>). It is a transformer model that replaces the full attention matrices with sparse ones to go faster. Often, the local context (e.g., what are the two tokens left and right?) is enough to take action for a given token. Some preselected input tokens are still given global attention, but the attention matrix has far fewer entries to compute, resulting in a speed-up. See the local attention section for more information. This model was contributed by beltagy. The authors' code can be found here. Longformer Self Attention Longformer self attention employs self attention on both a "local" context and a "global" context. Most tokens only attend "locally" to each other meaning that each token attends to its \(\frac{1}{2} w\) previous tokens and \(\frac{1}{2} w\) succeeding tokens with \(w\) being the window length as defined in config.attention_window. Note that config.attention_window can be of type List to define a different \(w\) for each layer. A selected few tokens attend "globally" to all other tokens, as it is conventionally done for all tokens in BertSelfAttention. Note that "locally" and "globally" attending tokens are projected by different query, key and value matrices. Also note that every "locally" attending token not only attends to tokens within its window \(w\), but also to all "globally" attending tokens so that global attention is symmetric. The user can define which tokens attend "locally" and which tokens attend "globally" by setting the tensor global_attention_mask at run-time appropriately. All Longformer models employ the following logic for global_attention_mask: 0: the token attends "locally", 1: the token attends "globally". For more information please also refer to [~LongformerModel.forward] method. Using Longformer self attention, the memory and time complexity of the query-key matmul operation, which usually represents the memory and time bottleneck, can be reduced from \(\mathcal{O}(n_s \times n_s)\) to \(\mathcal{O}(n_s \times w)\), with \(n_s\) being the sequence length and \(w\) being the average window size. It is assumed that the number of "globally" attending tokens is insignificant as compared to the number of "locally" attending tokens. 
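The following sketch shows one way to set global_attention_mask by hand, here giving global attention to the tokens of the first segment of a question/document pair. The allenai/longformer-base-4096 checkpoint and the toy inputs are illustrative only.

```python
import torch
from transformers import LongformerModel, LongformerTokenizer

model = LongformerModel.from_pretrained("allenai/longformer-base-4096")
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")

question = "What does the Longformer attention window control?"
long_document = "The Longformer paper introduces a sliding-window attention mechanism. " * 200

inputs = tokenizer(question, long_document, return_tensors="pt", truncation=True, max_length=4096)

# 0 -> local attention, 1 -> global attention; here every question token attends globally
global_attention_mask = torch.zeros_like(inputs["input_ids"])
question_length = len(tokenizer(question)["input_ids"])
global_attention_mask[:, :question_length] = 1

outputs = model(**inputs, global_attention_mask=global_attention_mask)
print(outputs.last_hidden_state.shape)
```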
For more information, please refer to the official paper. Training [LongformerForMaskedLM] is trained the exact same way [RobertaForMaskedLM] is trained and should be used as follows:

```python
from transformers import LongformerForMaskedLM, LongformerTokenizer

model = LongformerForMaskedLM.from_pretrained("allenai/longformer-base-4096")
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")

input_ids = tokenizer.encode("This is a sentence from <mask> training data", return_tensors="pt")
mlm_labels = tokenizer.encode("This is a sentence from the training data", return_tensors="pt")

loss = model(input_ids, labels=mlm_labels).loss
```

Documentation resources Text classification task guide Token classification task guide Question answering task guide Masked language modeling task guide Multiple choice task guide LongformerConfig [[autodoc]] LongformerConfig LongformerTokenizer [[autodoc]] LongformerTokenizer LongformerTokenizerFast [[autodoc]] LongformerTokenizerFast Longformer specific outputs [[autodoc]] models.longformer.modeling_longformer.LongformerBaseModelOutput [[autodoc]] models.longformer.modeling_longformer.LongformerBaseModelOutputWithPooling [[autodoc]] models.longformer.modeling_longformer.LongformerMaskedLMOutput [[autodoc]] models.longformer.modeling_longformer.LongformerQuestionAnsweringModelOutput [[autodoc]] models.longformer.modeling_longformer.LongformerSequenceClassifierOutput [[autodoc]] models.longformer.modeling_longformer.LongformerMultipleChoiceModelOutput [[autodoc]] models.longformer.modeling_longformer.LongformerTokenClassifierOutput [[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerBaseModelOutput [[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerBaseModelOutputWithPooling [[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerMaskedLMOutput [[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerQuestionAnsweringModelOutput [[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerSequenceClassifierOutput [[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerMultipleChoiceModelOutput [[autodoc]] models.longformer.modeling_tf_longformer.TFLongformerTokenClassifierOutput LongformerModel [[autodoc]] LongformerModel - forward LongformerForMaskedLM [[autodoc]] LongformerForMaskedLM - forward LongformerForSequenceClassification [[autodoc]] LongformerForSequenceClassification - forward LongformerForMultipleChoice [[autodoc]] LongformerForMultipleChoice - forward LongformerForTokenClassification [[autodoc]] LongformerForTokenClassification - forward LongformerForQuestionAnswering [[autodoc]] LongformerForQuestionAnswering - forward TFLongformerModel [[autodoc]] TFLongformerModel - call TFLongformerForMaskedLM [[autodoc]] TFLongformerForMaskedLM - call TFLongformerForQuestionAnswering [[autodoc]] TFLongformerForQuestionAnswering - call TFLongformerForSequenceClassification [[autodoc]] TFLongformerForSequenceClassification - call TFLongformerForTokenClassification [[autodoc]] TFLongformerForTokenClassification - call TFLongformerForMultipleChoice [[autodoc]] TFLongformerForMultipleChoice - call
ErnieM Overview The ErnieM model was proposed in ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora by Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang. The abstract from the paper is the following: Recent studies have demonstrated that pre-trained cross-lingual models achieve impressive performance in downstream cross-lingual tasks. This improvement benefits from learning a large amount of monolingual and parallel corpora. Although it is generally acknowledged that parallel corpora are critical for improving the model performance, existing methods are often constrained by the size of parallel corpora, especially for low-resource languages. In this paper, we propose ERNIE-M, a new training method that encourages the model to align the representation of multiple languages with monolingual corpora, to overcome the constraint that the parallel corpus size places on the model performance. Our key insight is to integrate back-translation into the pre-training process. We generate pseudo-parallel sentence pairs on a monolingual corpus to enable the learning of semantic alignments between different languages, thereby enhancing the semantic modeling of cross-lingual models. Experimental results show that ERNIE-M outperforms existing cross-lingual models and delivers new state-of-the-art results in various cross-lingual downstream tasks. Tips: Ernie-M is a BERT-like model so it is a stacked Transformer Encoder. Instead of using MaskedLM for pretraining (like BERT) the authors used two novel techniques: Cross-attention Masked Language Modeling and Back-translation Masked Language Modeling. For now these two LMHead objectives are not implemented here. It is a multilingual language model. Next Sentence Prediction was not used in pretraining process. This model was contributed by Susnato Dhar. The original code can be found here. Documentation resources Text classification task guide Token classification task guide Question answering task guide Multiple choice task guide ErnieMConfig [[autodoc]] ErnieMConfig ErnieMTokenizer [[autodoc]] ErnieMTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ErnieMModel [[autodoc]] ErnieMModel - forward ErnieMForSequenceClassification [[autodoc]] ErnieMForSequenceClassification - forward ErnieMForMultipleChoice [[autodoc]] ErnieMForMultipleChoice - forward ErnieMForTokenClassification [[autodoc]] ErnieMForTokenClassification - forward ErnieMForQuestionAnswering [[autodoc]] ErnieMForQuestionAnswering - forward ErnieMForInformationExtraction [[autodoc]] ErnieMForInformationExtraction - forward
X-CLIP Overview The X-CLIP model was proposed in Expanding Language-Image Pretrained Models for General Video Recognition by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling. X-CLIP is a minimal extension of CLIP for video. The model consists of a text encoder, a cross-frame vision encoder, a multi-frame integration Transformer, and a video-specific prompt generator. The abstract from the paper is the following: Contrastive language-image pretraining has shown great success in learning visual-textual joint representation from web-scale data, demonstrating remarkable "zero-shot" generalization ability for various image tasks. However, how to effectively expand such new language-image pretraining methods to video domains is still an open problem. In this work, we present a simple yet effective approach that adapts the pretrained language-image models to video recognition directly, instead of pretraining a new model from scratch. More concretely, to capture the long-range dependencies of frames along the temporal dimension, we propose a cross-frame attention mechanism that explicitly exchanges information across frames. Such module is lightweight and can be plugged into pretrained language-image models seamlessly. Moreover, we propose a video-specific prompting scheme, which leverages video content information for generating discriminative textual prompts. Extensive experiments demonstrate that our approach is effective and can be generalized to different video recognition scenarios. In particular, under fully-supervised settings, our approach achieves a top-1 accuracy of 87.1% on Kinectics-400, while using 12 times fewer FLOPs compared with Swin-L and ViViT-H. In zero-shot experiments, our approach surpasses the current state-of-the-art methods by +7.6% and +14.9% in terms of top-1 accuracy under two popular protocols. In few-shot scenarios, our approach outperforms previous best methods by +32.1% and +23.1% when the labeled data is extremely limited. Tips: Usage of X-CLIP is identical to CLIP. X-CLIP architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with X-CLIP. Demo notebooks for X-CLIP can be found here. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. XCLIPProcessor [[autodoc]] XCLIPProcessor XCLIPConfig [[autodoc]] XCLIPConfig - from_text_vision_configs XCLIPTextConfig [[autodoc]] XCLIPTextConfig XCLIPVisionConfig [[autodoc]] XCLIPVisionConfig XCLIPModel [[autodoc]] XCLIPModel - forward - get_text_features - get_video_features XCLIPTextModel [[autodoc]] XCLIPTextModel - forward XCLIPVisionModel [[autodoc]] XCLIPVisionModel - forward
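Because usage mirrors CLIP, video-text matching can be sketched roughly like the following. The microsoft/xclip-base-patch32 checkpoint, the assumption that it expects 8 frames, and the random frames standing in for a real clip are all illustrative choices; in practice you would decode frames from an actual video.

```python
import numpy as np
import torch
from transformers import XCLIPProcessor, XCLIPModel

processor = XCLIPProcessor.from_pretrained("microsoft/xclip-base-patch32")
model = XCLIPModel.from_pretrained("microsoft/xclip-base-patch32")

# 8 random RGB frames stand in for a real video clip
video = list(np.random.randint(0, 255, (8, 224, 224, 3), dtype=np.uint8))
texts = ["playing sports", "eating spaghetti", "playing guitar"]

inputs = processor(text=texts, videos=video, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# video-text similarity scores, turned into probabilities over the candidate texts
probs = outputs.logits_per_video.softmax(dim=1)
print(probs)
```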
Whisper Overview The Whisper model was proposed in Robust Speech Recognition via Large-Scale Weak Supervision by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. The abstract from the paper is the following: We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zeroshot transfer setting without the need for any finetuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing. Tips: The model usually performs well without requiring any finetuning. The architecture follows a classic encoder-decoder architecture, which means that it relies on the [~generation.GenerationMixin.generate] function for inference. Inference is currently only implemented for short-form, i.e. the audio is pre-segmented into <=30s segments. Long-form (including timestamps) will be implemented in a future release. One can use [WhisperProcessor] to prepare audio for the model, and decode the predicted IDs back into text. This model was contributed by Arthur Zucker. The Tensorflow version of this model was contributed by amyeroberts. The original code can be found here. WhisperConfig [[autodoc]] WhisperConfig WhisperTokenizer [[autodoc]] WhisperTokenizer - set_prefix_tokens - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary WhisperTokenizerFast [[autodoc]] WhisperTokenizerFast - set_prefix_tokens - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary WhisperFeatureExtractor [[autodoc]] WhisperFeatureExtractor - call WhisperProcessor [[autodoc]] WhisperProcessor - call - from_pretrained - save_pretrained - batch_decode - decode WhisperModel [[autodoc]] WhisperModel - forward - _mask_input_features WhisperForConditionalGeneration [[autodoc]] WhisperForConditionalGeneration - forward WhisperForAudioClassification [[autodoc]] WhisperForAudioClassification - forward TFWhisperModel [[autodoc]] TFWhisperModel - call TFWhisperForConditionalGeneration [[autodoc]] TFWhisperForConditionalGeneration - call FlaxWhisperModel [[autodoc]] FlaxWhisperModel - call FlaxWhisperForConditionalGeneration [[autodoc]] FlaxWhisperForConditionalGeneration - call FlaxWhisperForAudioClassification [[autodoc]] FlaxWhisperForAudioClassification - call
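As a minimal illustration of the processor plus generate workflow described above, the sketch below transcribes a short clip; the small openai/whisper-tiny.en checkpoint and the test dataset are only example choices.

```python
from datasets import load_dataset
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")

ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
sample = ds[0]["audio"]

# the processor computes log-mel features and pads/truncates to 30s windows
input_features = processor(
    sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
).input_features

predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription)
```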
BLIP Overview The BLIP model was proposed in BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi. BLIP is a model that is able to perform various multi-modal tasks including - Visual Question Answering - Image-Text retrieval (Image-text matching) - Image Captioning The abstract from the paper is the following: Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released. This model was contributed by ybelkada. The original code can be found here. Resources Jupyter notebook on how to fine-tune BLIP for image captioning on a custom dataset BlipConfig [[autodoc]] BlipConfig - from_text_vision_configs BlipTextConfig [[autodoc]] BlipTextConfig BlipVisionConfig [[autodoc]] BlipVisionConfig BlipProcessor [[autodoc]] BlipProcessor BlipImageProcessor [[autodoc]] BlipImageProcessor - preprocess BlipModel [[autodoc]] BlipModel - forward - get_text_features - get_image_features BlipTextModel [[autodoc]] BlipTextModel - forward BlipVisionModel [[autodoc]] BlipVisionModel - forward BlipForConditionalGeneration [[autodoc]] BlipForConditionalGeneration - forward BlipForImageTextRetrieval [[autodoc]] BlipForImageTextRetrieval - forward BlipForQuestionAnswering [[autodoc]] BlipForQuestionAnswering - forward TFBlipModel [[autodoc]] TFBlipModel - call - get_text_features - get_image_features TFBlipTextModel [[autodoc]] TFBlipTextModel - call TFBlipVisionModel [[autodoc]] TFBlipVisionModel - call TFBlipForConditionalGeneration [[autodoc]] TFBlipForConditionalGeneration - call TFBlipForImageTextRetrieval [[autodoc]] TFBlipForImageTextRetrieval - call TFBlipForQuestionAnswering [[autodoc]] TFBlipForQuestionAnswering - call
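For instance, image captioning can be sketched as follows; the Salesforce/blip-image-captioning-base checkpoint and the COCO image URL are just illustrative choices.

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# unconditional captioning; pass a short text prompt as well for a prompted caption
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```

Visual question answering and image-text retrieval follow the same pattern with [BlipForQuestionAnswering] and [BlipForImageTextRetrieval].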
Decision Transformer Overview The Decision Transformer model was proposed in Decision Transformer: Reinforcement Learning via Sequence Modeling by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. The abstract from the paper is the following: We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks. Tips: This version of the model is for tasks where the state is a vector, image-based states will come soon. This model was contributed by edbeeching. The original code can be found here. DecisionTransformerConfig [[autodoc]] DecisionTransformerConfig DecisionTransformerGPT2Model [[autodoc]] DecisionTransformerGPT2Model - forward DecisionTransformerModel [[autodoc]] DecisionTransformerModel - forward
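To make the sequence-modeling interface concrete, here is a rough forward-pass sketch with a randomly initialized model and random tensors. The state and action dimensions are arbitrary toy values; a trained checkpoint would be loaded with from_pretrained instead.

```python
import torch
from transformers import DecisionTransformerConfig, DecisionTransformerModel

# toy configuration: 11-dimensional states and 3-dimensional actions
config = DecisionTransformerConfig(state_dim=11, act_dim=3)
model = DecisionTransformerModel(config)

batch_size, seq_length = 1, 20
states = torch.randn(batch_size, seq_length, config.state_dim)
actions = torch.randn(batch_size, seq_length, config.act_dim)
rewards = torch.randn(batch_size, seq_length, 1)
returns_to_go = torch.randn(batch_size, seq_length, 1)
timesteps = torch.arange(seq_length).unsqueeze(0)
attention_mask = torch.ones(batch_size, seq_length, dtype=torch.long)

with torch.no_grad():
    outputs = model(
        states=states,
        actions=actions,
        rewards=rewards,
        returns_to_go=returns_to_go,
        timesteps=timesteps,
        attention_mask=attention_mask,
    )

# predicted action for the last timestep, conditioned on the desired returns-to-go
print(outputs.action_preds[0, -1])
```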
LeViT Overview The LeViT model was proposed in LeViT: Introducing Convolutions to Vision Transformers by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze. LeViT improves the Vision Transformer (ViT) in performance and efficiency by a few architectural differences such as activation maps with decreasing resolutions in Transformers and the introduction of an attention bias to integrate positional information. The abstract from the paper is the following: *We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks to apply them to transformers, in particular activation maps with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information in vision transformers. As a result, we propose LeVIT: a hybrid neural network for fast inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU. * LeViT Architecture. Taken from the original paper. Tips: Compared to ViT, LeViT models use an additional distillation head to effectively learn from a teacher (which, in the LeViT paper, is a ResNet-like model). The distillation head is learned through backpropagation under supervision of a ResNet-like model. They also draw inspiration from convolutional neural networks to use activation maps with decreasing resolutions to increase the efficiency. There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top of the final hidden state and not using the distillation head, or (2) by placing both a prediction head and distillation head on top of the final hidden state. In that case, the prediction head is trained using regular cross-entropy between the prediction of the head and the ground-truth label, while the distillation prediction head is trained using hard distillation (cross-entropy between the prediction of the distillation head and the label predicted by the teacher). At inference time, one takes the average prediction between both heads as final prediction. (2) is also called "fine-tuning with distillation", because one relies on a teacher that has already been fine-tuned on the downstream dataset. In terms of models, (1) corresponds to [LevitForImageClassification] and (2) corresponds to [LevitForImageClassificationWithTeacher]. All released checkpoints were pre-trained and fine-tuned on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes) only. No external data was used. This is in contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for pre-training. The authors of LeViT released 5 trained LeViT models, which you can directly plug into [LevitModel] or [LevitForImageClassification]. 
Techniques like data augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset (while only using ImageNet-1k for pre-training). The 5 variants available are (all trained on images of size 224x224): facebook/levit-128S, facebook/levit-128, facebook/levit-192, facebook/levit-256 and facebook/levit-384. Note that one should use [LevitImageProcessor] in order to prepare images for the model. [LevitForImageClassificationWithTeacher] currently supports only inference and not training or fine-tuning. You can check out demo notebooks regarding inference as well as fine-tuning on custom data here (you can just replace [ViTFeatureExtractor] by [LevitImageProcessor] and [ViTForImageClassification] by [LevitForImageClassification] or [LevitForImageClassificationWithTeacher]). This model was contributed by anugunj. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LeViT. [LevitForImageClassification] is supported by this example script and notebook. See also: Image classification task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. LevitConfig [[autodoc]] LevitConfig LevitFeatureExtractor [[autodoc]] LevitFeatureExtractor - call LevitImageProcessor [[autodoc]] LevitImageProcessor - preprocess LevitModel [[autodoc]] LevitModel - forward LevitForImageClassification [[autodoc]] LevitForImageClassification - forward LevitForImageClassificationWithTeacher [[autodoc]] LevitForImageClassificationWithTeacher - forward
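As a short illustration of the two-head setup described above, the sketch below runs inference with [LevitForImageClassificationWithTeacher], whose logits are the average of the classification and distillation heads. The facebook/levit-128S checkpoint is one of the released variants listed above; the COCO image URL is only an example input.

```python
import requests
import torch
from PIL import Image
from transformers import LevitImageProcessor, LevitForImageClassificationWithTeacher

processor = LevitImageProcessor.from_pretrained("facebook/levit-128S")
model = LevitForImageClassificationWithTeacher.from_pretrained("facebook/levit-128S")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # average of the prediction and distillation heads

print(model.config.id2label[logits.argmax(-1).item()])
```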
Speech2Text Overview The Speech2Text model was proposed in fairseq S2T: Fast Speech-to-Text Modeling with fairseq by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. It's a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the transcripts/translations autoregressively. Speech2Text has been fine-tuned on several datasets for ASR and ST: LibriSpeech, CoVoST 2, MuST-C. This model was contributed by valhalla. The original code can be found here. Inference Speech2Text is a speech model that accepts a float tensor of log-mel filter-bank features extracted from the speech signal. It's a transformer-based seq2seq model, so the transcripts/translations are generated autoregressively. The generate() method can be used for inference. The [Speech2TextFeatureExtractor] class is responsible for extracting the log-mel filter-bank features. The [Speech2TextProcessor] wraps [Speech2TextFeatureExtractor] and [Speech2TextTokenizer] into a single instance to both extract the input features and decode the predicted token ids. The feature extractor depends on torchaudio and the tokenizer depends on sentencepiece so be sure to install those packages before running the examples. You could either install those as extra speech dependencies with pip install "transformers[speech, sentencepiece]" or install the packages separately with pip install torchaudio sentencepiece. Also torchaudio requires the development version of the libsndfile package which can be installed via a system package manager. On Ubuntu it can be installed as follows: apt install libsndfile1-dev ASR and Speech Translation

```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")

ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")

inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])

transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
transcription
# ['mister quilter is the apostle of the middle classes and we are glad to welcome his gospel']
```

Multilingual speech translation For multilingual speech translation models, eos_token_id is used as the decoder_start_token_id and the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the forced_bos_token_id parameter to the generate() method. The following example shows how to translate English speech to French text using the facebook/s2t-medium-mustc-multilingual-st checkpoint. 
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-mustc-multilingual-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-mustc-multilingual-st")

ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")

inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(
    inputs["input_features"],
    attention_mask=inputs["attention_mask"],
    forced_bos_token_id=processor.tokenizer.lang_code_to_id["fr"],
)

translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
translation
# ["(Vidéo) Si M. Kilder est l'apossible des classes moyennes, et nous sommes heureux d'être accueillis dans son évangile."]
```

See the model hub to look for Speech2Text checkpoints. Speech2TextConfig [[autodoc]] Speech2TextConfig Speech2TextTokenizer [[autodoc]] Speech2TextTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary Speech2TextFeatureExtractor [[autodoc]] Speech2TextFeatureExtractor - call Speech2TextProcessor [[autodoc]] Speech2TextProcessor - call - from_pretrained - save_pretrained - batch_decode - decode Speech2TextModel [[autodoc]] Speech2TextModel - forward Speech2TextForConditionalGeneration [[autodoc]] Speech2TextForConditionalGeneration - forward TFSpeech2TextModel [[autodoc]] TFSpeech2TextModel - call TFSpeech2TextForConditionalGeneration [[autodoc]] TFSpeech2TextForConditionalGeneration - call
TrOCR Overview The TrOCR model was proposed in TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. TrOCR consists of an image Transformer encoder and an autoregressive text Transformer decoder to perform optical character recognition (OCR). The abstract from the paper is the following: Text recognition is a long-standing research problem for document digitalization. Existing approaches for text recognition are usually built based on CNN for image understanding and RNN for char-level text generation. In addition, another language model is usually needed to improve the overall accuracy as a post-processing step. In this paper, we propose an end-to-end text recognition approach with pre-trained image Transformer and text Transformer models, namely TrOCR, which leverages the Transformer architecture for both image understanding and wordpiece-level text generation. The TrOCR model is simple but effective, and can be pre-trained with large-scale synthetic data and fine-tuned with human-labeled datasets. Experiments show that the TrOCR model outperforms the current state-of-the-art models on both printed and handwritten text recognition tasks. TrOCR architecture. Taken from the original paper. Please refer to the [VisionEncoderDecoder] class on how to use this model. This model was contributed by nielsr. The original code can be found here. Tips: The quickest way to get started with TrOCR is by checking the tutorial notebooks, which show how to use the model at inference time as well as fine-tuning on custom data. TrOCR is pre-trained in 2 stages before being fine-tuned on downstream datasets. It achieves state-of-the-art results on both printed (e.g. the SROIE dataset) and handwritten (e.g. the IAM Handwriting dataset) text recognition tasks. For more information, see the official models. TrOCR is always used within the VisionEncoderDecoder framework. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with TrOCR. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A blog post on Accelerating Document AI with TrOCR. A blog post on how to use TrOCR for Document AI. A notebook on how to finetune TrOCR on IAM Handwriting Database using Seq2SeqTrainer. A notebook on inference with TrOCR and Gradio demo. A notebook on fine-tuning TrOCR on the IAM Handwriting Database using native PyTorch. A notebook on evaluating TrOCR on the IAM test set. Causal language modeling task guide. ⚡️ Inference An interactive demo on TrOCR handwritten character recognition. Inference TrOCR's [VisionEncoderDecoder] model accepts images as input and makes use of [~generation.GenerationMixin.generate] to autoregressively generate text given the input image. The [ViTImageProcessor/DeiTImageProcessor] class is responsible for preprocessing the input image and [RobertaTokenizer/XLMRobertaTokenizer] decodes the generated target tokens to the target string. The [TrOCRProcessor] wraps [ViTImageProcessor/DeiTImageProcessor] and [RobertaTokenizer/XLMRobertaTokenizer] into a single instance to both extract the input features and decode the predicted token ids. 
Step-by-step Optical Character Recognition (OCR)

```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
import requests
from PIL import Image

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

# load image from the IAM dataset
url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

pixel_values = processor(image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

See the model hub to look for TrOCR checkpoints. TrOCRConfig [[autodoc]] TrOCRConfig TrOCRProcessor [[autodoc]] TrOCRProcessor - call - from_pretrained - save_pretrained - batch_decode - decode TrOCRForCausalLM [[autodoc]] TrOCRForCausalLM - forward
OpenAI GPT Overview OpenAI GPT model was proposed in Improving Language Understanding by Generative Pre-Training by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. It's a causal (unidirectional) transformer pre-trained using language modeling on a large corpus with long range dependencies, the Toronto Book Corpus. The abstract from the paper is the following: Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pretraining of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. Tips: GPT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. GPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows GPT to generate syntactically coherent text as it can be observed in the run_generation.py example script. Write With Transformer is a webapp created and hosted by Hugging Face showcasing the generative capabilities of several models. GPT is one of them. This model was contributed by thomwolf. The original code can be found here. Note: If you want to reproduce the original tokenization process of the OpenAI GPT paper, you will need to install ftfy and SpaCy: pip install spacy ftfy==4.4.3 python -m spacy download en If you don't install ftfy and SpaCy, the [OpenAIGPTTokenizer] will default to tokenize using BERT's BasicTokenizer followed by Byte-Pair Encoding (which should be fine for most usage, don't worry). Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OpenAI GPT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A blog post on outperforming OpenAI GPT-3 with SetFit for text-classification. See also: Text classification task guide A blog on how to Finetune a non-English GPT-2 Model with Hugging Face. A blog on How to generate text: using different decoding methods for language generation with Transformers with GPT-2. A blog on Training CodeParrot 🦜 from Scratch, a large GPT-2 model. A blog on Faster Text Generation with TensorFlow and XLA with GPT-2. A blog on How to train a Language Model with Megatron-LM with a GPT-2 model. A notebook on how to finetune GPT2 to generate lyrics in the style of your favorite artist. 
🌎 A notebook on how to finetune GPT2 to generate tweets in the style of your favorite Twitter user. 🌎 Causal language modeling chapter of the 🤗 Hugging Face Course. [OpenAIGPTLMHeadModel] is supported by this causal language modeling example script, text generation example script and notebook. [TFOpenAIGPTLMHeadModel] is supported by this causal language modeling example script and notebook. See also: Causal language modeling task guide A course material on Byte-Pair Encoding tokenization. OpenAIGPTConfig [[autodoc]] OpenAIGPTConfig OpenAIGPTTokenizer [[autodoc]] OpenAIGPTTokenizer - save_vocabulary OpenAIGPTTokenizerFast [[autodoc]] OpenAIGPTTokenizerFast OpenAI specific outputs [[autodoc]] models.openai.modeling_openai.OpenAIGPTDoubleHeadsModelOutput [[autodoc]] models.openai.modeling_tf_openai.TFOpenAIGPTDoubleHeadsModelOutput OpenAIGPTModel [[autodoc]] OpenAIGPTModel - forward OpenAIGPTLMHeadModel [[autodoc]] OpenAIGPTLMHeadModel - forward OpenAIGPTDoubleHeadsModel [[autodoc]] OpenAIGPTDoubleHeadsModel - forward OpenAIGPTForSequenceClassification [[autodoc]] OpenAIGPTForSequenceClassification - forward TFOpenAIGPTModel [[autodoc]] TFOpenAIGPTModel - call TFOpenAIGPTLMHeadModel [[autodoc]] TFOpenAIGPTLMHeadModel - call TFOpenAIGPTDoubleHeadsModel [[autodoc]] TFOpenAIGPTDoubleHeadsModel - call TFOpenAIGPTForSequenceClassification [[autodoc]] TFOpenAIGPTForSequenceClassification - call
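Given the causal language modeling objective mentioned in the tips, a quick way to try generation is the text-generation pipeline; the openai-gpt checkpoint name on the Hub is assumed here, and the prompt is arbitrary.

```python
from transformers import pipeline

# the original 110M-parameter GPT checkpoint
generator = pipeline("text-generation", model="openai-gpt")
print(generator("Natural language understanding is", max_length=30, num_return_sequences=1))
```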