AutoImageProcessor
For vision tasks, an image processor processes the image into the correct input format.
from transformers import AutoImageProcessor
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
AutoBackbone
A Swin backbone with multiple stages for outputting a feature map.
The [AutoBackbone] lets you use pretrained models as backbones to get feature maps from different stages of the backbone. You should specify one of the following parameters in [~PretrainedConfig.from_pretrained]:
out_indices is the index of the layer you'd like to get the feature map from
out_features is the name of the layer you'd like to get the feature map from
These parameters can be used interchangeably, but if you use both, make sure they're aligned with each other! If you don't pass any of these parameters, the backbone returns the feature map from the last layer.
A feature map from the first stage of the backbone. The patch partition refers to the model stem.
For example, in the above diagram, to return the feature map from the first stage of the Swin backbone, you can set out_indices=(1,):
from transformers import AutoImageProcessor, AutoBackbone
import torch
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model = AutoBackbone.from_pretrained("microsoft/swin-tiny-patch4-window7-224", out_indices=(1,))
inputs = processor(image, return_tensors="pt")
outputs = model(**inputs)
feature_maps = outputs.feature_maps
Now you can access the feature_maps object from the first stage of the backbone:
list(feature_maps[0].shape)
[1, 96, 56, 56]
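Alternatively, you can select the same stage by name with out_features. As a hedged sketch, this assumes the Swin config names its stages "stem", "stage1", and so on; check the exact names in model.config.stage_names:
from transformers import AutoBackbone
model = AutoBackbone.from_pretrained("microsoft/swin-tiny-patch4-window7-224", out_features=["stage1"])
outputs = model(**inputs)
feature_maps = outputs.feature_maps  # same first-stage feature map as with out_indices=(1,)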
AutoFeatureExtractor
For audio tasks, a feature extractor processes the audio signal into the correct input format.
Load a feature extractor with [AutoFeatureExtractor.from_pretrained]:
from transformers import AutoFeatureExtractor
feature_extractor = AutoFeatureExtractor.from_pretrained(
"ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"
)
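As a quick, hedged sketch of how the loaded feature extractor is typically called: the array below is a silent placeholder waveform and the 16 kHz sampling rate is an assumption, so use real audio at the rate the checkpoint expects.
import numpy as np
dummy_waveform = np.zeros(16000, dtype=np.float32)  # 1 second of placeholder audio
inputs = feature_extractor(dummy_waveform, sampling_rate=16000, return_tensors="pt")
print(inputs.input_values.shape)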
AutoProcessor
Multimodal tasks require a processor that combines two types of preprocessing tools. For example, the LayoutLMV2 model requires an image processor to handle images and a tokenizer to handle text; a processor combines both of them.
Load a processor with [AutoProcessor.from_pretrained]:
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased")
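A minimal sketch of calling the processor follows. Note that LayoutLMv2's default processor runs OCR on the image, so this assumes pytesseract and the Tesseract binary are installed; the image path is a placeholder:
from PIL import Image
image = Image.open("document.png").convert("RGB")  # placeholder path
encoding = processor(image, return_tensors="pt")  # returns input_ids, bbox, image, attention_mask, ...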
AutoModel
The AutoModelFor classes let you load a pretrained model for a given task (see here for a complete list of available tasks). For example, load a model for sequence classification with [AutoModelForSequenceClassification.from_pretrained]:
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
Easily reuse the same checkpoint to load an architecture for a different task:
from transformers import AutoModelForTokenClassification
model = AutoModelForTokenClassification.from_pretrained("distilbert/distilbert-base-uncased")
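To sanity-check a loaded model, you can run a forward pass as sketched below. Keep in mind that distilbert/distilbert-base-uncased is a base checkpoint, so the classification head is randomly initialized and the prediction is meaningless until you fine-tune:
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
inputs = tokenizer("Hugging Face Transformers is great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)
predicted_class_id = logits.argmax(dim=-1).item()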
For PyTorch models, the from_pretrained() method uses torch.load() which internally uses pickle and is known to be insecure. In general, never load a model that could have come from an untrusted source, or that could have been tampered with. This security risk is partially mitigated for public models hosted on the Hugging Face Hub, which are scanned for malware at each commit. See the Hub documentation for best practices like signed commit verification with GPG.
TensorFlow and Flax checkpoints are not affected, and can be loaded within PyTorch architectures using the from_tf and from_flax kwargs for the from_pretrained method to circumvent this issue.
Generally, we recommend using the AutoTokenizer class and the AutoModelFor class to load pretrained instances of models. This will ensure you load the correct architecture every time. In the next tutorial, learn how to use your newly loaded tokenizer, image processor, feature extractor and processor to preprocess a dataset for fine-tuning.
Finally, the TFAutoModelFor classes let you load a pretrained model for a given task (see here for a complete list of available tasks). For example, load a model for sequence classification with [TFAutoModelForSequenceClassification.from_pretrained]:
from transformers import TFAutoModelForSequenceClassification
model = TFAutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
Easily reuse the same checkpoint to load an architecture for a different task:
from transformers import TFAutoModelForTokenClassification
model = TFAutoModelForTokenClassification.from_pretrained("distilbert/distilbert-base-uncased")
Generally, we recommend using the AutoTokenizer class and the TFAutoModelFor class to load pretrained instances of models. This will ensure you load the correct architecture every time. In the next tutorial, learn how to use your newly loaded tokenizer, image processor, feature extractor and processor to preprocess a dataset for fine-tuning.
CPU inference
With some optimizations, it is possible to efficiently run large model inference on a CPU. One of these optimization techniques involves compiling the PyTorch code into an intermediate format for high-performance environments like C++. The other technique fuses multiple operations into one kernel to reduce the overhead of running each operation separately.
You'll learn how to use BetterTransformer for faster inference, and how to convert your PyTorch code to TorchScript. If you're using an Intel CPU, you can also use graph optimizations from Intel Extension for PyTorch to boost inference speed even more. Finally, learn how to use 🤗 Optimum to accelerate inference with ONNX Runtime or OpenVINO (if you're using an Intel CPU).
BetterTransformer
BetterTransformer accelerates inference with its fastpath (native PyTorch specialized implementation of Transformer functions) execution. The two optimizations in the fastpath execution are:
fusion, which combines multiple sequential operations into a single "kernel" to reduce the number of computation steps
exploiting the inherent sparsity of padding tokens (using nested tensors) to skip unnecessary computation on them
BetterTransformer also converts all attention operations to use the more memory-efficient scaled dot product attention.
BetterTransformer is not supported for all models. Check this list to see if a model supports BetterTransformer.
Before you start, make sure you have 🤗 Optimum installed.
Enable BetterTransformer with the [PreTrainedModel.to_bettertransformer] method:
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder")
model.to_bettertransformer()
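After conversion, the model is used exactly as before (forward passes, generate(), pipelines). If you later need the canonical Transformers implementation back, for example before saving the model, you can revert the conversion; the sketch below assumes the reverse_bettertransformer method available in recent Transformers releases:
model = model.reverse_bettertransformer()
model.save_pretrained("saved_model")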
TorchScript
TorchScript is an intermediate PyTorch model representation that can be run in production environments where performance is important. You can train a model in PyTorch and then export it to TorchScript to free the model from Python performance constraints. PyTorch traces a model to return a [ScriptFunction] that is optimized with just-in-time compilation (JIT). Compared to the default eager mode, JIT mode in PyTorch typically yields better performance for inference using optimization techniques like operator fusion.
For a gentle introduction to TorchScript, see the Introduction to PyTorch TorchScript tutorial.
With the [Trainer] class, you can enable JIT mode for CPU inference by setting the --jit_mode_eval flag:
python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
--no_cuda \
--jit_mode_eval
For PyTorch >= 1.14.0, JIT mode can benefit any model for prediction and evaluation, since dict inputs are supported in jit.trace.
For PyTorch < 1.14.0, JIT mode can benefit a model whose forward parameter order matches the tuple input order in jit.trace, such as a question-answering model. If the forward parameter order does not match the tuple input order, as in a text classification model, jit.trace fails; we catch the exception and fall back to eager mode, and a log message notifies the user.
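Outside of [Trainer], the same idea amounts to tracing the model yourself with torch.jit.trace. The following is only a rough sketch: the checkpoint is reused from the command above, torchscript=True makes the model return tracing-friendly tuples, and the example question/context pair is arbitrary:
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

checkpoint = "csarron/bert-base-uncased-squad-v1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint, torchscript=True)
model.eval()

inputs = tokenizer("Where does Philipp live?", "My name is Philipp and I live in Nuremberg.", return_tensors="pt")
with torch.no_grad():
    # trace with a tuple of example tensors, then reuse the traced module for inference
    traced_model = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))
    outputs = traced_model(inputs["input_ids"], inputs["attention_mask"])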
IPEX graph optimization
Intel® Extension for PyTorch (IPEX) provides further optimizations in JIT mode for Intel CPUs, and we recommend combining it with TorchScript for even faster performance. The IPEX graph optimization fuses operations like Multi-head attention, Concat Linear, Linear + Add, Linear + Gelu, Add + LayerNorm, and more.
To take advantage of these graph optimizations, make sure you have IPEX installed:
pip install intel_extension_for_pytorch
Set the --use_ipex and --jit_mode_eval flags in the [Trainer] class to enable JIT mode with the graph optimizations:
python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
--no_cuda \
--use_ipex \
--jit_mode_eval
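IPEX can also be applied outside of [Trainer]. The snippet below is only a sketch of the typical pattern: ipex.optimize wraps the model with IPEX's fused operators, and you would usually combine it with torch.jit.trace as shown above.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
model.eval()
model = ipex.optimize(model)  # apply IPEX operator fusion and other inference optimizations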
🤗 Optimum
Learn more details about using ORT with 🤗 Optimum in the Optimum Inference with ONNX Runtime guide. This section only provides a brief and simple example.
ONNX Runtime (ORT) is a model accelerator that runs inference on CPUs by default. ORT is supported by 🤗 Optimum which can be used in 🤗 Transformers, without making too many changes to your code. You only need to replace the 🤗 Transformers AutoClass with its equivalent [~optimum.onnxruntime.ORTModel] for the task you're solving, and load a checkpoint in the ONNX format.
For example, if you're running inference on a question answering task, load the optimum/roberta-base-squad2 checkpoint which contains a model.onnx file:
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForQuestionAnswering
model = ORTModelForQuestionAnswering.from_pretrained("optimum/roberta-base-squad2")
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
onnx_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
question = "What's my name?"
context = "My name is Philipp and I live in Nuremberg."
pred = onnx_qa(question, context)
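If the checkpoint you want hasn't been converted to ONNX yet, Optimum can export it on the fly. The export=True argument below reflects the Optimum API at the time of writing; double-check the Optimum documentation for your installed version:
from optimum.onnxruntime import ORTModelForQuestionAnswering

model = ORTModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2", export=True)
model.save_pretrained("roberta-base-squad2-onnx")  # saves the exported model.onnx for later reuse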
If you have an Intel CPU, take a look at 🤗 Optimum Intel which supports a variety of compression techniques (quantization, pruning, knowledge distillation) and tools for converting models to the OpenVINO format for higher performance inference.
BERTology
There is a growing field of study concerned with investigating the inner workings of large-scale transformers like BERT
(which some call "BERTology"). Some good examples of this field are:
BERT Rediscovers the Classical NLP Pipeline by Ian Tenney, Dipanjan Das, Ellie Pavlick:
https://arxiv.org/abs/1905.05950
Are Sixteen Heads Really Better than One? by Paul Michel, Omer Levy, Graham Neubig: https://arxiv.org/abs/1905.10650
What Does BERT Look At? An Analysis of BERT's Attention by Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D.
Manning: https://arxiv.org/abs/1906.04341
CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure: https://arxiv.org/abs/2210.04633
In order to help this new field develop, we have included a few additional features in the BERT/GPT/GPT-2 models to
help people access the inner representations, mainly adapted from the great work of Paul Michel
(https://arxiv.org/abs/1905.10650):
accessing all the hidden-states of BERT/GPT/GPT-2,
accessing all the attention weights for each head of BERT/GPT/GPT-2,
retrieving heads output values and gradients to be able to compute head importance score and prune head as explained
in https://arxiv.org/abs/1905.10650.
To help you understand and use these features, we have added a specific example script, bertology.py, which extracts information from and prunes a model pre-trained on
GLUE.
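A minimal sketch of these hooks is shown below, assuming a standard BERT checkpoint; the sentence and the pruned head indices are arbitrary:
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True, output_attentions=True)

inputs = tokenizer("BERTology studies the inner workings of BERT.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

hidden_states = outputs.hidden_states  # tuple: embedding output + one tensor per layer
attentions = outputs.attentions        # tuple: per-layer attention weights of shape (batch, heads, seq, seq)

# prune heads 0 and 2 in layer 0 and head 1 in layer 2 (arbitrary choices for illustration)
model.prune_heads({0: [0, 2], 2: [1]})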
🤗 Transformers Notebooks
You can find here a list of the official notebooks provided by Hugging Face.
Also, we would like to list here interesting content created by the community.
If you wrote some notebook(s) leveraging 🤗 Transformers and would like to be listed here, please open a
Pull Request so it can be included under the Community notebooks.
Hugging Face's notebooks 🤗
Documentation notebooks
You can open any page of the documentation as a notebook in Colab (there is a button directly on said pages) but they are also listed here if you need them:
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| Quicktour of the library | A presentation of the various APIs in Transformers || |
| Summary of the tasks | How to run the models of the Transformers library task by task || |
| Preprocessing data | How to use a tokenizer to preprocess your data || |
| Fine-tuning a pretrained model | How to use the Trainer to fine-tune a pretrained model || |
| Summary of the tokenizers | The differences between the tokenizer algorithms || |
| Multilingual models | How to use the multilingual models of the library || |
PyTorch Examples
Natural Language Processing[[pytorch-nlp]]
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| Train your tokenizer | How to train and use your very own tokenizer || |
| Train your language model | How to easily start using transformers || |
| How to fine-tune a model on text classification| Show how to preprocess the data and fine-tune a pretrained model on any GLUE task. | | |
| How to fine-tune a model on language modeling| Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task. | | |
| How to fine-tune a model on token classification| Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS). | | |
| How to fine-tune a model on question answering| Show how to preprocess the data and fine-tune a pretrained model on SQUAD. | | |
| How to fine-tune a model on multiple choice| Show how to preprocess the data and fine-tune a pretrained model on SWAG. | | |
| How to fine-tune a model on translation| Show how to preprocess the data and fine-tune a pretrained model on WMT. | | |
| How to fine-tune a model on summarization| Show how to preprocess the data and fine-tune a pretrained model on XSUM. | | |
| How to train a language model from scratch| Highlight all the steps to effectively train a Transformer model on custom data | | |
| How to generate text| How to use different decoding methods for language generation with transformers | | |
| How to generate text (with constraints)| How to guide language generation with user-provided constraints | | |
| Reformer| How Reformer pushes the limits of language modeling | | |
Computer Vision[[pytorch-cv]]
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| How to fine-tune a model on image classification (Torchvision) | Show how to preprocess the data using Torchvision and fine-tune any pretrained Vision model on Image Classification | | |
| How to fine-tune a model on image classification (Albumentations) | Show how to preprocess the data using Albumentations and fine-tune any pretrained Vision model on Image Classification | | |
| How to fine-tune a model on image classification (Kornia) | Show how to preprocess the data using Kornia and fine-tune any pretrained Vision model on Image Classification | | |
| How to perform zero-shot object detection with OWL-ViT | Show how to perform zero-shot object detection on images with text queries | | |
| How to fine-tune an image captioning model | Show how to fine-tune BLIP for image captioning on a custom dataset | | |
| How to build an image similarity system with Transformers | Show how to build an image similarity system | | |
| How to fine-tune a SegFormer model on semantic segmentation | Show how to preprocess the data and fine-tune a pretrained SegFormer model on Semantic Segmentation | | |
| How to fine-tune a VideoMAE model on video classification | Show how to preprocess the data and fine-tune a pretrained VideoMAE model on Video Classification | | |
Audio[[pytorch-audio]]
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| How to fine-tune a speech recognition model in English| Show how to preprocess the data and fine-tune a pretrained Speech model on TIMIT | | |
| How to fine-tune a speech recognition model in any language| Show how to preprocess the data and fine-tune a multi-lingually pretrained speech model on Common Voice | | |
| How to fine-tune a model on audio classification| Show how to preprocess the data and fine-tune a pretrained Speech model on Keyword Spotting | | |
Biological Sequences[[pytorch-bio]]
| Notebook | Description | | |
|:----------|:----------------------------------------------------------------------------------------|:-------------|------:|
| How to fine-tune a pre-trained protein model | See how to tokenize proteins and fine-tune a large pre-trained protein "language" model | | |
| How to generate protein folds | See how to go from protein sequence to a full protein model and PDB file | | |
| How to fine-tune a Nucleotide Transformer model | See how to tokenize DNA and fine-tune a large pre-trained DNA "language" model | | |
| Fine-tune a Nucleotide Transformer model with LoRA | Train even larger DNA models in a memory-efficient way | | |
Other modalities[[pytorch-other]]
| Notebook | Description | | |
|:----------|:----------------------------------------------------------------------------------------|:-------------|------:|
| Probabilistic Time Series Forecasting | See how to train Time Series Transformer on a custom dataset | | |
Utility notebooks[[pytorch-utility]]
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| How to export model to ONNX| Highlight how to export and run inference workloads through ONNX | | |
| How to use Benchmarks| How to benchmark models with transformers | | |
TensorFlow Examples
Natural Language Processing[[tensorflow-nlp]]
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| Train your tokenizer | How to train and use your very own tokenizer || |
| Train your language model | How to easily start using transformers || |
| How to fine-tune a model on text classification| Show how to preprocess the data and fine-tune a pretrained model on any GLUE task. | | |
| How to fine-tune a model on language modeling| Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task. | | |
| How to fine-tune a model on token classification| Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS). | | |
| How to fine-tune a model on question answering| Show how to preprocess the data and fine-tune a pretrained model on SQUAD. | | |
| How to fine-tune a model on multiple choice| Show how to preprocess the data and fine-tune a pretrained model on SWAG. | | |
| How to fine-tune a model on translation| Show how to preprocess the data and fine-tune a pretrained model on WMT. | | |
| How to fine-tune a model on summarization| Show how to preprocess the data and fine-tune a pretrained model on XSUM. | | |
Computer Vision[[tensorflow-cv]]
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| How to fine-tune a model on image classification | Show how to preprocess the data and fine-tune any pretrained Vision model on Image Classification | | |
| How to fine-tune a SegFormer model on semantic segmentation | Show how to preprocess the data and fine-tune a pretrained SegFormer model on Semantic Segmentation | | |
Biological Sequences[[tensorflow-bio]]
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| How to fine-tune a pre-trained protein model | See how to tokenize proteins and fine-tune a large pre-trained protein "language" model | | |
Utility notebooks[[tensorflow-utility]]
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| How to train TF/Keras models on TPU | See how to train at high speed on Google's TPU hardware | | |
Optimum notebooks
🤗 Optimum is an extension of 🤗 Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on targeted hardware.
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| How to quantize a model with ONNX Runtime for text classification| Show how to apply static and dynamic quantization on a model using ONNX Runtime for any GLUE task. | | |
| How to quantize a model with Intel Neural Compressor for text classification| Show how to apply static, dynamic and aware training quantization on a model using Intel Neural Compressor (INC) for any GLUE task. | | |
| How to fine-tune a model on text classification with ONNX Runtime| Show how to preprocess the data and fine-tune a model on any GLUE task using ONNX Runtime. | | |
| How to fine-tune a model on summarization with ONNX Runtime| Show how to preprocess the data and fine-tune a model on XSUM using ONNX Runtime. | | |
Community notebooks:
More notebooks developed by the community are available here.
Testing
Let's take a look at how 🤗 Transformers models are tested and how you can write new tests and improve the existing ones.
There are 2 test suites in the repository:
tests -- tests for the general API
examples -- tests primarily for various applications that aren't part of the API
How transformers are tested
Once a PR is submitted it gets tested with 9 CircleCI jobs. Every new commit to that PR gets retested. These jobs
are defined in this config file, so that if needed you can reproduce the same
environment on your machine.
These CI jobs don't run @slow tests.
There are 3 jobs run by GitHub Actions:
torch hub integration: checks whether torch hub
integration works.
self-hosted (push): runs fast tests on GPU only on commits on
main. It only runs if a commit on main has updated the code in one of the following folders: src,
tests, .github (to prevent running on added model cards, notebooks, etc.)
self-hosted runner: runs normal and slow tests on GPU in
tests and examples:
RUN_SLOW=1 pytest tests/
RUN_SLOW=1 pytest examples/
The results can be observed here.
Running tests
Choosing which tests to run
This document goes into many details of how tests can be run. If after reading everything, you need even more details
you will find them here.
Here are some of the most useful ways of running tests.
Run all:
pytest
or:
make test
Note that the latter is defined as:
python -m pytest -n auto --dist=loadfile -s -v ./tests/
which tells pytest to:
run as many test processes as there are CPU cores (which could be too many if you don't have a ton of RAM!)
ensure that all tests from the same file will be run by the same test process
do not capture output
run in verbose mode
Getting the list of all tests
All tests of the test suite:
pytest --collect-only -q
All tests of a given test file:
pytest tests/test_optimization.py --collect-only -q
Run a specific test module
To run an individual test module:
pytest tests/utils/test_logging.py
Run specific tests
Since unittest is used inside most of the tests, to run specific subtests you need to know the name of the unittest
class containing those tests. For example, it could be:
pytest tests/test_optimization.py::OptimizationTest::test_adam_w
Here:
tests/test_optimization.py - the file with tests
OptimizationTest - the name of the class
test_adam_w - the name of the specific test function
If the file contains multiple classes, you can choose to run only tests of a given class. For example:
pytest tests/test_optimization.py::OptimizationTest
will run all the tests inside that class.
As mentioned earlier you can see what tests are contained inside the OptimizationTest class by running:
pytest tests/test_optimization.py::OptimizationTest --collect-only -q
You can run tests by keyword expressions.
To run only tests whose name contains adam:
pytest -k adam tests/test_optimization.py
Logical and and or can be used to indicate whether all keywords should match or either. not can be used to
negate.
To run all tests except those whose name contains adam:
pytest -k "not adam" tests/test_optimization.py
And you can combine the two patterns in one:
pytest -k "ada and not adam" tests/test_optimization.py
For example to run both test_adafactor and test_adam_w you can use:
pytest -k "test_adam_w or test_adam_w" tests/test_optimization.py
Note that we use or here, since we want either of the keywords to match to include both.
If you want to include only tests that include both patterns, and is to be used:
pytest -k "test and ada" tests/test_optimization.py
Run accelerate tests
Sometimes you need to run accelerate tests on your models. For that you can just add -m accelerate_tests to your command. For example, to run these tests on OPT:
RUN_SLOW=1 pytest -m accelerate_tests tests/models/opt/test_modeling_opt.py
Run documentation tests
In order to test whether the documentation examples are correct, you should check that the doctests are passing.
As an example, let's use WhisperModel.forward's docstring:
python
r"""
Returns:
Example:
python
>>> import torch
>>> from transformers import WhisperModel, WhisperFeatureExtractor
>>> from datasets import load_dataset
>>> model = WhisperModel.from_pretrained("openai/whisper-base")
>>> feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-base")
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt")
>>> input_features = inputs.input_features
>>> decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id
>>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state
>>> list(last_hidden_state.shape)
[1, 2, 512]
```"""
Just run the following line to automatically test every docstring example in the desired file:
pytest --doctest-modules <path_to_file_or_dir>
If the file has a markdown extension, you should add the --doctest-glob="*.md" argument.
Run only modified tests
You can run the tests related to the unstaged files or the current branch (according to Git) by using pytest-picked. This is a great way of quickly testing that your changes didn't break
anything, since it won't run the tests related to files you didn't touch.
pip install pytest-picked
pytest --picked
All tests will be run from files and folders which are modified, but not yet committed.
Automatically rerun failed tests on source modification
pytest-xdist provides a very useful feature of detecting all failed
tests, and then waiting for you to modify files and continuously re-running those failing tests until they pass while you
fix them, so you don't need to restart pytest after you make the fix. This is repeated until all tests pass, after
which again a full run is performed.
pip install pytest-xdist
To enter the mode: pytest -f or pytest --looponfail
File changes are detected by looking at looponfailroots root directories and all of their contents (recursively).
If the default for this value does not work for you, you can change it in your project by setting a configuration
option in setup.cfg:
ini
[tool:pytest]
looponfailroots = transformers tests
or pytest.ini/tox.ini files:
ini
[pytest]
looponfailroots = transformers tests
This would lead to only looking for file changes in the respective directories, specified relative to the ini-file's
directory.
pytest-watch is an alternative implementation of this functionality.
Skip a test module
If you want to run all test modules, except a few you can exclude them by giving an explicit list of tests to run. For
example, to run all except test_modeling_*.py tests:
pytest `ls -1 tests/*py | grep -v test_modeling`
Clearing state
On CI builds, and when isolation is important (at the expense of speed), the cache should be cleared:
pytest --cache-clear tests
Running tests in parallel
As mentioned earlier make test runs tests in parallel via pytest-xdist plugin (-n X argument, e.g. -n 2
to run 2 parallel jobs).
pytest-xdist's --dist= option allows one to control how the tests are grouped. --dist=loadfile puts the
tests located in one file onto the same process.
Since the order of executed tests is different and unpredictable, if running the test suite with pytest-xdist
produces failures (meaning we have some undetected coupled tests), use pytest-replay to replay the tests in the same order, which should help to then
reduce that failing sequence to a minimum.
Test order and repetition
It's good to repeat the tests several times, in sequence, randomly, or in sets, to detect any potential
inter-dependency and state-related bugs (tear down). And straightforward multiple repetition is also good for detecting
problems that get uncovered by the randomness of DL.
Repeat tests
pytest-flakefinder:
pip install pytest-flakefinder
And then run every test multiple times (50 by default):
pytest --flake-finder --flake-runs=5 tests/test_failing_test.py
This plugin doesn't work with the -n flag from pytest-xdist.
There is another plugin pytest-repeat, but it doesn't work with unittest.
Run tests in a random order
pip install pytest-random-order
Important: the presence of pytest-random-order will automatically randomize tests, no configuration change or
command line options are required.
As explained earlier this allows detection of coupled tests - where one test's state affects the state of another. When
pytest-random-order is installed it will print the random seed it used for that session, e.g.:
pytest tests
[]
Using --random-order-bucket=module
Using --random-order-seed=573663
So that if the given particular sequence fails, you can reproduce it by adding that exact seed, e.g.:
pytest --random-order-seed=573663
[]
Using --random-order-bucket=module
Using --random-order-seed=573663
It will only reproduce the exact order if you use the exact same list of tests (or no list at all). Once you start to
manually narrowing down the list you can no longer rely on the seed, but have to list them manually in the exact order
they failed and tell pytest to not randomize them instead using --random-order-bucket=none, e.g.:
pytest --random-order-bucket=none tests/test_a.py tests/test_c.py tests/test_b.py
To disable the shuffling for all tests:
pytest --random-order-bucket=none
By default --random-order-bucket=module is implied, which will shuffle the files on the module levels. It can also
shuffle on class, package, global and none levels. For the complete details please see its
documentation.
Another randomization alternative is: pytest-randomly. This
module has a very similar functionality/interface, but it doesn't have the bucket modes available in
pytest-random-order. It has the same problem of imposing itself once installed.
Look and feel variations
pytest-sugar
pytest-sugar is a plugin that improves the look-and-feel, adds a
progress bar, and shows failing tests and asserts instantly. It gets activated automatically upon installation.
pip install pytest-sugar
To run tests without it, run:
pytest -p no:sugar
or uninstall it.
Report each sub-test name and its progress
For a single or a group of tests via pytest (after pip install pytest-pspec):
pytest --pspec tests/test_optimization.py
Instantly shows failed tests
pytest-instafail shows failures and errors instantly instead of
waiting until the end of test session.
pip install pytest-instafail
pytest --instafail
To GPU or not to GPU
On a GPU-enabled setup, to test in CPU-only mode add CUDA_VISIBLE_DEVICES="":
CUDA_VISIBLE_DEVICES="" pytest tests/utils/test_logging.py
or if you have multiple GPUs, you can specify which one pytest should use. For example, to use only the
second GPU if you have GPUs 0 and 1, you can run:
CUDA_VISIBLE_DEVICES="1" pytest tests/utils/test_logging.py
This is handy when you want to run different tasks on different GPUs.
Some tests must be run on CPU only, others on either CPU or GPU or TPU, yet others on multiple GPUs. The following skip
decorators are used to set the requirements of tests CPU/GPU/TPU-wise:
require_torch - this test will run only under torch
require_torch_gpu - as require_torch plus requires at least 1 GPU
require_torch_multi_gpu - as require_torch plus requires at least 2 GPUs
require_torch_non_multi_gpu - as require_torch plus requires 0 or 1 GPUs
require_torch_up_to_2_gpus - as require_torch plus requires 0 or 1 or 2 GPUs
require_torch_tpu - as require_torch plus requires at least 1 TPU
Let's depict the GPU requirements in the following table:
| n gpus | decorator |
|--------+--------------------------------|
| >= 0 | @require_torch |
| >= 1 | @require_torch_gpu |
| >= 2 | @require_torch_multi_gpu |
| < 2 | @require_torch_non_multi_gpu |
| < 3 | @require_torch_up_to_2_gpus |
For example, here is a test that must be run only when there are 2 or more GPUs available and pytorch is installed:
python no-style
@require_torch_multi_gpu
def test_example_with_multi_gpu():
If a test requires tensorflow use the require_tf decorator. For example:
python no-style
@require_tf
def test_tf_thing_with_tensorflow():
These decorators can be stacked. For example, if a test is slow and requires at least one GPU under pytorch, here is
how to set it up:
python no-style
@require_torch_gpu
@slow
def test_example_slow_on_gpu():
Some decorators like @parameterized rewrite test names, therefore @require_* skip decorators have to be listed
last for them to work correctly. Here is an example of the correct usage:
python no-style
@parameterized.expand()
@require_torch_multi_gpu
def test_integration_foo():
This order problem doesn't exist with @pytest.mark.parametrize, you can put it first or last and it will still
work. But it only works with non-unittests.
Inside tests:
How many GPUs are available:
python
from transformers.testing_utils import get_gpu_count
n_gpu = get_gpu_count() # works with torch and tf
Testing with a specific PyTorch backend or device
To run the test suite on a specific torch device add TRANSFORMERS_TEST_DEVICE="$device" where $device is the target backend. For example, to test on CPU only:
TRANSFORMERS_TEST_DEVICE="cpu" pytest tests/utils/test_logging.py
This variable is useful for testing custom or less common PyTorch backends such as mps. It can also be used to achieve the same effect as CUDA_VISIBLE_DEVICES by targeting specific GPUs or testing in CPU-only mode.
Certain devices will require an additional import after importing torch for the first time. This can be specified using the environment variable TRANSFORMERS_TEST_BACKEND:
TRANSFORMERS_TEST_BACKEND="torch_npu" pytest tests/utils/test_logging.py
Alternative backends may also require the replacement of device-specific functions. For example torch.cuda.manual_seed may need to be replaced with a device-specific seed setter like torch.npu.manual_seed to correctly set a random seed on the device. To specify a new backend with backend-specific device functions when running the test suite, create a Python device specification file in the format:
import torch
import torch_npu
# !! Further additional imports can be added here !!
# Specify the device name (e.g. 'cuda', 'cpu', 'npu')
DEVICE_NAME = 'npu'
# Specify device-specific backends to dispatch to.
# If not specified, will fallback to 'default' in 'testing_utils.py'
MANUAL_SEED_FN = torch.npu.manual_seed
EMPTY_CACHE_FN = torch.npu.empty_cache
DEVICE_COUNT_FN = torch.npu.device_count
This format also allows for specification of any additional imports required. To use this file to replace equivalent methods in the test suite, set the environment variable TRANSFORMERS_TEST_DEVICE_SPEC to the path of the spec file.
Currently, only MANUAL_SEED_FN, EMPTY_CACHE_FN and DEVICE_COUNT_FN are supported for device-specific dispatch.
Distributed training
pytest can't deal with distributed training directly. If this is attempted - the sub-processes don't do the right
thing and end up thinking they are pytest and start running the test suite in loops. It works, however, if one
spawns a normal process that then spawns off multiple workers and manages the IO pipes.
Here are some tests that use it:
test_trainer_distributed.py
test_deepspeed.py
To jump right into the execution point, search for the execute_subprocess_async call in those tests.
You will need at least 2 GPUs to see these tests in action:
CUDA_VISIBLE_DEVICES=0,1 RUN_SLOW=1 pytest -sv tests/test_trainer_distributed.py
Output capture
During test execution any output sent to stdout and stderr is captured. If a test or a setup method fails, its
captured output will usually be shown along with the failure traceback.
To disable output capturing and to get the stdout and stderr normally, use -s or --capture=no:
pytest -s tests/utils/test_logging.py
To send test results to JUnit format output:
py.test tests --junitxml=result.xml
Color control
To have no color (e.g., yellow on white background is not readable):
pytest --color=no tests/utils/test_logging.py
Sending test report to online pastebin service
Creating a URL for each test failure:
pytest --pastebin=failed tests/utils/test_logging.py
This will submit test run information to a remote Paste service and provide a URL for each failure. You may select
tests as usual or add for example -x if you only want to send one particular failure.
Creating a URL for a whole test session log:
pytest --pastebin=all tests/utils/test_logging.py
Writing tests
🤗 transformers tests are based on unittest, but run by pytest, so most of the time features from both systems
can be used.
You can read here which features are supported, but the important
thing to remember is that most pytest fixtures don't work. Neither does parametrization, but we use the module
parameterized, which works in a similar way.
Parametrization
Often, there is a need to run the same test multiple times, but with different arguments. It could be done from within
the test, but then there is no way of running that test for just one set of arguments.
python
# test_this1.py
import math
import unittest
from parameterized import parameterized
class TestMathUnitTest(unittest.TestCase):
@parameterized.expand(
[
("negative", -1.5, -2.0),
("integer", 1, 1.0),
("large fraction", 1.6, 1),
]
)
def test_floor(self, name, input, expected):
self.assertEqual(math.floor(input), expected)
Now, by default this test will be run 3 times, each time with the last 3 arguments of test_floor being assigned the
corresponding arguments in the parameter list.
And you could run just the negative and integer sets of params with:
pytest -k "negative and integer" tests/test_mytest.py
or all but negative sub-tests, with:
pytest -k "not negative" tests/test_mytest.py
Besides using the -k filter that was just mentioned, you can find out the exact name of each sub-test and run any
or all of them using their exact names.
pytest test_this1.py --collect-only -q
and it will list:
test_this1.py::TestMathUnitTest::test_floor_0_negative
test_this1.py::TestMathUnitTest::test_floor_1_integer
test_this1.py::TestMathUnitTest::test_floor_2_large_fraction
So now you can run just 2 specific sub-tests:
pytest test_this1.py::TestMathUnitTest::test_floor_0_negative test_this1.py::TestMathUnitTest::test_floor_1_integer
The module parameterized which is already in the developer dependencies
of transformers works for both: unittests and pytest tests.
If, however, the test is not a unittest, you may use pytest.mark.parametrize (or you may see it being used in
some existing tests, mostly under examples).
Here is the same example, this time using pytest's parametrize marker:
python
# test_this2.py
import math
import pytest
@pytest.mark.parametrize(
"name, input, expected",
[
("negative", -1.5, -2.0),
("integer", 1, 1.0),
("large fraction", 1.6, 1),
],
)
def test_floor(name, input, expected):
assert math.floor(input) == expected
Same as with parameterized, with pytest.mark.parametrize you can have fine control over which sub-tests are
run, if the -k filter doesn't do the job. Except, this parametrization function creates a slightly different set of
names for the sub-tests. Here is what they look like:
pytest test_this2.py --collect-only -q
and it will list:
test_this2.py::test_floor[integer-1-1.0]
test_this2.py::test_floor[negative--1.5--2.0]
test_this2.py::test_floor[large fraction-1.6-1]
So now you can run just the specific test:
pytest test_this2.py::test_floor[negative--1.5--2.0] test_this2.py::test_floor[integer-1-1.0]
as in the previous example.
Files and directories
In tests often we need to know where things are relative to the current test file, and it's not trivial since the test
could be invoked from more than one directory or could reside in sub-directories with different depths. A helper class
transformers.testing_utils.TestCasePlus solves this problem by sorting out all the basic paths and provides easy
accessors to them:
pathlib objects (all fully resolved):
test_file_path - the current test file path, i.e. __file__
test_file_dir - the directory containing the current test file
tests_dir - the directory of the tests test suite
examples_dir - the directory of the examples test suite
repo_root_dir - the directory of the repository
src_dir - the directory of src (i.e. where the transformers sub-dir resides)
stringified paths - same as above, but these return paths as strings rather than pathlib objects:
test_file_path_str
test_file_dir_str
tests_dir_str
examples_dir_str
repo_root_dir_str
src_dir_str
To start using those all you need is to make sure that the test resides in a subclass of
transformers.testing_utils.TestCasePlus. For example:
python
from transformers.testing_utils import TestCasePlus
class PathExampleTest(TestCasePlus):
def test_something_involving_local_locations(self):
data_dir = self.tests_dir / "fixtures/tests_samples/wmt_en_ro"
If you don't need to manipulate paths via pathlib or you just need a path as a string, you can always invoke
str() on the pathlib object or use the accessors ending with _str. For example:
python
from transformers.testing_utils import TestCasePlus
class PathExampleTest(TestCasePlus):
def test_something_involving_stringified_locations(self):
examples_dir = self.examples_dir_str
Temporary files and directories
Using unique temporary files and directories is essential for parallel test running, so that the tests won't overwrite
each other's data. We also want the temporary files and directories removed at the end of each test that created
them. Therefore, using packages like tempfile, which address these needs, is essential.
However, when debugging tests, you need to be able to see what goes into the temporary file or directory and you want
to know its exact path and not have it randomized on every test re-run.
A helper class transformers.testing_utils.TestCasePlus is best used for such purposes. It's a sub-class of
unittest.TestCase, so we can easily inherit from it in the test modules.
Here is an example of its usage:
python
from transformers.testing_utils import TestCasePlus
class ExamplesTests(TestCasePlus):
def test_whatever(self):
tmp_dir = self.get_auto_remove_tmp_dir()
This code creates a unique temporary directory, and sets tmp_dir to its location.
Create a unique temporary dir:
python
def test_whatever(self):
tmp_dir = self.get_auto_remove_tmp_dir()
tmp_dir will contain the path to the created temporary dir. It will be automatically removed at the end of the
test.
Create a temporary dir of my choice, ensure it's empty before the test starts and don't empty it after the test.
python
def test_whatever(self):
tmp_dir = self.get_auto_remove_tmp_dir("./xxx")
This is useful for debug when you want to monitor a specific directory and want to make sure the previous tests didn't
leave any data in there.
You can override the default behavior by directly overriding the before and after args, leading to one of the
following behaviors (see the sketch after this list):
before=True: the temporary dir will always be cleared at the beginning of the test.
before=False: if the temporary dir already existed, any existing files will remain there.
after=True: the temporary dir will always be deleted at the end of the test.
after=False: the temporary dir will always be left intact at the end of the test.
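For instance, a minimal sketch combining these args (the directory name is just an example):
python
def test_whatever(self):
    # wipe ./xxx before the test, but keep it afterwards for inspection
    tmp_dir = self.get_auto_remove_tmp_dir("./xxx", before=True, after=False)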
In order to run the equivalent of rm -r safely, only subdirs of the project repository checkout are allowed if
an explicit tmp_dir is used, so that by mistake no /tmp or similar important part of the filesystem will
get nuked. i.e. please always pass paths that start with ./.
Each test can register multiple temporary directories and they all will get auto-removed, unless requested
otherwise.
Temporary sys.path override
If you need to temporarily override sys.path to import from another test for example, you can use the
ExtendSysPath context manager. Example:
python
import os
from transformers.testing_utils import ExtendSysPath
bindir = os.path.abspath(os.path.dirname(__file__))
with ExtendSysPath(f"{bindir}/.."):
from test_trainer import TrainerIntegrationCommon # noqa
Skipping tests
This is useful when a bug is found and a new test is written, yet the bug is not fixed yet. In order to be able to
commit it to the main repository we need to make sure it's skipped during make test.
Methods:
A skip means that you expect your test to pass only if some conditions are met, otherwise pytest should skip
running the test altogether. Common examples are skipping windows-only tests on non-windows platforms, or skipping
tests that depend on an external resource which is not available at the moment (for example a database).
A xfail means that you expect a test to fail for some reason. A common example is a test for a feature not yet
implemented, or a bug not yet fixed. When a test passes despite being expected to fail (marked with
pytest.mark.xfail), it’s an xpass and will be reported in the test summary.
One of the important differences between the two is that skip doesn't run the test, and xfail does. So if the
code that's buggy causes some bad state that will affect other tests, do not use xfail.
Implementation
Here is how to skip a whole test unconditionally:
python no-style
@unittest.skip("this bug needs to be fixed")
def test_feature_x():
or via pytest:
python no-style
@pytest.mark.skip(reason="this bug needs to be fixed")
or the xfail way:
python no-style
@pytest.mark.xfail
def test_feature_x():
Here's how to skip a test based on internal checks within the test:
python
def test_feature_x():
if not has_something():
pytest.skip("unsupported configuration")
or the whole module:
python
import pytest
if not pytest.config.getoption("--custom-flag"):
pytest.skip("--custom-flag is missing, skipping tests", allow_module_level=True) |
or the xfail way:
python
def test_feature_x():
pytest.xfail("expected to fail until bug XYZ is fixed")
Here is how to skip all tests in a module if some import is missing:
python
docutils = pytest.importorskip("docutils", minversion="0.3")
Skip a test based on a condition:
python no-style
@pytest.mark.skipif(sys.version_info < (3,6), reason="requires python3.6 or higher")
def test_feature_x():
or:
python no-style
@unittest.skipIf(torch_device == "cpu", "Can't do half precision")
def test_feature_x():
or skip the whole module:
python no-style
@pytest.mark.skipif(sys.platform == 'win32', reason="does not run on windows")
class TestClass():
def test_feature_x(self):
More details, examples and ways are here.
Slow tests
The library of tests is ever-growing, and some of the tests take minutes to run, therefore we can't afford waiting for
an hour for the test suite to complete on CI. Therefore, with some exceptions for essential tests, slow tests should be
marked as in the example below:
python no-style
from transformers.testing_utils import slow
@slow
def test_integration_foo():
Once a test is marked as @slow, to run such tests set RUN_SLOW=1 env var, e.g.:
RUN_SLOW=1 pytest tests
Some decorators like @parameterized rewrite test names, therefore @slow and the rest of the skip decorators
@require_* have to be listed last for them to work correctly. Here is an example of the correct usage:
python no-style
@parameterized.expand()
@slow
def test_integration_foo():
As explained at the beginning of this document, slow tests get to run on a scheduled basis, rather than in PR CI
checks. So it's possible that some problems will be missed during a PR submission and get merged. Such problems will
get caught during the next scheduled CI job. But it also means that it's important to run the slow tests on your
machine before submitting the PR.
Here is a rough decision making mechanism for choosing which tests should be marked as slow:
If the test is focused on one of the library's internal components (e.g., modeling files, tokenization files,
pipelines), then we should run that test in the non-slow test suite. If it's focused on another aspect of the library,
such as the documentation or the examples, then we should run these tests in the slow test suite. And then, to refine
this approach we should have exceptions:
All tests that need to download a heavy set of weights or a dataset that is larger than ~50MB (e.g., model or
tokenizer integration tests, pipeline integration tests) should be set to slow. If you're adding a new model, you
should create and upload to the hub a tiny version of it (with random weights) for integration tests. This is
discussed in the following paragraphs.
All tests that need to do a training not specifically optimized to be fast should be set to slow.
We can introduce exceptions if some of these should-be-non-slow tests are excruciatingly slow, and set them to
@slow. Auto-modeling tests, which save and load large files to disk, are a good example of tests that are marked
as @slow.
If a test completes under 1 second on CI (including downloads if any) then it should be a normal test regardless.
Collectively, all the non-slow tests need to cover entirely the different internals, while remaining fast. For example,
a significant coverage can be achieved by testing with specially created tiny models with random weights. Such models
have the very minimal number of layers (e.g., 2), vocab size (e.g., 1000), etc. Then the @slow tests can use large
slow models to do qualitative testing. To see the use of these simply look for tiny models with:
grep tiny tests examples
Here is an example of a script that created the tiny model
stas/tiny-wmt19-en-de. You can easily adjust it to your specific
model's architecture.
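As a rough illustration (not the actual script linked above), a tiny model with random weights can be put together like this; the config values are arbitrary and should be adapted to your architecture:
python
from transformers import BertConfig, BertModel

# deliberately tiny: 2 layers, small hidden size, reduced vocab
config = BertConfig(
    vocab_size=1000,
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
)
tiny_model = BertModel(config)
tiny_model.save_pretrained("tiny-bert-for-tests")  # upload to the Hub to use in integration tests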
It's easy to measure the run-time incorrectly if for example there is an overhead of downloading a huge model, but if
you test it locally the downloaded files would be cached and thus the download time not measured. Hence check the
execution speed report in CI logs instead (the output of pytest --durations=0 tests).
That report is also useful to find slow outliers that aren't marked as such, or which need to be re-written to be fast.
If you notice that the test suite starts getting slow on CI, the top listing of this report will show the slowest
tests.
Testing the stdout/stderr output
In order to test functions that write to stdout and/or stderr, the test can access those streams using
pytest's capsys system. Here is how this is accomplished:
python
import sys
def print_to_stdout(s):
print(s)
def print_to_stderr(s):
sys.stderr.write(s)
def test_result_and_stdout(capsys):
msg = "Hello"
print_to_stdout(msg)
print_to_stderr(msg)
out, err = capsys.readouterr() # consume the captured output streams
# optional: if you want to replay the consumed streams:
sys.stdout.write(out)
sys.stderr.write(err)
# test:
assert msg in out
assert msg in err
And, of course, most of the time, stderr will come as a part of an exception, so try/except has to be used in such
a case:
python
def raise_exception(msg):
raise ValueError(msg)
def test_something_exception():
msg = "Not a good value"
error = ""
try:
raise_exception(msg)
except Exception as e:
error = str(e)
assert msg in error, f"{msg} is in the exception:\n{error}"
Another approach to capturing stdout is via contextlib.redirect_stdout:
python
import sys
from io import StringIO
from contextlib import redirect_stdout
def print_to_stdout(s):
print(s)
def test_result_and_stdout():
msg = "Hello"
buffer = StringIO()
with redirect_stdout(buffer):
print_to_stdout(msg)
out = buffer.getvalue()
# optional: if you want to replay the consumed streams:
sys.stdout.write(out)
# test:
assert msg in out
An important potential issue with capturing stdout is that it may contain \r characters that in normal print
reset everything that has been printed so far. There is no problem with pytest, but with pytest -s these
characters get included in the buffer, so to be able to have the test run with and without -s, you have to make an
extra cleanup to the captured output, using re.sub(r'~.*\r', '', buf, 0, re.M).
But, then we have a helper context manager wrapper to automatically take care of it all, regardless of whether it has
some \r's in it or not, so it's a simple:
python
from transformers.testing_utils import CaptureStdout
with CaptureStdout() as cs:
function_that_writes_to_stdout()
print(cs.out)