layout: blog_detail
title: "PyTorch Trace Analysis for the Masses"
author: Anupam Bhatnagar, Xizhou Feng, Brian Coutinho, Yifan Liu, Sung-Han Lin, Louis Feng, and Yuzhen Huang
We are excited to announce the public release of Holistic Trace Analysis (HTA), an open source performance analysis and visualization Python library for PyTorch users. HTA takes as input Kineto traces collected by the PyTorch profiler, which are complex and challenging to interpret, and up-levels the performance information contained in these traces. It was initially developed internally at Meta to understand and debug performance problems for large-scale distributed training jobs on GPUs. The multidisciplinary team has made a number of enhancements to HTA’s features and scaled them to support state-of-the-art ML workloads.
ML researchers and systems engineers often struggle to computationally scale up their models because they are not aware of the performance bottlenecks in their workloads. The resources requested for a job (e.g. GPUs, memory) are often misaligned with the resources actually required due to lack of visibility “under the hood”. To achieve the best performance from the hardware stack, it is imperative to understand the resource utilization and bottlenecks for distributed training workloads.
The initial HTA implementation was specifically targeted at Deep Learning Based Recommendation Models (DLRM). To make the features in HTA generic and applicable to use cases such as analyzing Vision and NLP models, we decided to refactor the HTA codebase and make the library available to the larger community. This new codebase has implemented several important ideas which lead to significant efficiency and performance improvements.
In this blog, we present several features implemented in the open source version of HTA, which can be used as a Python script as well as interactively in a Jupyter notebook. HTA provides the following features:
Breakdown by Dimensions
Temporal: Breakdown of GPU time in terms of time spent in computation, communication, memory events, and idle time on a single node and across all ranks.
Idle Time: Breakdown of GPU idle time into waiting for the host, waiting for another kernel or attributed to an unknown cause.
Kernel: Find kernels with the longest duration on each rank.
Communication Computation Overlap: Calculate the percentage of time when communication overlaps computation.
Statistical Analysis
Kernel Duration Distribution: Distribution of average time taken by longest kernels across different ranks.
CUDA Kernel Launch: Distributions of GPU kernels with very small duration, large duration, and excessive launch time.
Augmented Counters (Memory bandwidth, Queue length): Augmented trace files which provide insights into memory copy bandwidth and number of outstanding operations on each CUDA stream.
Patterns
Frequent CUDA Kernels: Find the CUDA kernels most frequently launched by any given PyTorch or user defined operator.
Trace Comparison
Trace Diff: A trace comparison tool to identify and visualize the differences between traces.
HTA source code is available to users via Github. Users can request new features or build their own analysis using the core libraries and data structures provided in the codebase in addition to the features mentioned above.
GPU Training Performance Debugging 101
To understand the GPU performance in distributed training jobs, we consider how the model operators interact with the GPU devices and how such interactions are reflected in certain measurable metrics.
At a high level, we can break down the GPU operations in a model execution into three broad categories, henceforth referred to as kernel types:
1. Computation (COMP) - Compute kernels execute compiled routines for matrix multiplication and similar numeric calculations. They are responsible for all of the number-crunching necessary for model execution.
2. Communication (COMM) - Communication kernels are routines which are responsible for exchanging and synchronizing data between different GPU devices in a distributed training job. The NVIDIA Collective Communication Library (NCCL) is a widely used communication library and all its kernels have the prefix “nccl”. Example NCCL kernels include NCCL_AllGather, NCCL_ReduceScatter, NCCL_AllReduce, etc.
3. Memory (MEM) - Memory kernels manage the memory allocations/deallocations on the GPU devices and data movement between the memory space on the host and the GPUs. The memory kernels include Memcpy_H2D, Memcpy_D2H, Memcpy_D2D, Memset, etc. Here, H represents the Host and D represents the GPU Device. Thus, H2D, D2H, D2D stand for Host to Device, Device to Host and Device to Device respectively.
Because a modern GPU device like the NVIDIA A100 GPU is a massively parallel device which is capable of running multiple kernels simultaneously, it is possible to overlap the computation, communication, and memory kernels to reduce the model execution time. One common technique to achieve the overlap is to utilize multiple CUDA streams. A CUDA stream is a sequence of operations that execute on a GPU device in the order in which they are issued by the host code. Different CUDA streams can be interleaved and even run concurrently, thus achieving the effect of kernel overlap.
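To make the stream idea concrete, here is a minimal sketch (not from the original post; the tensor sizes and variable names are illustrative) that issues a host-to-device copy on a side stream while a compute kernel runs on the default stream:

```python
import torch

if torch.cuda.is_available():
    copy_stream = torch.cuda.Stream()

    x_cpu = torch.randn(1 << 20, pin_memory=True)   # pinned host memory enables async H2D copies
    y_gpu = torch.randn(1 << 20, device="cuda")

    with torch.cuda.stream(copy_stream):
        # A Memcpy_H2D kernel issued on the side stream
        x_gpu = x_cpu.to("cuda", non_blocking=True)

    # A compute kernel issued on the default stream; it can overlap with the copy above
    z = y_gpu * y_gpu

    # Make the default stream wait for the copy before consuming x_gpu
    torch.cuda.current_stream().wait_stream(copy_stream)
    out = x_gpu + z
    torch.cuda.synchronize()
```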
To help understand the above concepts, Figure 1 provides a timeline of the GPU kernels in a sample distributed training job on 8 GPUs for one iteration. In the figure below, each rank represents one GPU and the kernels on each GPU run on 6 CUDA streams. In the right column of the figure, you can see the names of the GPU kernels used. In the middle of the figure, you can see the overlap between compute and communication kernels. This figure is created using the plot_timeline example notebook available in HTA.
Figure 1. An example of the execution timeline of GPU Kernels across multiple ranks
The performance of multiple GPU training jobs is affected by multiple factors. Among these factors, how a model execution creates and orchestrates the GPU kernels plays a critical role. HTA provides insights on how the model execution interacts with the GPU devices and highlights the opportunities for performance improvement.
With the features we built in HTA, we aim to provide users insights into “what is happening under the hood in a distributed GPU training?” We briefly describe these features in the next few paragraphs.
Features in Holistic Trace Analysis
For most users, understanding the performance of GPU training jobs is nontrivial. Thus, we built this library to simplify the task of trace analysis and provide the user useful insights by examining the model execution traces. As the first step, we developed features which are important and generic enough so that most users can benefit from this library.
Temporal Breakdown: We begin by asking whether the GPU is spending time on computation, communication, memory events, or is it idle? To answer this question, the temporal breakdown feature presents a breakdown in terms of these categories. To achieve high training efficiency the code should maximize time used by computation kernels and minimize idle time and non-compute time (time used by communication or memory kernels). This is accomplished by implementing concurrent execution of computation kernels with communication or memory kernels. Note that, during concurrent execution of computation kernels with communication/memory kernels the time spent by communication/memory kernels is accounted for under compute time.
Figure 2: Temporal Breakdown across 8 GPUs
Kernel Breakdown: It is natural to ask which kernels are taking the most amount of time. The next feature breaks down the time spent within each kernel type (COMM, COMP, MEM) and sorts them by duration. We present this information for each kernel type and for each rank as a pie chart. See figure 3 below.
Figure 3: Pie chart of top computation and communication kernels
Kernel Duration Distribution: Subsequently, one can also ask - for any given kernel, what is the distribution of the time spent across the ranks? To answer this, HTA generates bar graphs for the average duration of a given kernel across all ranks. Additionally, the error bars in the bar graphs show the minimum and maximum amount of time taken by a given kernel on a given rank. Figure 4 below shows a discrepancy between average duration on rank 0 as compared to other ranks. This anomalous behavior on rank 0 guides the user on where to look for possible bugs.
Figure 4: Average duration of NCCL AllReduce Kernel across 8 ranks
Communication Computation Overlap: In distributed training, a significant amount of time is spent in communication and synchronization events among multiple GPU devices. To achieve high GPU efficiency (i.e. TFLOPS/GPU) it is vital to keep the GPU doing actual computation work. In other words, a GPU should not be blocked because of waiting for data from other GPUs. One way to measure the extent to which computation is blocked by data dependencies is to calculate the computation-communication overlap. Higher GPU efficiency is observed if communication events overlap computation events. Lack of communication and computation overlap will lead to the GPU being idle, thus the efficiency would be low. Thus, the communication computation overlap feature calculates the percentage of time communication and computation overlap in a job for each rank and generates a bar graph representation. See figure below. More precisely, we measure the following ratio
(time spent in computation while communicating) / (time spent in communication)
Figure 5: Communication computation overlap
Augmented Counters (Queue length, Memory bandwidth): To aid in debugging, HTA calculates the memory bandwidth statistics for D2H, H2D and D2D memory copy (memcpy) and memory set (memset) events. Additionally, HTA also computes the number of outstanding CUDA operations on each CUDA stream. We refer to this as queue length. When the queue length on a stream is 1024 or larger new events cannot be scheduled on that stream and the CPU will stall until the GPU events have processed. Additionally, HTA generates a new trace file containing tracks with the memory bandwidth and queue length time series. See Figure 6 below.
Figure 6: Memory Bandwidth and Queue Length
These primary features give us a peek into the system performance and help answer “what is happening in the system?”. As HTA evolves, we hope to address “why is X happening?” and also suggest possible solutions to overcome the bottlenecks.
Installation and Usage
Installation
To install HTA, please refer to the README. In brief, the user is required to clone the repo and install the necessary Python packages via pip.
Usage
This version of Holistic Trace Analysis is currently in beta and we recommend using HTA in a Jupyter notebook. A demo notebook is provided for your convenience. To get started, import the hta package in a Jupyter notebook, create a TraceAnalysis object and off we go in exactly two lines of code.
from hta.trace_analysis import TraceAnalysis
analyzer = TraceAnalysis(trace_dir="/trace/folder/path")
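From here, the individual analyses are exposed as methods on the TraceAnalysis object. The calls below follow the method names in the HTA documentation at the time of writing; treat them as illustrative and check the repository docs if the API has changed:

```python
# Illustrative calls on the analyzer created above; results are returned as
# pandas DataFrames (some features also generate figures or augmented trace files).
temporal_df = analyzer.get_temporal_breakdown()    # compute / communication / memory / idle time per rank
kernel_info = analyzer.get_gpu_kernel_breakdown()  # longest kernels per kernel type and rank
overlap_df = analyzer.get_comm_comp_overlap()      # communication-computation overlap percentage per rank
idle_df = analyzer.get_idle_time_breakdown()       # why each GPU was idle
```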
Requirements
All trace files for a training or inference job must be stored in a unique folder.
Trace files are in json or gzipped json format.
FAQ
Q. How can I install HTA?
Please see the README in the root directory of the repository.
Q. Is there any documentation on the features and API in HTA?
The documentation and detailed API are available here.
Q. Can you implement feature X?
Depending on how widely the feature is needed and the level of effort required to implement it we would consider developing the feature. Please open a Github Issue and tag it with the feature-request label.
Q. Can I modify the code?
Please do and send a PR along the way, if you think it would be useful for others.
Q. How can I collect traces in PyTorch?
Please refer to this tutorial here.
Q. Can HTA be used at production scale?
Yes, please see a use case study here.
layout: blog_detail
title: 'PyTorch adds new dev tools as it hits production scale'
author: The PyTorch Team
This is a partial re-post of the original blog post on the Facebook AI Blog. The full post can be viewed here
Since its release just a few months ago, PyTorch 1.0 has been rapidly adopted as a powerful, flexible deep learning platform that enables engineers and researchers to move quickly from research to production. We are highlighting some of the ways the AI engineering and research community is using PyTorch 1.0. We’re also sharing new details about the latest release, PyTorch 1.1, and showcasing some of the new development tools created by the community.
Building on the initial launch of PyTorch in 2017, we partnered with the AI community to ship the stable release of PyTorch 1.0 last December. Along with enhanced production-oriented capabilities and deep integration with leading cloud platforms, PyTorch 1.0 expands on the open source library’s core features, with the addition of PyTorch JIT (Just in time compilation) that seamlessly transitions between eager mode and graph mode to provide both flexibility and speed.
Leading businesses across industries are beginning to use PyTorch to both facilitate their research and then also deploy at large scale for applications such as translation, computer vision, conversational interfaces, pharmaceutical research, factory optimization, and automated driving research. Community adoption of PyTorch has also continued to expand. Stanford, UC Berkeley, Caltech, and other universities are using PyTorch as a fundamental tool for their machine learning (ML) courses; new ecosystem projects have launched to support development on PyTorch; and major cloud platforms have expanded their integration with PyTorch.
Using PyTorch across industries
Many leading businesses are moving to PyTorch 1.0 to accelerate development and deployment of new AI systems. Here are some examples:
Airbnb leveraged PyTorch's rich libraries and APIs for conversational AI and deployed a Smart Reply to help the company’s service agents respond more effectively to customers.
ATOM is building a platform to generate and optimize new drug candidates significantly faster and with greater success than conventional processes. Using machine learning frameworks such as PyTorch, ATOM was able to design a variational autoencoder for representing diverse chemical structures and designing new drug candidates.
Genentech is utilizing PyTorch’s flexible control structures and dynamic graphs to train deep learning models that will aid in the development of individualized cancer therapy.
Microsoft is using PyTorch across its organization to develop ML models at scale and deploy them via the ONNX Runtime. Using PyTorch, Microsoft Cognition has built distributed language models that scale to billions of words and are now in production in offerings such as Cognitive Services.
Toyota Research Institute (TRI) is developing a two-pronged approach toward automated driving with Toyota Guardian and Toyota Chauffeur technologies. The Machine Learning Team at TRI is creating new deep learning algorithms to leverage Toyota's 10 million sales per year data advantage. The flexibility of PyTorch has vastly accelerated their pace of exploration and its new production features will enable faster deployment towards their safety critical applications.
Following the release of PyTorch 1.0 in December 2018, we’re now announcing the availability of v1.1, which improves performance, adds new model understanding and visualization tools to improve usability, and provides new APIs.
Key features of PyTorch v1.1 include:
TensorBoard: First-class and native support for visualization and model debugging with TensorBoard, a web application suite for inspecting and understanding training runs and graphs. PyTorch now natively supports TensorBoard with a simple “from torch.utils.tensorboard import SummaryWriter” command.
JIT compiler: Improvements to just-in-time (JIT) compilation. These include various bug fixes as well as expanded capabilities in TorchScript, such as support for dictionaries, user classes, and attributes.
New APIs: Support for Boolean tensors and better support for custom recurrent neural networks.
Distributed Training: Improved performance for common models such as CNNs, added support for multi device modules including the ability to split models across GPUs while still using Distributed Data Parallel (DDP) and support for modules where not all parameters are used in every iteration (e.g. control flow, like adaptive softmax, etc). See the latest tutorials here.
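As a quick illustration of the native TensorBoard support listed above, here is a minimal sketch; the run directory, tag name, and logged values are made up for illustration:

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/demo")      # hypothetical log directory
for step in range(100):
    loss = 1.0 / (step + 1)                      # stand-in for a real training loss
    writer.add_scalar("train/loss", loss, step)
writer.close()
```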
We’ve also continued to partner with the community to foster projects and tools aimed at supporting ML engineers for needs ranging from improved model understanding to auto-tuning using AutoML methods. With the release of Ax and BoTorch (below), we will be sharing some of our core algorithms, including meta-learning for efficiently optimizing hyperparameters based on historical tasks. We are excited to see this work open-sourced for the community to build on.
This ecosystem includes open source projects and tools that have been deployed at production scale, as well as products and services from our partnership with industry leaders who share our vision of an open and collaborative AI community. Here are a few of the latest tools:
BoTorch: BoTorch is a research framework built on top of PyTorch to provide Bayesian optimization, a sample-efficient technique for sequential optimization of costly-to-evaluate black-box functions.
Ax: Ax is an ML platform for managing adaptive experiments. It enables researchers and engineers to systematically explore large configuration spaces in order to optimize machine learning models, infrastructure, and products.
PyTorch-BigGraph: PBG is a distributed system for creating embeddings of very large graphs with billions of entities and trillions of edges. It includes support for sharding and negative sampling and it offers sample use cases based on Wikidata embeddings.
Google AI Platform Notebooks: AI Platform Notebooks is a new, hosted JupyterLab service from Google Cloud Platform. Data scientists can quickly create virtual machines running JupyterLab with the latest version of PyTorch preinstalled. It is also tightly integrated with GCP services such as BigQuery, Cloud Dataproc, Cloud Dataflow, and AI Factory, making it easy to execute the full ML cycle without ever leaving JupyterLab.
We’re also excited to see many interesting new projects from the broader PyTorch community. Highlights include: | https://pytorch.org/blog/pytorch-adds-new-dev-tools/ | pytorch blogs |
BigGAN-PyTorch: This is a full PyTorch reimplementation that uses gradient accumulation to provide the benefits of big batches on as few as four GPUs.
GeomLoss: A Python API that defines PyTorch layers for geometric loss functions between sampled measures, images, and volumes. It includes MMD, Wasserstein, Sinkhorn, and more.
PyTorch Geometric: A deep learning extension library for PyTorch that offers several methods for deep learning on graphs and other irregular structures (also known as geometric deep learning) from a variety of published papers.
Curve-GCN: A real-time, interactive image annotation approach that uses an end-to-end-trained graph convolutional network (GCN). It supports object annotation by either polygons or splines, facilitating labeling efficiency for both line-based and curved objects. Curve-GCN runs 10x faster than traditional methods, such as Polygon-RNN++.
Udacity, fast.ai, and others develop new PyTorch resources
PyTorch is ideal for teaching ML development because it enables rapid experimentation through its flexible, dynamic programming environment and user-friendly Pythonic interface. In addition, Google Colab now offers an interactive Jupyter Notebook environment that natively supports PyTorch, allowing developers to run any PyTorch tutorial immediately with free CPU and GPU resources.
University-level classes — including Stanford NLP, UC Berkeley Computer Vision, and Caltech Robotics courses — are now being taught on PyTorch. In addition, massive open online courses (MOOCs) are training thousands of new PyTorch developers.
Today, we’re announcing a new Udacity course, building upon the Intro to Deep Learning course launched last year. This new course, led by Andrew Trask of Oxford University and OpenMined, covers important concepts around privacy in AI, including methods such as differential privacy and federated learning. Facebook will also be providing scholarships to support students as they continue their ML education in Udacity’s full Nanodegree programs.
The fast.ai community is also continuing to invest energy and resources in PyTorch. In June, fast.ai will launch a new course called Deep Learning from the Foundations, which will show developers how to go all the way from writing matrix multiplication from scratch to how to train and implement a state-of-the-art ImageNet model. The course will include deep dives into the underlying implementation of methods in the PyTorch and fast.ai libraries, and will use the code to explain and illustrate the academic papers that underlie these methods.
As part of the course, fast.ai will also release new software modules, including fastai.audio, which brings the power of fast.ai’s deep abstractions and curated algorithms to the new PyTorch.audio module, and show how fastai.vision can be used to create stunning high-resolution videos from material such as old classic movies, and from cutting-edge microscopy sequences through a collaboration with the Salk Institute. In addition, fast.ai is contributing its new X-ResNet module, including a suite of models pretrained on ImageNet.
Getting started with PyTorch
Everyone in the AI community — including those new to ML development as well as researchers and engineers looking for ways to accelerate their end-to-end workflows — can experiment with PyTorch instantly by visiting pytorch.org and launching a tutorial in Colab. There are also many easy ways to get started both locally and on popular cloud platforms.
layout: blog_detail
title: "Introducing Accelerated PyTorch Training on Mac"
author: PyTorch
featured-img: "/assets/images/METAPT-002-BarGraph-02-static.png"
In collaboration with the Metal engineering team at Apple, we are excited to announce support for GPU-accelerated PyTorch training on Mac. Until now, PyTorch training on Mac only leveraged the CPU, but with the upcoming PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training. This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac.
Metal Acceleration
Accelerated GPU training is enabled using Apple’s Metal Performance Shaders (MPS) as a backend for PyTorch. The MPS backend extends the PyTorch framework, providing scripts and capabilities to set up and run operations on Mac. MPS optimizes compute performance with kernels that are fine-tuned for the unique characteristics of each Metal GPU family. The new device maps machine learning computational graphs and primitives on the MPS Graph framework and tuned kernels provided by MPS.
Training Benefits on Apple Silicon
Every Apple silicon Mac has a unified memory architecture, providing the GPU with direct access to the full memory store. This makes Mac a great platform for machine learning, enabling users to train larger networks or batch sizes locally. This reduces costs associated with cloud-based development or the need for additional local GPUs. The Unified Memory architecture also reduces data retrieval latency, improving end-to-end performance.
In the graphs below, you can see the performance speedup from accelerated GPU training and evaluation compared to the CPU baseline:
Accelerated GPU training and evaluation speedups over CPU-only (times faster)
Getting Started
To get started, just install the latest Preview (Nightly) build on your Apple silicon Mac running macOS 12.3 or later with a native version (arm64) of Python.
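As a minimal sketch of what this looks like in code, assuming a PyTorch build with the MPS backend (the toy model and tensor shapes below are ours, not from the announcement):

```python
import torch

# Fall back to CPU if the MPS backend is not available on this machine/build
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)   # toy model for illustration
x = torch.randn(32, 128, device=device)
y = model(x)                                  # runs on the Apple silicon GPU via MPS
```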
You can also learn more about Metal and MPS on Apple’s Metal page.
* Testing conducted by Apple in April 2022 using production Mac Studio systems with Apple M1 Ultra, 20-core CPU, 64-core GPU, 128GB of RAM, and 2TB SSD. Tested with macOS Monterey 12.3, prerelease PyTorch 1.12, ResNet50 (batch size=128), HuggingFace BERT (batch size=64), and VGG16 (batch size=64). Performance tests are conducted using specific computer systems and reflect the approximate performance of Mac Studio.
layout: blog_detail
title: "Accelerating Hugging Face and TIMM models with PyTorch 2.0"
author: Mark Saroufim
featured-img: "assets/images/pytorch-2.0-feature-img.png"
torch.compile() makes it easy to experiment with different compiler backends to make PyTorch code faster with a single line decorator torch.compile(). It works directly over an nn.Module as a drop-in replacement for torch.jit.script(), without requiring you to make any source code changes. We expect this one line code change to provide you with between 30%-2x training time speedups on the vast majority of models that you’re already running.
opt_module = torch.compile(module)
torch.compile supports arbitrary PyTorch code, control flow, mutation and comes with experimental support for dynamic shapes. We’re so excited about this development that we call it PyTorch 2.0.
What makes this announcement different for us is we’ve already benchmarked some of the most popular open source PyTorch models and gotten substantial speedups ranging from 30% to 2x https://github.com/pytorch/torchdynamo/issues/681.
There are no tricks here, we’ve pip installed popular libraries like https://github.com/huggingface/transformers, https://github.com/huggingface/accelerate and https://github.com/rwightman/pytorch-image-models and then ran torch.compile() on them and that’s it.
It’s rare to get both performance and convenience, but this is why the core team finds PyTorch 2.0 so exciting. The Hugging Face team is also excited, in their words:
Ross Wightman, the primary maintainer of TIMM: “PT 2.0 works out of the box with majority of timm models for inference and train workloads and no code changes”
Sylvain Gugger, the primary maintainer of transformers and accelerate: "With just one line of code to add, PyTorch 2.0 gives a speedup between 1.5x and 2.x in training Transformers models. This is the most exciting thing since mixed precision training was introduced!"
This tutorial will show you exactly how to replicate those speedups so you can be as excited about PyTorch 2.0 as we are.
Requirements and Setup
For GPU (newer generation GPUs will see drastically better performance)
pip3 install numpy --pre torch --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117
For CPU
pip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu
Optional: Verify Installation
git clone https://github.com/pytorch/pytorch
cd tools/dynamo
python verify_dynamo.py
Optional: Docker installation
We also provide all the required dependencies in the PyTorch nightly
binaries which you can download with
```
docker pull ghcr.io/pytorch/pytorch-nightly
```
And for ad hoc experiments just make sure that your container has access
to all your GPUs
docker run --gpus all -it ghcr.io/pytorch/pytorch-nightly:latest /bin/bash
Getting started
A toy example
Let’s start with a simple example and make things more complicated step
by step. Please note that you’re likely to see more significant speedups the newer your GPU is.
import torch
def fn(x, y):
    a = torch.sin(x).cuda()
    b = torch.sin(y).cuda()
    return a + b
new_fn = torch.compile(fn, backend="inductor")
input_tensor = torch.randn(10000).to(device="cuda:0")
a = new_fn(input_tensor, input_tensor)
This example won’t actually run faster, but it’s educational: it features torch.cos() and torch.sin(), which are examples of pointwise ops, meaning they operate element by element on a vector. A more famous pointwise op you might actually want to use would be something like torch.relu().
Pointwise ops in eager mode are suboptimal because each one would need to read a tensor from memory, make some changes and then write back those changes.
The single most important optimization that PyTorch 2.0 does for you is fusion.
So back to our example we can turn 2 reads and 2 writes into 1 read and 1 write which is crucial especially for newer GPUs where the bottleneck is memory bandwidth (how quickly you can send data to a GPU) instead of compute (how quickly your GPU can crunch floating point operations)
The second most important optimization that PyTorch 2.0 does for you is CUDA graphs
CUDA graphs help eliminate the overhead from launching individual kernels from a python program.
torch.compile() supports many different backends but one that we’re particularly excited about is Inductor which generates Triton kernels https://github.com/openai/triton which are written in Python yet outperform the vast majority of handwritten CUDA kernels. Suppose our example above was called trig.py; we can inspect the generated Triton kernels by running:
TORCH_COMPILE_DEBUG=1 python trig.py
```python
@pointwise(size_hints=[16384], filename=__file__, meta={'signature': {0: 'fp32', 1: 'fp32', 2: 'i32'}, 'device': 0, 'constants': {}, 'configs': [instance_descriptor(divisible_by_16=(0, 1, 2), equal_to_1=())]})
@triton.jit
def kernel(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
    xnumel = 10000
    xoffset = tl.program_id(0) * XBLOCK
    xindex = xoffset + tl.reshape(tl.arange(0, XBLOCK), [XBLOCK])
    xmask = xindex < xnumel
    x0 = xindex
    tmp0 = tl.load(in_ptr0 + (x0), xmask)
    tmp1 = tl.sin(tmp0)
    tmp2 = tl.sin(tmp1)
    tl.store(out_ptr0 + (x0 + tl.zeros([XBLOCK], tl.int32)), tmp2, xmask)
```
And you can verify that fusing the two `sins` did actually occur because the two `sin` operations occur within a single Triton kernel and the temporary variables are held in registers with very fast access.
### A real model
As a next step let’s try a real model like resnet50 from the PyTorch hub.
```python
import torch
model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)
opt_model = torch.compile(model, backend="inductor")
opt_model(torch.randn(1,3,64,64))
```
If you actually run this, you may be surprised that the first run is slow; that’s because the model is being compiled. Subsequent runs will be faster, so it's common practice to warm up your model before you start benchmarking it.
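As a rough sketch of that warm-up-then-measure pattern (the toy model, iteration counts, and shapes below are illustrative; substitute the compiled model from the snippet above):

```python
import time
import torch

# Toy stand-in for the compiled model; replace with the opt_model defined above.
opt_model = torch.compile(torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()))
inp = torch.randn(8, 64)

def sync():
    if torch.cuda.is_available():
        torch.cuda.synchronize()    # wait for queued GPU kernels before reading the clock

for _ in range(3):                  # warm-up: the first call pays the compilation cost
    opt_model(inp)

sync()
start = time.perf_counter()
for _ in range(10):
    opt_model(inp)
sync()
print(f"average latency: {(time.perf_counter() - start) / 10:.4f} s")
```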
You may have noticed how we also passed in the name of a compiler explicitly here with “inductor”, but it’s not the only available backend; you can run torch._dynamo.list_backends() in a REPL to see the full list of available backends. For fun, you should try out aot_cudagraphs or nvfuser.
Hugging Face models
Let’s do something a bit more interesting now, our community frequently
uses pretrained models from transformers https://github.com/huggingface/transformers or TIMM https://github.com/rwightman/pytorch-image-models and one of our design goals for PyTorch 2.0 was that any new compiler stack needs to work out of the box with the vast majority of models people actually run.
So we’re going to directly download a pretrained model from the Hugging Face hub and optimize it
```python
import torch
from transformers import BertTokenizer, BertModel
# Copy pasted from here https://huggingface.co/bert-base-uncased
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased").to(device="cuda:0")
model = torch.compile(model) # This is the only line of code that we changed
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt').to(device="cuda:0")
output = model(**encoded_input)
```
If you remove the to(device="cuda:0") from the model and encoded_input, then PyTorch 2.0 will generate C++ kernels that will be optimized for running on your CPU. You can inspect both the Triton and C++ kernels for BERT; they’re obviously more complex than the trigonometry example we had above, but you can similarly skim them and understand them if you understand PyTorch.
The same code also works just fine if used with https://github.com/huggingface/accelerate and DDP
Similarly let’s try out a TIMM example
import timm
import torch
model = timm.create_model('resnext101_32x8d', pretrained=True, num_classes=2)
opt_model = torch.compile(model, backend="inductor")
opt_model(torch.randn(64,3,7,7))
Our goal with PyTorch was to build a breadth-first compiler that would speed up the vast majority of actual models people run in open source. The Hugging Face Hub ended up being an extremely valuable benchmarking tool for us, ensuring that any optimization we work on actually helps accelerate models people want to run.
So please try out PyTorch 2.0, enjoy the free perf and if you’re not seeing it then please open an issue and we will make sure your model is supported https://github.com/pytorch/torchdynamo/issues
After all, we can’t claim we’ve created a breadth-first compiler unless YOUR models actually run faster.
layout: blog_detail
title: 'PyTorch 1.6 now includes Stochastic Weight Averaging'
author: Pavel Izmailov, Andrew Gordon Wilson and Vincent Quenneville-Bélair
Do you use stochastic gradient descent (SGD) or Adam? Regardless of the procedure you use to train your neural network, you can likely achieve significantly better generalization at virtually no additional cost with a simple new technique now natively supported in PyTorch 1.6, Stochastic Weight Averaging (SWA) [1]. Even if you have already trained your model, it’s easy to realize the benefits of SWA by running SWA for a small number of epochs starting with a pre-trained model. Again and again, researchers are discovering that SWA improves the performance of well-tuned models in a wide array of practical applications with little cost or effort!
SWA has a wide range of applications and features:
* SWA significantly improves performance compared to standard training techniques in computer vision (e.g., VGG, ResNets, Wide ResNets and DenseNets on ImageNet and CIFAR benchmarks [1, 2]).
* SWA provides state-of-the-art performance on key benchmarks in semi-supervised learning and domain adaptation [2].
* SWA was shown to improve performance in language modeling (e.g., AWD-LSTM on WikiText-2 [4]) and policy-gradient methods in deep reinforcement learning [3].
* SWAG, an extension of SWA, can approximate Bayesian model averaging in Bayesian deep learning and achieves state-of-the-art uncertainty calibration results in various settings. Moreover, its recent generalization MultiSWAG provides significant additional performance gains and mitigates double-descent [4, 10]. Another approach, Subspace Inference, approximates the Bayesian posterior in a small subspace of the parameter space around the SWA solution [5].
* SWA for low precision training, SWALP, can match the performance of full-precision SGD training, even with all numbers quantized down to 8 bits, including gradient accumulators [6].
* SWA in parallel, SWAP, was shown to greatly speed up the training of neural networks by using large batch sizes and, in particular, set a record by training a neural network to 94% accuracy on CIFAR-10 in 27 seconds [11].
Figure 1. Illustrations of SWA and SGD with a Preactivation ResNet-164 on CIFAR-100 [1]. Left: test error surface for three FGE samples and the corresponding SWA solution (averaging in weight space). Middle and Right: test error and train loss surfaces showing the weights proposed by SGD (at convergence) and SWA, starting from the same initialization of SGD after 125 training epochs. Please see [1] for details on how these figures were constructed.
In short, SWA performs an equal average of the weights traversed by SGD (or any stochastic optimizer) with a modified learning rate schedule (see the left panel of Figure 1.). SWA solutions end up in the center of a wide flat region of loss, while SGD tends to converge to the boundary of the low-loss region, making it susceptible to the shift between train and test error surfaces (see the middle and right panels of Figure 1). We emphasize that SWA can be used with any optimizer, such as Adam, and is not specific to SGD.
Previously, SWA was in PyTorch contrib. In PyTorch 1.6, we provide a new convenient implementation of SWA in torch.optim.swa_utils.
Is this just Averaged SGD?
At a high level, averaging SGD iterates dates back several decades in convex optimization [7, 8], where it is sometimes referred to as Polyak-Ruppert averaging, or averaged SGD. But the details matter. Averaged SGD is often used in conjunction with a decaying learning rate, and an exponential moving average (EMA), typically for convex optimization. In convex optimization, the focus has been on improved rates of convergence. In deep learning, this form of averaged SGD smooths the trajectory of SGD iterates but does not perform very differently.
By contrast, SWA uses an equal average of SGD iterates with a modified cyclical or high constant learning rate and exploits the flatness of training objectives [8] specific to deep learning for improved generalization.
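To make the distinction concrete, here is a small sketch contrasting the running equal average that SWA uses with an exponential moving average; the snapshots, variable names, and EMA decay below are ours, chosen purely for illustration:

```python
import torch

# Stand-in for weights collected during training, e.g. at the end of each epoch.
weight_snapshots = [torch.randn(10) for _ in range(5)]

w_swa, count = None, 0      # running equal average (SWA-style)
w_ema, alpha = None, 0.9    # exponential moving average with an illustrative decay

for w in weight_snapshots:
    if w_swa is None:
        w_swa, w_ema, count = w.clone(), w.clone(), 1
    else:
        count += 1
        w_swa += (w - w_swa) / count              # every snapshot gets equal weight
        w_ema = alpha * w_ema + (1 - alpha) * w   # recent snapshots dominate
```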
How does Stochastic Weight Averaging Work?
There are two important ingredients that make SWA work. First, SWA uses a modified learning rate schedule so that SGD (or other optimizers such as Adam) continues to bounce around the optimum and explore diverse models instead of simply converging to a single solution. For example, we can use the standard decaying learning rate strategy for the first 75% of training time and then set the learning rate to a reasonably high constant value for the remaining 25% of the time (see Figure 2 below). The second ingredient is to take an average of the weights (typically an equal average) of the networks traversed by SGD. For example, we can maintain a running average of the weights obtained at the end of every epoch within the last 25% of training time (see Figure 2). After training is complete, we then set the weights of the network to the computed SWA averages.
Figure 2. Illustration of the learning rate schedule adopted by SWA. Standard decaying schedule is used for the first 75% of the training and then a high constant value is used for the remaining 25%. The SWA averages are formed during the last 25% of training.
One important detail is the batch normalization. Batch normalization layers compute running statistics of activations during training. Note that the SWA averages of the weights are never used to make predictions during training. So the batch normalization layers do not have the activation statistics computed at the end of training. We can compute these statistics by doing a single forward pass on the train data with the SWA model.
While we focus on SGD for simplicity in the description above, SWA can be combined with any optimizer. You can also use cyclical learning rates instead of a high constant value (see e.g., [2]).
How to use SWA in PyTorch?
In torch.optim.swa_utils we implement all the SWA ingredients to make it convenient to use SWA with any model. In particular, we implement AveragedModel class for SWA models, SWALR learning rate scheduler, and update_bn utility function to update SWA batch normalization statistics at the end of training.
In the example below, swa_model is the SWA model that accumulates the averages of the weights. We train the model for a total of 300 epochs, and we switch to the SWA learning rate schedule and start to collect SWA averages of the parameters at epoch 160.
```python
from torch.optim.swa_utils import AveragedModel, SWALR
from torch.optim.lr_scheduler import CosineAnnealingLR
loader, optimizer, model, loss_fn = ...
swa_model = AveragedModel(model)
scheduler = CosineAnnealingLR(optimizer, T_max=300)
swa_start = 160
swa_scheduler = SWALR(optimizer, swa_lr=0.05)

for epoch in range(300):
    for input, target in loader:
        optimizer.zero_grad()
        loss_fn(model(input), target).backward()
        optimizer.step()
    if epoch > swa_start:
        # Switch to the SWA schedule and collect SWA averages of the weights
        swa_model.update_parameters(model)
        swa_scheduler.step()
    else:
        scheduler.step()

# Update bn statistics for the swa_model at the end
torch.optim.swa_utils.update_bn(loader, swa_model)
# Use swa_model to make predictions on test data
preds = swa_model(test_input)
```
Next, we explain each component of torch.optim.swa_utils in detail.
AveragedModel class serves to compute the weights of the SWA model. You can create an averaged model by running swa_model = AveragedModel(model). You can then update the parameters of the averaged model by swa_model.update_parameters(model). By default, AveragedModel computes a running equal average of the parameters that you provide, but you can also use custom averaging functions with the avg_fn parameter. In the following example, ema_model computes an exponential moving average.
```python
ema_avg = lambda averaged_model_parameter, model_parameter, num_averaged:\
    0.1 * averaged_model_parameter + 0.9 * model_parameter
ema_model = torch.optim.swa_utils.AveragedModel(model, avg_fn=ema_avg)
```
In practice, we find an equal average with the modified learning rate schedule in Figure 2 provides the best performance.
SWALR is a learning rate scheduler that anneals the learning rate to a fixed value, and then keeps it constant. For example, the following code creates a scheduler that linearly anneals the learning rate from its initial value to 0.05 in 5 epochs within each parameter group.
swa_scheduler = torch.optim.swa_utils.SWALR(optimizer,
        anneal_strategy="linear", anneal_epochs=5, swa_lr=0.05)
We also implement cosine annealing to a fixed value (anneal_strategy="cos"). In practice, we typically switch to SWALR at epoch swa_start (e.g. after 75% of the training epochs), and simultaneously start to compute the running averages of the weights:
```python
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)
swa_start = 75
for epoch in range(100):
    # <train epoch>
    if epoch > swa_start:
        swa_model.update_parameters(model)
        swa_scheduler.step()
    else:
        scheduler.step()
```
Finally, update_bn is a utility function that computes the batchnorm statistics for the SWA model on a given dataloader loader:
torch.optim.swa_utils.update_bn(loader, swa_model)
update_bn applies the swa_model to every element in the dataloader and computes the activation statistics for each batch normalization layer in the model.
Once you computed the SWA averages and updated the batch normalization layers, you can apply swa_model to make predictions on test data.
Why does it work?
There are large flat regions of the loss surface [9]. In Figure 3 below, we show a visualization of the loss surface in a subspace of the parameter space containing a path connecting two independently trained SGD solutions, such that the loss is similarly low at every point along the path. SGD converges near the boundary of these regions because there isn’t much gradient signal to move inside, as the points in the region all have similarly low values of loss. By increasing the learning rate, SWA spins around this flat region, and then by averaging the iterates, moves towards the center of the flat region.
Figure 3: visualization of mode connectivity for ResNet-20 with no skip connections on CIFAR-10 dataset. The visualization is created in collaboration with Javier Ideami (https://losslandscape.com/). For more details, see this blogpost.
We expect solutions that are centered in the flat region of the loss to generalize better than those near the boundary. Indeed, train and test error surfaces are not perfectly aligned in the weight space. Solutions that are centered in the flat region are not as susceptible to the shifts between train and test error surfaces as those near the boundary. In Figure 4 below, we show the train loss and test error surfaces along the direction connecting the SWA and SGD solutions. As you can see, while the SWA solution has a higher train loss compared to the SGD solution, it is centered in a region of low loss and has a substantially better test error.
Figure 4. Train loss and test error along the line connecting the SWA solution (circle) and SGD solution (square). The SWA solution is centered in a wide region of low train loss, while the SGD solution lies near the boundary. Because of the shift between train loss and test error surfaces, the SWA solution leads to much better generalization.
What are the results achieved with SWA?
We release a GitHub repo with examples using the PyTorch implementation of SWA for training DNNs. For example, these examples can be used to achieve the following results on CIFAR-100:
{:.table.table-striped.table-bordered}
| | VGG-16 | ResNet-164 | WideResNet-28x10 |
| ------------- | ------------- | ------------- | ------------- |
| SGD | 72.8 ± 0.3 | 78.4 ± 0.3 | 81.0 ± 0.3 |
| SWA | 74.4 ± 0.3 | 79.8 ± 0.4 | 82.5 ± 0.2 |
Semi-Supervised Learning
In a follow-up paper SWA was applied to semi-supervised learning, where it improved the best reported results in multiple settings [2]. For example, with SWA you can get 95% accuracy on CIFAR-10 if you only have the training labels for 4k training data points (the previous best reported result on this problem was 93.7%). This paper also explores averaging multiple times within epochs, which can accelerate convergence and find still flatter solutions in a given time.
Figure 5. Performance of fast-SWA on semi-supervised learning with CIFAR-10. fast-SWA achieves record results in every setting considered.
Reinforcement Learning
In another follow-up paper SWA was shown to improve the performance of policy gradient methods A2C and DDPG on several Atari games and MuJoCo environments [3]. This application is also an instance of where SWA is used with Adam. Recall that SWA is not specific to SGD and can benefit essentially any optimizer.
{:.table.table-striped.table-bordered}
| Environment Name | A2C | A2C + SWA |
| ------------- | ------------- | ------------- |
| Breakout | 522 ± 34 | 703 ± 60 |
| Qbert | 18777 ± 778 | 21272 ± 655 |
| SpaceInvaders | 7727 ± 1121 | 21676 ± 8897 |
| Seaquest | 1779 ± 4 | 1795 ± 4 |
| BeamRider | 9999 ± 402 | 11321 ± 1065 |
| CrazyClimber | 147030 ± 10239 | 139752 ± 11618 |
Low Precision Training
We can filter through quantization noise by combining weights that have been rounded down with weights that have been rounded up. Moreover, by averaging weights to find a flat region of the loss surface, large perturbations of the weights will not affect the quality of the solution (Figures 9 and 10). Recent work shows that by adapting SWA to the low precision setting, in a method called SWALP, one can match the performance of full-precision SGD even with all training in 8 bits [6]. This is quite a practically important result, given that (1) SGD training in 8 bits performs notably worse than full precision SGD, and (2) low precision training is significantly harder than predictions in low precision after training (the usual setting). For example, a ResNet-164 trained on CIFAR-100 with float (16-bit) SGD achieves 22.2% error, while 8-bit SGD achieves 24.0% error. By contrast, SWALP with 8 bit training achieves 21.8% error.
Figure 9. Quantizing a solution leads to a perturbation of the weights which has a greater effect on the quality of the sharp solution (left) compared to wide solution (right).
Figure 10. The difference between standard low precision training and SWALP.
Another work, SQWA, presents an approach for quantization and fine-tuning of neural networks in low precision [12]. In particular, SQWA achieved state-of-the-art results for DNNs quantized to 2 bits on CIFAR-100 and ImageNet.
Calibration and Uncertainty Estimates
By finding a centered solution in the loss, SWA can also improve calibration and uncertainty representation. Indeed, SWA can be viewed as an approximation to an ensemble, resembling a Bayesian model average, but with a single model [1].
SWA can be viewed as taking the first moment of SGD iterates with a modified learning rate schedule. We can directly generalize SWA by also taking the second moment of iterates to form a Gaussian approximate posterior over the weights, further characterizing the loss geometry with SGD iterates. This approach, SWA-Gaussian (SWAG), is a simple, scalable and convenient approach to uncertainty estimation and calibration in Bayesian deep learning [4]. The SWAG distribution approximates the shape of the true posterior: Figure 6 below shows the SWAG distribution and the posterior log-density for ResNet-20 on CIFAR-10.
Figure 6. SWAG posterior approximation and the loss surface for a ResNet-20 without skip-connections trained on CIFAR-10 in the subspace formed by the two largest eigenvalues of the SWAG covariance matrix. The shape of SWAG distribution is aligned with the posterior: the peaks of the two distributions coincide, and both distributions are wider in one direction than in the orthogonal direction. Visualization created in collaboration with Javier Ideami.
Empirically, SWAG performs on par or better than popular alternatives including MC dropout, KFAC Laplace, and temperature scaling on uncertainty quantification, out-of-distribution detection, calibration and transfer learning in computer vision tasks. Code for SWAG is available here.
Figure 7. MultiSWAG generalizes SWAG and deep ensembles, to perform Bayesian model averaging over multiple basins of attraction, leading to significantly improved performance. By contrast, as shown here, deep ensembles select different modes, while standard variational inference (VI) marginalizes (model averages) within a single basin.
MultiSWAG [9] uses multiple independent SWAG models to form a mixture of Gaussians as an approximate posterior distribution. Different basins of attraction contain highly complementary explanations of the data. Accordingly, marginalizing over these multiple basins provides a significant boost in accuracy and uncertainty representation. MultiSWAG can be viewed as a generalization of deep ensembles, but with performance improvements.
Indeed, we see in Figure 8 that MultiSWAG entirely mitigates double descent -- more flexible models have monotonically improving performance -- and provides significantly improved generalization over SGD. For example, when the ResNet-18 has layers of width 20, Multi-SWAG achieves under 30% error whereas SGD achieves over 45%, more than a 15% gap!
Figure 8. SGD, SWAG, and Multi-SWAG on CIFAR-100 for a ResNet-18 with varying widths. We see Multi-SWAG in particular mitigates double descent and provides significant accuracy improvements over SGD.
Reference [10] also considers Multi-SWA, which uses multiple independently trained SWA solutions in an ensemble, providing performance improvements over deep ensembles without any additional computational cost. Code for MultiSWA and MultiSWAG is available here. | https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/ | pytorch blogs |
Another method, Subspace Inference, constructs a low-dimensional subspace around the SWA solution and marginalizes the weights in this subspace to approximate the Bayesian model average [5]. Subspace Inference uses the statistics from the SGD iterates to construct both the SWA solution and the subspace. The method achieves strong performance in terms of prediction accuracy and uncertainty calibration both in classification and regression problems. Code is available here.
Try it Out! | https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/ | pytorch blogs |
One of the greatest open questions in deep learning is why SGD manages to find good solutions, given that the training objectives are highly multimodal, and there are many settings of parameters that achieve no training loss but poor generalization. By understanding geometric features such as flatness, which relate to generalization, we can begin to resolve these questions and build optimizers that provide even better generalization, and many other useful features, such as uncertainty representation. We have presented SWA, a simple drop-in replacement for standard optimizers such as SGD and Adam, which can in principle, benefit anyone training a deep neural network. SWA has been demonstrated to have a strong performance in several areas, including computer vision, semi-supervised learning, reinforcement learning, uncertainty representation, calibration, Bayesian model averaging, and low precision training. | https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/ | pytorch blogs |
We encourage you to try out SWA! SWA is now as easy as any standard training in PyTorch. And even if you have already trained your model, you can use SWA to significantly improve performance by running it for a small number of epochs from a pre-trained model.
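As a quick illustration of that last point, the sketch below runs a handful of SWA epochs on top of an already-trained network using the torch.optim.swa_utils APIs; model, train_loader, and loss_fn are assumed to already exist, and the learning rates are placeholders rather than recommendations:
import torch
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn
swa_model = AveragedModel(model)                       # running average of the weights
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
swa_scheduler = SWALR(optimizer, swa_lr=0.05)
for epoch in range(5):                                 # a small number of epochs
    for inputs, targets in train_loader:
        optimizer.zero_grad()
        loss_fn(model(inputs), targets).backward()
        optimizer.step()
    swa_model.update_parameters(model)                 # accumulate the average
    swa_scheduler.step()
update_bn(train_loader, swa_model)                     # recompute BatchNorm statistics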
[1] Averaging Weights Leads to Wider Optima and Better Generalization; Pavel Izmailov, Dmitry Podoprikhin, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson; Uncertainty in Artificial Intelligence (UAI), 2018.
[2] There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average; Ben Athiwaratkun, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson; International Conference on Learning Representations (ICLR), 2019.
[3] Improving Stability in Deep Reinforcement Learning with Weight Averaging; Evgenii Nikishin, Pavel Izmailov, Ben Athiwaratkun, Dmitrii Podoprikhin, Timur Garipov, Pavel Shvechikov, Dmitry Vetrov, Andrew Gordon Wilson; UAI 2018 Workshop: Uncertainty in Deep Learning, 2018.
| https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/ | pytorch blogs |
[4] A Simple Baseline for Bayesian Uncertainty in Deep Learning; Wesley Maddox, Timur Garipov, Pavel Izmailov, Andrew Gordon Wilson; Neural Information Processing Systems (NeurIPS), 2019.
[5] Subspace Inference for Bayesian Deep Learning; Pavel Izmailov, Wesley Maddox, Polina Kirichenko, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson; Uncertainty in Artificial Intelligence (UAI), 2019.
[6] SWALP: Stochastic Weight Averaging in Low Precision Training; Guandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, Andrew Gordon Wilson, Christopher De Sa; International Conference on Machine Learning (ICML), 2019.
[7] Efficient Estimations from a Slowly Convergent Robbins-Monro Process; David Ruppert; Technical report, Cornell University Operations Research and Industrial Engineering, 1988.
[8] Acceleration of Stochastic Approximation by Averaging; Boris T. Polyak, Anatoli B. Juditsky; SIAM Journal on Control and Optimization, 30(4):838–855, 1992.
| https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/ | pytorch blogs |
[9] Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs; Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, Andrew Gordon Wilson; Neural Information Processing Systems (NeurIPS), 2018.
[10] Bayesian Deep Learning and a Probabilistic Perspective of Generalization; Andrew Gordon Wilson, Pavel Izmailov; arXiv preprint, 2020.
[11] Stochastic Weight Averaging in Parallel: Large-Batch Training That Generalizes Well; Vipul Gupta, Santiago Akle Serrano, Dennis DeCoste; International Conference on Learning Representations (ICLR), 2019.
[12] SQWA: Stochastic Quantized Weight Averaging for Improving the Generalization Capability of Low-Precision Deep Neural Networks; Sungho Shin, Yoonho Boo, Wonyong Sung; arXiv preprint, 2020.
| https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/ | pytorch blogs |
layout: blog_detail
title: "Introducing TorchRec, and other domain library updates in PyTorch 1.11"
author: Team PyTorch
featured-img: "assets/images/pytorch-logo.jpg"
We are introducing the beta release of TorchRec and a number of improvements to the current PyTorch domain libraries, alongside the PyTorch 1.11 release. These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch. Highlights include:
TorchRec, a PyTorch domain library for Recommendation Systems, is available in beta. View it on GitHub.
TorchAudio - Added Emformer- and RNN-T-based models and recipes to support the full development lifecycle of a streaming ASR model. See the release notes here.
| https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
TorchText - Added beta support for RoBERTa and XLM-R models, byte-level BPE tokenizer, and text datasets backed by TorchData. See the release notes here.
TorchVision - Added 4 new model families and 14 new classification datasets such as CLEVR, GTSRB, FER2013. See the release notes here.
TorchRec 0.1
We announced TorchRec a few weeks ago and we are excited to release the beta version today. To recap, TorchRec is a PyTorch domain library for Recommendation Systems. This new library provides common sparsity and parallelism primitives, enabling researchers to build state-of-the-art personalization models and deploy them in production. TorchRec was used to train a 1.25 trillion parameter model, which was pushed to production in January 2022.
In particular, the library includes: | https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
Modeling primitives, such as embedding bags and jagged tensors, that enable easy authoring of large, performant multi-device/multi-node models using hybrid data-parallelism and model-parallelism (see the sketch after this list).
Optimized RecSys kernels powered by FBGEMM, including support for sparse and quantized operations.
A sharder which can partition embedding tables with a variety of different strategies including data-parallel, table-wise, row-wise, table-wise-row-wise, and column-wise sharding.
A planner which can automatically generate optimized sharding plans for models.
Pipelining to overlap dataloading, device transfer (copy to GPU), inter-device communications (input_dist), and computation (forward, backward) for increased performance.
GPU inference support.
Common modules for RecSys, such as models and public datasets (Criteo & Movielens).
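To give a flavor of the embedding-bag and jagged-tensor primitives, the sketch below pools one sparse feature with an EmbeddingBagCollection. The table sizes and feature ids are made up, and the exact constructor arguments should be checked against the TorchRec documentation:
import torch
import torchrec
# One embedding table serving one sparse feature (sizes are illustrative).
ebc = torchrec.EmbeddingBagCollection(
    device=torch.device("cpu"),
    tables=[
        torchrec.EmbeddingBagConfig(
            name="product_table",
            embedding_dim=64,
            num_embeddings=4096,
            feature_names=["product"],
            pooling=torchrec.PoolingType.SUM,
        )
    ],
)
# A jagged batch: sample 0 has two "product" ids, sample 1 has one.
features = torchrec.KeyedJaggedTensor(
    keys=["product"],
    values=torch.tensor([101, 202, 303]),
    lengths=torch.tensor([2, 1]),
)
pooled = ebc(features).to_dict()
print(pooled["product"].shape)  # torch.Size([2, 64])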
| https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
Please check the TorchRec announcement post here, video tutorial, install instructions here, test drive the feature through this tutorial here, and refer to the reference document here.
TorchAudio 0.11
TorchAudio: Building Blocks for Audio and Speech Processing
We published a paper, TorchAudio: Building Blocks for Audio and Speech Processing, that provides an overview of the TorchAudio library. If you find TorchAudio useful for your research, please help us share it with the community by citing our paper.
(Beta) RNN-T & (Prototype) Emformer Models and Recipes
| https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
Emformer is an efficient memory-transformer-based streaming acoustic model that has demonstrated state-of-the-art streaming automatic speech recognition (ASR) performance in low-latency, resource-constrained scenarios, such as on-device applications (citation: https://arxiv.org/abs/2010.10759).
The TorchAudio v0.11 release includes the following beta features:
Implementation of Emformer (docs)
Recurrent neural network transducer (RNN-T) streaming ASR model that uses Emformer for its transcription network (docs)
RNN-T beam search decoder with TorchScript support (docs)
| https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
LibriSpeech Emformer RNN-T training recipe (GitHub) and corresponding pre-trained streaming ASR inference pipeline (docs)
There are also prototype features available from nightly builds or the main branch:
Training recipes for the MuST-C and TED-LIUM3 datasets. (GitHub)
Pre-trained pipelines corresponding to the recipes. (docs)
Tutorial that steps through performing online speech recognition with RNN-T Emformer model. (docs)
| https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
Collectively, these features cover the full development lifecycle of a streaming ASR model, from definition through training and inference, and enable users to easily develop their own Emformer- and RNN-T-based models.
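For example, offline (non-streaming) inference with the pre-trained LibriSpeech pipeline looks roughly like the sketch below; the audio path is a placeholder (16 kHz audio is expected) and the beam width is arbitrary:
import torch
import torchaudio
bundle = torchaudio.pipelines.EMFORMER_RNNT_BASE_LIBRISPEECH   # pre-trained Emformer RNN-T bundle
feature_extractor = bundle.get_feature_extractor()
decoder = bundle.get_decoder()
token_processor = bundle.get_token_processor()
waveform, sample_rate = torchaudio.load("speech.wav")          # placeholder path
with torch.no_grad():
    features, length = feature_extractor(waveform.squeeze(0))
    hypotheses = decoder(features, length, 10)                 # beam width of 10
print(token_processor(hypotheses[0][0]))                       # best transcript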
Special thanks to Yangyang Shi, Jay Mahadeokar, and Gil Keren for their code contributions and guidance.
(Beta) HuBERT Pretrain Model | https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
Masked prediction training of the HuBERT model requires the masked logits, unmasked logits, and feature norm as the outputs. The logits are used for the cross-entropy losses and the feature norm for the penalty loss. The release adds HuBERTPretrainModel and corresponding factory functions (hubert_pretrain_base, hubert_pretrain_large, and hubert_pretrain_xlarge) to enable training from scratch.
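As a small illustration of the new factory functions (the training loop itself follows the HuBERT recipe and is omitted here; the parameter-count print is only a sanity check):
import torchaudio
# Randomly initialized HuBERTPretrainModel for training from scratch;
# hubert_pretrain_large() and hubert_pretrain_xlarge() follow the same pattern.
model = torchaudio.models.hubert_pretrain_base()
# A pre-training step feeds raw waveforms plus frame-level pseudo-labels into the model
# and receives the masked logits, unmasked logits, and feature-norm penalty described above.
print(sum(p.numel() for p in model.parameters()), "parameters")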
(Prototype) CTC Beam Search Decoder | https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
In recent releases, TorchAudio has added support for ASR models fine-tuned on CTC loss. The addition of an inference-time CTC beam search decoder enables running end-to-end ASR evaluation using TorchAudio utils.
The CTC decoder in TorchAudio supports customizable beam search decoding with lexicon constraint. It also has optional KenLM language model support.
For more details, please check out the API tutorial and documentation. This prototype feature is available through nightly builds.
(Prototype) Streaming API
TorchAudio started as simple audio I/O APIs that supplement PyTorch. With the recent addition of ASR models and training recipes, the project has received requests to support high-level application development. | https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
Streaming API makes it easy to develop and test the model in online inference. It utilizes ffmpeg under the hood, and enables reading media from online services and hardware devices, decoding media in an incremental manner, and applying filters and preprocessing.
Please checkout the API tutorial and the documentation. There are also the streaming ASR tutorial and the device streaming ASR tutorial. This feature is available from nightly releases. Please refer to pytorch.org for how to install nightly builds.
TorchText 0.12
(Beta) RoBERTa and XLM-R Models | https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
TorchText has added support for pre-trained RoBERTa and XLM-R models. This allows users to train end-to-end Transformer-encoder-based models on standard NLP tasks using TorchText.
More specifically:
The models are torchscriptable and hence can be employed for production use-cases.
The model APIs let users easily attach custom task-specific heads to pre-trained encoders.
The API also comes equipped with data pre-processing transforms to match the pre-trained weights and model configuration.
We have added a tutorial to demonstrate the SST-2 binary text classification task with the pre-trained XLM-R base architecture.
For additional details on model APIs and usage examples, please refer to the documentation.
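For example, attaching a binary-classification head to the pre-trained XLM-R base encoder follows the pattern below (mirroring the SST-2 tutorial; the example sentences are made up):
import torchtext.functional as F
from torchtext.models import RobertaClassificationHead, XLMR_BASE_ENCODER
classifier_head = RobertaClassificationHead(num_classes=2, input_dim=768)
model = XLMR_BASE_ENCODER.get_model(head=classifier_head)
transform = XLMR_BASE_ENCODER.transform()
batch = ["PyTorch 1.11 is out", "TorchText now ships XLM-R"]
model_input = F.to_tensor(transform(batch), padding_value=1)
logits = model(model_input)   # shape: [batch_size, num_classes]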
(Beta) byte-level BPE tokenizer | https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
TorchText has added support for a Byte-Level BPE tokenizer, as used in GPT-2. This tokenizer is also used for tokenizing inputs to the pre-trained RoBERTa models described previously. In addition to the RoBERTa vocab, users can also load their own custom BPE vocab to use the tokenizer. Furthermore, the tokenizer is fully torchscriptable and hence can be employed for production use-cases. For additional details on model APIs and usage examples, please refer to the documentation.
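For example, the tokenizer can be constructed roughly as follows; the asset paths are placeholders for the GPT-2 encoder.json and vocab.bpe files (or a custom BPE vocabulary):
from torchtext.transforms import GPT2BPETokenizer
tokenizer = GPT2BPETokenizer(
    encoder_json_path="path/to/gpt2_bpe_encoder.json",   # placeholder path
    vocab_bpe_path="path/to/gpt2_bpe_vocab.bpe",         # placeholder path
)
token_ids = tokenizer("TorchText now ships a byte-level BPE tokenizer")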
(Beta) Text datasets backed by TorchData
TorchText has modernized its datasets by migrating from older-style Iterable Datasets to TorchData’s DataPipes. TorchData is a library that provides modular/composable primitives, allowing users to load and transform data in performant data pipelines. | https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
These DataPipes work out-of-the-box with PyTorch DataLoader and enable new functionality such as auto-sharding. Users can now easily do data manipulation and pre-processing using user-defined functions and transformations in a functional programming style. Datasets backed by DataPipes also enable standard flow-control like batching, collation, shuffling and bucketizing.
Collectively, DataPipes provides a comprehensive experience for data preprocessing and tensorization needs in a pythonic and flexible way for model training. We have added a tutorial to demonstrate data-processing pipelining using the modernized dataset for binary text-classification.
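A minimal sketch of the new style, using the SST-2 dataset added in this release (the first call downloads the data; the transform and batch size are illustrative):
from torch.utils.data import DataLoader
from torchtext.datasets import SST2
train_dp = SST2(split="train")                                          # an IterDataPipe of (text, label) pairs
train_dp = train_dp.map(lambda sample: (sample[0].lower(), sample[1]))  # user-defined transform
train_dp = train_dp.batch(8)                                            # flow-control on the DataPipe itself
loader = DataLoader(train_dp, batch_size=None)                          # batching already handled above
first_batch = next(iter(loader))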
You can learn more about TorchData DataPipe APIs in its official documentation.
TorchVision 0.12
New Models
Four new model families have been released in the latest version along with pre-trained weights for their variants. | https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
#1 Object Detection
FCOS is a popular, fully convolutional, anchor-free model for object detection. In this release we include a community-contributed model implementation as well as pre-trained weights. The model was trained on COCO train2017 and can be used as follows:
import torch
from torchvision import models
x = [torch.rand(3, 224, 224)]
fcos = models.detection.fcos_resnet50_fpn(pretrained=True).eval()
predictions = fcos(x)
The box AP of the pre-trained model on COCO val2017 is 39.2 (see #4961 for more details). | https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
We would like to thank Hu Ye and Zhiqiang Wang for contributing to the model implementation and initial training. This was the first community-contributed model in a long while, and given its success, we decided to use the learnings from this process and create a new model contribution guidelines.
#2 Optical Flow support and RAFT model
TorchVision now supports optical flow! Optical Flow models try to predict movement in a video: given two consecutive frames, the model predicts where each pixel of the first frame ends up in the second frame. Check out our new tutorial on Optical Flow! | https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
We implemented a torchscript-compatible RAFT model with pre-trained weights (both normal and “small” versions), and added support for training and evaluating optical flow models. Our training scripts support distributed training across processes and nodes, leading to much faster training time than the original implementation. We also added 5 new optical flow datasets: Flying Chairs, Flying Things, Sintel, Kitti, and HD1K.
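A minimal sketch of running the pre-trained RAFT model on a pair of frames is shown below; real frames should be normalized to [-1, 1], and the random tensors here are only stand-ins:
import torch
from torchvision.models.optical_flow import raft_large
model = raft_large(pretrained=True).eval()
img1 = torch.rand(1, 3, 224, 224)            # frame t (sizes must be divisible by 8)
img2 = torch.rand(1, 3, 224, 224)            # frame t+1
with torch.no_grad():
    flow_predictions = model(img1, img2)     # list of iterative refinements
flow = flow_predictions[-1]                  # final flow map, shape [1, 2, H, W]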
#3. Image Classification | https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
Vision Transformer (ViT) and ConvNeXt are two popular architectures which can be used as image classifiers or as backbones for downstream vision tasks. In this release we include 8 pre-trained weights for their classification variants. The models were trained on ImageNet and can be used as follows:
import torch
from torchvision import models
x = torch.rand(1, 3, 224, 224)
vit = models.vit_b_16(pretrained=True).eval()
convnext = models.convnext_tiny(pretrained=True).eval()
predictions1 = vit(x)
predictions2 = convnext(x)
The accuracies of the pre-trained models on ImageNet val are shown below:
| Model | Acc@1 | Acc@5 |
| --- | --- | --- |
| vit_b_16 | 81.072 | 95.318 |
| vit_b_32 | 75.912 | 92.466 |
| vit_l_16 | 79.662 | 94.638 |
| vit_l_32 | 76.972 | 93.07 |
| https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
| convnext_tiny | 82.52 | 96.146 |
| convnext_small | 83.616 | 96.65 |
| convnext_base | 84.062 | 96.87 |
| convnext_large | 84.414 | 96.976 |
The above models have been trained using an adjusted version of our new training recipe and this allows us to offer models with accuracies significantly higher than the ones on the original papers.
#4. GPU Video Decoding
In this release, we add support for GPU video decoding in the video reading API. To use hardware-accelerated decoding, we just need to pass a cuda device to the video reading API as shown below:
import torchvision
reader = torchvision.io.VideoReader(file_name, device="cuda:0")
for frame in reader:
    print(frame)
We also support seeking to any frame or a keyframe in the video before reading, as shown below:
reader.seek(seek_time)
| https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
New Datasets
We have implemented 14 new classification datasets: CLEVR, GTSRB, FER2013, SUN397, Country211, Flowers102, FGVC-Aircraft, OxfordIIITPet, DTD, Food 101, Rendered SST2, Stanford Cars, PCAM, and EuroSAT.
As part of our work on Optical Flow support (see above for more details), we also added 5 new optical flow datasets: Flying Chairs, Flying Things, Sintel, Kitti, and HD1K.
Other Updates
New documentation layout: Each function / class is now documented in a separate page, clearing up some space in the per-module pages, and easing the discovery of the proposed APIs. Compare e.g. our previous docs vs the new ones. Please let us know if you have any feedback!
| https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
New model contribution guidelines have been published following the success of the FCOS model which was contributed by the community. These guidelines aim to be an overview of the model contribution process for anyone who would like to suggest, implement and train a new model.
Upcoming Prototype API - We are currently working on a prototype API which adds Multi-weight support on all of our model builder methods. This will enable us to offer multiple pre-trained weights, associated with their meta-data and inference transforms. The API is still under review and thus was not included in the release but you can read more about it on our blogpost and provide your feedback on the dedicated Github issue.
| https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
Changes in our deprecation policy - Up until now, torchvision would almost never remove deprecated APIs. In order to be more aligned and consistent with pytorch core, we are updating our deprecation policy. We are now following a 2-release deprecation cycle: deprecated APIs will raise a warning for 2 versions, and will be removed after that. To reflect these changes and to smooth the transition, we have decided to:
Remove all APIs that had been deprecated before or on v0.8, released 1.5 years ago.
Update the removal timeline of all other deprecated APIs to v0.14, to reflect the new 2-cycle policy starting now in v0.12.
Captum 0.5 | https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
Captum is a PyTorch library for model interpretability. For this release, we expanded Captum with influential instances and added support for both similarity-based influences and novel algorithms, TracIn and its variants. TracIn variants offer faster approximation of influence scores based on random projections for fully connected layers.
More specifically, the new influence subsection of Captum includes:
SimilarityInfluence computes similarity scores between test and training examples using default (cosine or euclidean) or custom user-defined metrics w.r.t. given input model layers.
| https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
TracInCP approximates the influential score of each training example on a given test example based on the dot-product similarity between loss gradients w.r.t. model parameters for test and training examples. Note that if we use training examples as test examples then we compute self influence. This method and its variants described below also return top-k proponents and opponents which are the top-k largest positive and negative influential examples respectively.
TracInCPFast is an approximation of TracInCP that avoids computing the gradients w.r.t. large parameter matrices. It approximates influence score based on the dot products between last fully connected layer activations and loss gradients w.r.t. that layer for training and test examples.
| https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
TracInCPFastRandProj uses a nearest neighbor approximation library such as annoy to compute the dot product between the training and test quantities. To reduce the dimensionality of layer activations and corresponding gradients, this method additionally allows projecting those vectors into a lower-dimensional space using random projection matrices.
More about the implementation of influential instances can be found on our GitHub page and tutorials. | https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
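As a rough, non-authoritative sketch of how TracInCPFast is typically wired up (the model, dataset, and checkpoint here are tiny synthetic placeholders, and argument names such as influence_src_dataset follow the Captum influence docs at the time of this release and should be verified against the current API):
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset
from captum.influence import TracInCPFast
# Tiny synthetic classifier, training set, and a single saved checkpoint.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
train_dataset = TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,)))
torch.save(model.state_dict(), "ckpt.pt")
def load_checkpoint(m, path):
    m.load_state_dict(torch.load(path))
    return 1.0  # learning rate to associate with this checkpoint (assumption)
tracin = TracInCPFast(
    model=model,
    final_fc_layer=model[2],                 # the last fully connected layer
    influence_src_dataset=train_dataset,     # argument name per the 0.5-era docs (assumption)
    checkpoints=["ckpt.pt"],
    checkpoints_load_func=load_checkpoint,
    loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    batch_size=16,
)
test_x, test_y = torch.randn(4, 10), torch.randint(0, 2, (4,))
proponents = tracin.influence(test_x, test_y, k=5, proponents=True)  # top-k most helpful training examples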
Thanks for reading! If you're interested in these updates and want to join the PyTorch community, we encourage you to join the discussion forums and open GitHub issues. To get the latest news from PyTorch, follow us on Twitter, Medium, YouTube, and LinkedIn.
Cheers!
Team PyTorch
| https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
| https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
| https://pytorch.org/blog/pytorch-1.11-new-library-releases/ | pytorch blogs |
layout: blog_detail
title: "PyTorch 2.0 & XLA—The Latest Cutting Edge Features"
author: Jack Cao, Milad Mohammadi, Alex Wertheim, Yeounoh Chung, Joe Spisak, Will Cromar, Shauheen Zahirazami
| https://pytorch.org/blog/pytorch-2.0-xla/ | pytorch blogs |