torch.narrow_copy
torch.narrow_copy(input, dim, start, length, *, out=None) -> Tensor
Same as "Tensor.narrow()" except this returns a copy rather than
shared storage. This is primarily for sparse tensors, which do not
have a shared-storage narrow method.
Parameters:
* input (Tensor) -- the tensor to narrow
* **dim** (*int*) -- the dimension along which to narrow
* **start** (*int*) -- index of the element to start the
narrowed dimension from. Can be negative, which means indexing
from the end of *dim*
* **length** (*int*) -- length of the narrowed dimension, must
be weakly positive
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> torch.narrow_copy(x, 0, 0, 2)
tensor([[ 1, 2, 3],
[ 4, 5, 6]])
>>> torch.narrow_copy(x, 1, 1, 2)
tensor([[ 2, 3],
[ 5, 6],
[ 8, 9]])
>>> s = torch.arange(16).reshape(2, 2, 2, 2).to_sparse(2)
>>> torch.narrow_copy(s, 0, 0, 1)
tensor(indices=tensor([[0, 0],
[0, 1]]),
values=tensor([[[0, 1],
[2, 3]],
[[4, 5],
[6, 7]]]),
size=(1, 2, 2, 2), nnz=2, layout=torch.sparse_coo)
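To make the copy-versus-view distinction concrete, here is a small illustrative snippet (not part of the original page):
    >>> base = torch.arange(6).reshape(2, 3)
    >>> view = base.narrow(1, 0, 2)                # shares storage with base
    >>> copied = torch.narrow_copy(base, 1, 0, 2)  # independent storage
    >>> base[0, 0] = 100
    >>> view[0, 0].item(), copied[0, 0].item()
    (100, 0)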
See also: "torch.narrow()" for a non copy variant | https://pytorch.org/docs/stable/generated/torch.narrow_copy.html | pytorch docs |
torch.Tensor.logical_not
Tensor.logical_not() -> Tensor
See "torch.logical_not()" | https://pytorch.org/docs/stable/generated/torch.Tensor.logical_not.html | pytorch docs |
torch.nn.utils.parametrizations.spectral_norm
torch.nn.utils.parametrizations.spectral_norm(module, name='weight', n_power_iterations=1, eps=1e-12, dim=None)
Applies spectral normalization to a parameter in the given module.
\mathbf{W}_{SN} = \dfrac{\mathbf{W}}{\sigma(\mathbf{W})},
\sigma(\mathbf{W}) = \max_{\mathbf{h}: \mathbf{h} \ne 0}
\dfrac{\|\mathbf{W} \mathbf{h}\|_2}{\|\mathbf{h}\|_2}
When applied on a vector, it simplifies to
\mathbf{x}_{SN} = \dfrac{\mathbf{x}}{\|\mathbf{x}\|_2}
Spectral normalization stabilizes the training of discriminators
(critics) in Generative Adversarial Networks (GANs) by reducing the
Lipschitz constant of the model. \sigma is approximated by performing
one iteration of the power method every time the weight is
accessed. If the dimension of the weight tensor is greater than 2,
it is reshaped to 2D in power iteration method to get spectral
norm.
See Spectral Normalization for Generative Adversarial Networks .
Note:
This function is implemented using the parametrization
functionality in "register_parametrization()". It is a
reimplementation of "torch.nn.utils.spectral_norm()".
Note:
When this constraint is registered, the singular vectors
associated to the largest singular value are estimated rather
than sampled at random. These are then updated performing
"n_power_iterations" of the power method whenever the tensor is
accessed with the module on *training* mode.
Note:
If the *_SpectralNorm* module, i.e.,
*module.parametrization.weight[idx]*, is in training mode on
removal, it will perform another power iteration. If you'd like
to avoid this iteration, set the module to eval mode before its
removal.
Parameters:
* module (nn.Module) -- containing module
* **name** (*str**, **optional*) -- name of weight parameter.
Default: ""weight"".
* **n_power_iterations** (*int**, **optional*) -- number of
power iterations to calculate spectral norm. Default: "1".
* **eps** (*float**, **optional*) -- epsilon for numerical
stability in calculating norms. Default: "1e-12".
* **dim** (*int**, **optional*) -- dimension corresponding to
number of outputs. Default: "0", except for modules that are
instances of ConvTranspose{1,2,3}d, when it is "1"
Returns:
The original module with a new parametrization registered to the
specified weight
Return type:
Module
Example:
>>> snm = spectral_norm(nn.Linear(20, 40))
>>> snm
ParametrizedLinear(
in_features=20, out_features=40, bias=True
(parametrizations): ModuleDict(
(weight): ParametrizationList(
(0): _SpectralNorm()
)
)
)
>>> torch.linalg.matrix_norm(snm.weight, 2)
tensor(1.0081, grad_fn=<AmaxBackward0>)
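    As a complementary sketch (not from the original page), the parametrization added by this function can be removed with "torch.nn.utils.parametrize.remove_parametrizations()"; per the note above, switching the module to eval mode first avoids an extra power iteration:
    >>> from torch.nn.utils import parametrize
    >>> snm = snm.eval()  # avoid an extra power iteration on removal
    >>> linear = parametrize.remove_parametrizations(snm, "weight")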
| https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.spectral_norm.html | pytorch docs |
torch.sparse.spdiags
torch.sparse.spdiags(diagonals, offsets, shape, layout=None) -> Tensor
Creates a sparse 2D tensor by placing the values from rows of
"diagonals" along specified diagonals of the output
The "offsets" tensor controls which diagonals are set.
If "offsets[i]" = 0, it is the main diagonal
If "offsets[i]" < 0, it is below the main diagonal
If "offsets[i]" > 0, it is above the main diagonal
The number of rows in "diagonals" must match the length of
"offsets", and an offset may not be repeated.
Parameters:
* diagonals (Tensor) -- Matrix storing diagonals row-wise
* **offsets** (*Tensor*) -- The diagonals to be set, stored as a
vector
* **shape** (*2-tuple of ints*) -- The desired shape of the
result
Keyword Arguments:
layout ("torch.layout", optional) -- The desired layout of
the returned tensor. "torch.sparse_coo", "torch.sparse_csc" and
"torch.sparse_csr" are supported. Default: "torch.sparse_coo"
Examples:
Set the main and first two lower diagonals of a matrix:
>>> diags = torch.arange(9).reshape(3, 3)
>>> diags
tensor([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> s = torch.sparse.spdiags(diags, torch.tensor([0, -1, -2]), (3, 3))
>>> s
tensor(indices=tensor([[0, 1, 2, 1, 2, 2],
[0, 1, 2, 0, 1, 0]]),
values=tensor([0, 1, 2, 3, 4, 6]),
size=(3, 3), nnz=6, layout=torch.sparse_coo)
>>> s.to_dense()
tensor([[0, 0, 0],
[3, 1, 0],
[6, 4, 2]])
Change the output layout:
>>> diags = torch.arange(9).reshape(3, 3)
>>> diags
tensor([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> s = torch.sparse.spdiags(diags, torch.tensor([0, -1, -2]), (3, 3), layout=torch.sparse_csr)
>>> s
tensor(crow_indices=tensor([0, 1, 3, 6]),
col_indices=tensor([0, 0, 1, 0, 1, 2]),
values=tensor([0, 3, 1, 6, 4, 2]), size=(3, 3), nnz=6,
layout=torch.sparse_csr)
>>> s.to_dense()
tensor([[0, 0, 0],
[3, 1, 0],
[6, 4, 2]])
Set partial diagonals of a large output:
>>> diags = torch.tensor([[1, 2], [3, 4]])
>>> offsets = torch.tensor([0, -1])
>>> torch.sparse.spdiags(diags, offsets, (5, 5)).to_dense()
tensor([[1, 0, 0, 0, 0],
[3, 2, 0, 0, 0],
[0, 4, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]])
Note:
When setting the values along a given diagonal the index into the
diagonal and the index into the row of "diagonals" is taken as
the column index in the output. This has the effect that when
setting a diagonal with a positive offset *k* the first value
along that diagonal will be the value in position *k* of the row
of "diagonals"
Specifying a positive offset:
>>> diags = torch.tensor([[1, 2, 3], [1, 2, 3], [1, 2, 3]])
>>> torch.sparse.spdiags(diags, torch.tensor([0, 1, 2]), (5, 5)).to_dense()
tensor([[1, 2, 3, 0, 0],
[0, 2, 3, 0, 0],
[0, 0, 3, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]])
| https://pytorch.org/docs/stable/generated/torch.sparse.spdiags.html | pytorch docs |
torch.Tensor.float_power_
Tensor.float_power_(exponent) -> Tensor
In-place version of "float_power()" | https://pytorch.org/docs/stable/generated/torch.Tensor.float_power_.html | pytorch docs |
torch.Tensor.igamma
Tensor.igamma(other) -> Tensor
See "torch.igamma()" | https://pytorch.org/docs/stable/generated/torch.Tensor.igamma.html | pytorch docs |
torch.compiled_with_cxx11_abi
torch.compiled_with_cxx11_abi()
Returns whether PyTorch was built with _GLIBCXX_USE_CXX11_ABI=1 | https://pytorch.org/docs/stable/generated/torch.compiled_with_cxx11_abi.html | pytorch docs |
ExternalStream
class torch.cuda.ExternalStream(stream_ptr, device=None, **kwargs)
Wrapper around an externally allocated CUDA stream.
This class is used to wrap streams allocated in other libraries in
order to facilitate data exchange and multi-library interactions.
Note:
This class doesn't manage the stream life-cycle, it is the user
responsibility to keep the referenced stream alive while this
class is being used.
Parameters:
* stream_ptr (int) -- Integer representation of the
cudaStream_t value, allocated externally.
* **device** (*torch.device** or **int**, **optional*) -- the
device where the stream was originally allocated. If device is
specified incorrectly, subsequent launches using this stream
may fail.
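A minimal construction sketch (assuming a CUDA build; the raw cudaStream_t pointer here is borrowed from a PyTorch stream purely for illustration, whereas in practice it would come from another library):
    >>> raw_ptr = torch.cuda.Stream().cuda_stream
    >>> ext = torch.cuda.ExternalStream(raw_ptr)
    >>> with torch.cuda.stream(ext):
    ...     x = torch.ones(4, device="cuda")
    >>> ext.synchronize()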
query()
Checks if all the work submitted has been completed.
Returns:
A boolean indicating if all kernels in this stream are
completed.
completed.
record_event(event=None)
Records an event.
Parameters:
**event** (*torch.cuda.Event**, **optional*) -- event to
record. If not given, a new one will be allocated.
Returns:
Recorded event.
synchronize()
Wait for all the kernels in this stream to complete.
Note:
This is a wrapper around "cudaStreamSynchronize()": see CUDA
Stream documentation for more info.
wait_event(event)
Makes all future work submitted to the stream wait for an event.
Parameters:
**event** (*torch.cuda.Event*) -- an event to wait for.
Note:
This is a wrapper around "cudaStreamWaitEvent()": see CUDA
Stream documentation for more info.This function returns
without waiting for "event": only future operations are
affected.
wait_stream(stream)
Synchronizes with another stream.
All future work submitted to this stream will wait until all
kernels submitted to a given stream at the time of call
complete.
Parameters:
**stream** (*Stream*) -- a stream to synchronize.
Note:
This function returns without waiting for currently enqueued
kernels in "stream": only future operations are affected.
| https://pytorch.org/docs/stable/generated/torch.cuda.ExternalStream.html | pytorch docs |
torch.tanh
torch.tanh(input, *, out=None) -> Tensor
Returns a new tensor with the hyperbolic tangent of the elements of
"input".
\text{out}_{i} = \tanh(\text{input}_{i})
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.8986, -0.7279, 1.1745, 0.2611])
>>> torch.tanh(a)
tensor([ 0.7156, -0.6218, 0.8257, 0.2553])
| https://pytorch.org/docs/stable/generated/torch.tanh.html | pytorch docs |
torch.exp
torch.exp(input, *, out=None) -> Tensor
Returns a new tensor with the exponential of the elements of the
input tensor "input".
y_{i} = e^{x_{i}}
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.exp(torch.tensor([0, math.log(2.)]))
tensor([ 1., 2.])
| https://pytorch.org/docs/stable/generated/torch.exp.html | pytorch docs |
Rprop
class torch.optim.Rprop(params, lr=0.01, etas=(0.5, 1.2), step_sizes=(1e-06, 50), *, foreach=None, maximize=False, differentiable=False)
Implements the resilient backpropagation algorithm.
\begin{aligned}
    &\rule{110mm}{0.4pt}                                                                 \\
    &\textbf{input}      : \theta_0 \in \mathbf{R}^d \text{ (params)}, f(\theta)
        \text{ (objective)},                                                             \\
    &\hspace{13mm}         \eta_{+/-} \text{ (etaplus, etaminus)}, \Gamma_{max/min}
        \text{ (step sizes)}                                                             \\
    &\textbf{initialize} : g^0_{prev} \leftarrow 0, \: \eta_0 \leftarrow \text{lr (learning rate)}  \\
    &\rule{110mm}{0.4pt}                                                                 \\
    &\textbf{for} \: t=1 \: \textbf{to} \: \ldots \: \textbf{do}                         \\
    &\hspace{5mm}g_t \leftarrow \nabla_{\theta} f_t (\theta_{t-1})                       \\
    &\hspace{5mm}\textbf{for} \text{ } i = 0, 1, \ldots, d-1 \: \mathbf{do}              \\
    &\hspace{10mm}\textbf{if} \: g^i_{prev} g^i_t > 0                                    \\
    &\hspace{15mm}\eta^i_t \leftarrow \mathrm{min}(\eta^i_{t-1} \eta_{+}, \Gamma_{max})  \\
    &\hspace{10mm}\textbf{else if} \: g^i_{prev} g^i_t < 0                               \\
    &\hspace{15mm}\eta^i_t \leftarrow \mathrm{max}(\eta^i_{t-1} \eta_{-}, \Gamma_{min})  \\
    &\hspace{15mm}g^i_t \leftarrow 0                                                     \\
    &\hspace{10mm}\textbf{else}                                                          \\
    &\hspace{15mm}\eta^i_t \leftarrow \eta^i_{t-1}                                       \\
    &\hspace{5mm}\theta_t \leftarrow \theta_{t-1} - \eta_t \mathrm{sign}(g_t)            \\
    &\hspace{5mm}g_{prev} \leftarrow g_t                                                 \\
    &\rule{110mm}{0.4pt}                                                          \\[-1.ex]
    &\textbf{return} \: \theta_t                                                  \\[-1.ex]
    &\rule{110mm}{0.4pt}                                                          \\[-1.ex]
\end{aligned}
For further details regarding the algorithm we refer to the paper A
Direct Adaptive Method for Faster Backpropagation Learning: The
RPROP Algorithm.
Parameters:
* params (iterable) -- iterable of parameters to optimize
or dicts defining parameter groups
* **lr** (*float**, **optional*) -- learning rate (default:
1e-2)
* **etas** (*Tuple**[**float**, **float**]**, **optional*) --
pair of (etaminus, etaplus), that are multiplicative increase
and decrease factors (default: (0.5, 1.2))
* **step_sizes** (*Tuple**[**float**, **float**]**, **optional*)
-- a pair of minimal and maximal allowed step sizes (default:
(1e-6, 50))
* **foreach** (*bool**, **optional*) -- whether foreach
implementation of optimizer is used. If unspecified by the
user (so foreach is None), we will try to use foreach over the
for-loop implementation on CUDA, since it is usually
significantly more performant. (default: None)
* **maximize** (*bool**, **optional*) -- maximize the params
based on the objective, instead of minimizing (default: False)
* **differentiable** (*bool**, **optional*) -- whether autograd
should occur through the optimizer step in training.
Otherwise, the step() function runs in a torch.no_grad()
context. Setting to True can impair performance, so leave it
False if you don't intend to run autograd through this
instance (default: False)
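The page itself does not include a usage example; the following minimal training-loop sketch is illustrative only (the model, data, and loop length are placeholders):
    >>> model = torch.nn.Linear(10, 1)
    >>> optimizer = torch.optim.Rprop(model.parameters(), lr=0.01,
    ...                               etas=(0.5, 1.2), step_sizes=(1e-6, 50))
    >>> x, y = torch.randn(32, 10), torch.randn(32, 1)
    >>> for _ in range(5):
    ...     optimizer.zero_grad()
    ...     loss = torch.nn.functional.mse_loss(model(x), y)
    ...     loss.backward()
    ...     optimizer.step()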
add_param_group(param_group)
Add a param group to the "Optimizer" s *param_groups*.
This can be useful when fine tuning a pre-trained network as
frozen layers can be made trainable and added to the "Optimizer"
as training progresses.
Parameters:
**param_group** (*dict*) -- Specifies what Tensors should be
optimized along with group specific optimization options.
load_state_dict(state_dict)
Loads the optimizer state.
Parameters:
**state_dict** (*dict*) -- optimizer state. Should be an
object returned from a call to "state_dict()".
register_step_post_hook(hook)
Register an optimizer step post hook which will be called after
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None
The "optimizer" argument is the optimizer instance being used.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
register_step_pre_hook(hook)
Register an optimizer step pre hook which will be called before
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None or modified args and kwargs
The "optimizer" argument is the optimizer instance being used.
If args and kwargs are modified by the pre-hook, then the
transformed values are returned as a tuple containing the
new_args and new_kwargs.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
state_dict()
Returns the state of the optimizer as a "dict".
It contains two entries:
* state - a dict holding current optimization state. Its content
differs between optimizer classes.
* param_groups - a list containing all parameter groups where each
  parameter group is a dict
zero_grad(set_to_none=False)
Sets the gradients of all optimized "torch.Tensor" s to zero.
Parameters:
**set_to_none** (*bool*) -- instead of setting to zero, set
the grads to None. This will in general have lower memory
footprint, and can modestly improve performance. However, it
changes certain behaviors. For example: 1. When the user
tries to access a gradient and perform manual ops on it, a
None attribute or a Tensor full of 0s will behave
differently. 2. If the user requests
"zero_grad(set_to_none=True)" followed by a backward pass,
".grad"s are guaranteed to be None for params that did not
receive a gradient. 3. "torch.optim" optimizers have a
different behavior if the gradient is 0 or None (in one case
it does the step with a gradient of 0 and in the other it
skips the step altogether). | https://pytorch.org/docs/stable/generated/torch.optim.Rprop.html | pytorch docs |
torch.fft.irfftn
torch.fft.irfftn(input, s=None, dim=None, norm=None, *, out=None) -> Tensor
Computes the inverse of "rfftn()".
"input" is interpreted as a one-sided Hermitian signal in the
Fourier domain, as produced by "rfftn()". By the Hermitian
property, the output will be real-valued.
Note:
Some input frequencies must be real-valued to satisfy the
Hermitian property. In these cases the imaginary component will
be ignored. For example, any imaginary component in the zero-
frequency term cannot be represented in a real output and so will
always be ignored.
Note:
The correct interpretation of the Hermitian input depends on the
length of the original data, as given by "s". This is because
each input shape could correspond to either an odd or even length
signal. By default, the signal is assumed to be even length and
odd signals will not round-trip properly. So, it is recommended
to always pass the signal shape "s".
Note:
Supports torch.half and torch.chalf on CUDA with GPU Architecture
SM53 or greater. However it only supports powers of 2 signal
length in every transformed dimensions. With default arguments,
the size of last dimension should be (2^n + 1) as argument *s*
defaults to even output size = 2 * (last_dim_size - 1)
Parameters:
* input (Tensor) -- the input tensor
* **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the
transformed dimensions. If given, each dimension "dim[i]" will
either be zero-padded or trimmed to the length "s[i]" before
computing the real FFT. If a length "-1" is specified, no
padding is done in that dimension. Defaults to even output in
the last dimension: "s[-1] = 2*(input.size(dim[-1]) - 1)".
* **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be
transformed. The last dimension must be the half-Hermitian
compressed dimension. Default: all dimensions, or the last
"len(s)" dimensions if "s" is given.
* **norm** (*str**, **optional*) --
Normalization mode. For the backward transform ("irfftn()"),
these correspond to:
* ""forward"" - no normalization
* ""backward"" - normalize by "1/n"
* ""ortho"" - normalize by "1/sqrt(n)" (making the real IFFT
orthonormal)
Where "n = prod(s)" is the logical IFFT size. Calling the
forward transform ("rfftn()") with the same normalization mode
will apply an overall normalization of "1/n" between the two
transforms. This is required to make "irfftn()" the exact
inverse.
Default is ""backward"" (normalize by "1/n").
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
-[ Example ]-
>>> t = torch.rand(10, 9)
>>> T = torch.fft.rfftn(t)
Without specifying the output length to "irfft()", the output will
not round-trip properly because the input is odd-length in the last
dimension:
>>> torch.fft.irfftn(T).size()
torch.Size([10, 8])
So, it is recommended to always pass the signal shape "s".
>>> roundtrip = torch.fft.irfftn(T, t.size())
>>> roundtrip.size()
torch.Size([10, 9])
>>> torch.testing.assert_close(roundtrip, t, check_stride=False)
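As an additional illustrative sketch (not from the original page), transforming only a subset of dimensions and passing "s" recovers the exact shape:
>>> t = torch.rand(4, 5, 6)
>>> T = torch.fft.rfftn(t, dim=(1, 2))
>>> torch.fft.irfftn(T, s=t.shape[1:], dim=(1, 2)).shape
torch.Size([4, 5, 6])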
| https://pytorch.org/docs/stable/generated/torch.fft.irfftn.html | pytorch docs |
torch.Tensor.char
Tensor.char(memory_format=torch.preserve_format) -> Tensor
"self.char()" is equivalent to "self.to(torch.int8)". See "to()".
Parameters:
memory_format ("torch.memory_format", optional) -- the
desired memory format of returned Tensor. Default:
"torch.preserve_format". | https://pytorch.org/docs/stable/generated/torch.Tensor.char.html | pytorch docs |
torch.linalg.inv_ex
torch.linalg.inv_ex(A, *, check_errors=False, out=None)
Computes the inverse of a square matrix if it is invertible.
Returns a namedtuple "(inverse, info)". "inverse" contains the
result of inverting "A" and "info" stores the LAPACK error codes.
If "A" is not an invertible matrix, or if it's a batch of matrices
and one or more of them is not an invertible matrix, then "info"
stores a positive integer for the corresponding matrix. The
positive integer indicates the diagonal element of the LU
decomposition of the input matrix that is exactly zero. "info"
filled with zeros indicates that the inversion was successful. If
"check_errors=True" and "info" contains positive integers, then a
RuntimeError is thrown.
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if "A" is a batch of matrices
then the output has the same batch dimensions.
Note:
When the inputs are on a CUDA device, this function synchronizes
only when "check_errors" = True.
Warning:
This function is "experimental" and it may change in a future
PyTorch release.
See also:
"torch.linalg.inv()" is a NumPy compatible variant that always
checks for errors.
Parameters:
* A (Tensor) -- tensor of shape (*, n, n) where * is zero or
  more batch dimensions consisting of square matrices.
* **check_errors** (*bool**, **optional*) -- controls whether to
check the content of "info". Default: *False*.
Keyword Arguments:
out (tuple, optional) -- tuple of two tensors to write
the output to. Ignored if None. Default: None.
Examples:
>>> A = torch.randn(3, 3)
>>> Ainv, info = torch.linalg.inv_ex(A)
>>> torch.dist(torch.linalg.inv(A), Ainv)
tensor(0.)
>>> info
tensor(0, dtype=torch.int32)
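    As an additional illustrative sketch (not from the original page), batched input combined with "check_errors=True":
    >>> A = torch.randn(2, 3, 3)  # a batch of two square matrices
    >>> Ainv, info = torch.linalg.inv_ex(A, check_errors=True)  # raises if any matrix is singular
    >>> info.shape
    torch.Size([2])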
| https://pytorch.org/docs/stable/generated/torch.linalg.inv_ex.html | pytorch docs |
torch.nn.functional.binary_cross_entropy_with_logits
torch.nn.functional.binary_cross_entropy_with_logits(input, target, weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None)
Function that measures Binary Cross Entropy between target and
input logits.
See "BCEWithLogitsLoss" for details.
Parameters:
* input (Tensor) -- Tensor of arbitrary shape as
unnormalized scores (often referred to as logits).
* **target** (*Tensor*) -- Tensor of the same shape as input
with values between 0 and 1
* **weight** (*Tensor**, **optional*) -- a manual rescaling
weight if provided it's repeated to match input tensor shape
* **size_average** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged over each
loss element in the batch. Note that for some losses, there are
multiple elements per sample. If the field "size_average" is
set to "False", the losses are instead summed for each
minibatch. Ignored when reduce is "False". Default: "True"
* **reduce** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged or summed
over observations for each minibatch depending on
"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"True"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the sum of
the output will be divided by the number of elements in the
output, "'sum'": the output will be summed. Note:
"size_average" and "reduce" are in the process of being
deprecated, and in the meantime, specifying either of those
two args will override "reduction". Default: "'mean'"
* **pos_weight** (*Tensor**, **optional*) -- a weight of
positive examples. Must be a vector with length equal to the
number of classes.
Return type:
Tensor
Examples:
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> loss = F.binary_cross_entropy_with_logits(input, target)
>>> loss.backward()
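    An additional illustrative sketch (not from the original page) using "pos_weight" with one weight per class:
    >>> input = torch.randn(3, 2, requires_grad=True)
    >>> target = torch.rand(3, 2)
    >>> pos_weight = torch.ones(2)  # one weight per class
    >>> loss = F.binary_cross_entropy_with_logits(input, target, pos_weight=pos_weight, reduction='sum')
    >>> loss.backward()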
| https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy_with_logits.html | pytorch docs |
CyclicLR
class torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr, max_lr, step_size_up=2000, step_size_down=None, mode='triangular', gamma=1.0, scale_fn=None, scale_mode='cycle', cycle_momentum=True, base_momentum=0.8, max_momentum=0.9, last_epoch=-1, verbose=False)
Sets the learning rate of each parameter group according to
cyclical learning rate policy (CLR). The policy cycles the learning
rate between two boundaries with a constant frequency, as detailed
in the paper Cyclical Learning Rates for Training Neural Networks.
The distance between the two boundaries can be scaled on a per-
iteration or per-cycle basis.
Cyclical learning rate policy changes the learning rate after every
batch. step should be called after a batch has been used for
training.
This class has three built-in policies, as put forth in the paper:
"triangular": A basic triangular cycle without amplitude scaling.
"triangular2": A basic triangular cycle that scales initial
amplitude by half each cycle.
"exp_range": A cycle that scales initial amplitude by
\text{gamma}^{\text{cycle iterations}} at each cycle iteration.
This implementation was adapted from the github repo:
bckenstler/CLR
Parameters:
* optimizer (Optimizer) -- Wrapped optimizer.
* **base_lr** (*float** or **list*) -- Initial learning rate
which is the lower boundary in the cycle for each parameter
group.
* **max_lr** (*float** or **list*) -- Upper learning rate
boundaries in the cycle for each parameter group.
Functionally, it defines the cycle amplitude (max_lr -
base_lr). The lr at any cycle is the sum of base_lr and some
scaling of the amplitude; therefore max_lr may not actually be
reached depending on scaling function.
* **step_size_up** (*int*) -- Number of training iterations in
the increasing half of a cycle. Default: 2000
step_size_down (int) -- Number of training iterations in
the decreasing half of a cycle. If step_size_down is None, it
is set to step_size_up. Default: None
mode (str) -- One of {triangular, triangular2,
exp_range}. Values correspond to policies detailed above. If
scale_fn is not None, this argument is ignored. Default:
'triangular'
gamma (float) -- Constant in 'exp_range' scaling
function: gamma**(cycle iterations) Default: 1.0
scale_fn (function) -- Custom scaling policy defined by
a single argument lambda function, where 0 <= scale_fn(x) <= 1
for all x >= 0. If specified, then 'mode' is ignored. Default:
None
scale_mode (str) -- {'cycle', 'iterations'}. Defines
whether scale_fn is evaluated on cycle number or cycle
iterations (training iterations since start of cycle).
Default: 'cycle'
Default: 'cycle'
* **cycle_momentum** (*bool*) -- If "True", momentum is cycled
inversely to learning rate between 'base_momentum' and
'max_momentum'. Default: True
* **base_momentum** (*float** or **list*) -- Lower momentum
boundaries in the cycle for each parameter group. Note that
momentum is cycled inversely to learning rate; at the peak of
a cycle, momentum is 'base_momentum' and learning rate is
'max_lr'. Default: 0.8
* **max_momentum** (*float** or **list*) -- Upper momentum
boundaries in the cycle for each parameter group.
Functionally, it defines the cycle amplitude (max_momentum -
base_momentum). The momentum at any cycle is the difference of
max_momentum and some scaling of the amplitude; therefore
base_momentum may not actually be reached depending on scaling
function. Note that momentum is cycled inversely to learning
rate; at the start of a cycle, momentum is 'max_momentum' and
learning rate is 'base_lr' Default: 0.9
* **last_epoch** (*int*) -- The index of the last batch. This
parameter is used when resuming a training job. Since *step()*
should be invoked after each batch instead of after each
epoch, this number represents the total number of *batches*
computed, not the total number of epochs computed. When
last_epoch=-1, the schedule is started from the beginning.
Default: -1
* **verbose** (*bool*) -- If "True", prints a message to stdout
for each update. Default: "False".
-[ Example ]-
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=0.01, max_lr=0.1)
>>> data_loader = torch.utils.data.DataLoader(...)
>>> for epoch in range(10):
>>>     for batch in data_loader:
>>>         train_batch(...)
>>>         scheduler.step()
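As a complementary illustrative sketch (not part of the original page), a custom "scale_fn" evaluated per iteration, reusing the "optimizer" from the example above:
>>> scheduler = torch.optim.lr_scheduler.CyclicLR(
...     optimizer, base_lr=0.01, max_lr=0.1,
...     scale_fn=lambda x: 1.0 / (1.05 ** x), scale_mode='iterations')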
get_last_lr()
Return last computed learning rate by current scheduler.
get_lr()
Calculates the learning rate at batch index. This function
treats *self.last_epoch* as the last batch index.
If *self.cycle_momentum* is "True", this function has a side
effect of updating the optimizer's momentum.
print_lr(is_verbose, group, lr, epoch=None)
Display the current learning rate.
| https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html | pytorch docs |
torch.onnx diagnostics
* Overview
* Diagnostic Rules
* API Reference
Overview
NOTE: This feature is under development and is subject to change.
The goal is to improve the diagnostics to help users debug and improve
their model export to ONNX.
* The diagnostics are emitted in machine parsable Static Analysis
  Results Interchange Format (SARIF).
* A new clearer, structured way to add new and keep track of
  diagnostic rules.
* Serve as foundation for more future improvements consuming the
  diagnostics.
Diagnostic Rules
POE0001:node-missing-onnx-shape-inference
POE0002:missing-custom-symbolic-function
POE0003:missing-standard-symbolic-function
POE0004:operator-supported-in-newer-opset-version
API Reference
class torch.onnx._internal.diagnostics.ExportDiagnostic(*args, **kwargs)
Base class for all export diagnostics.
This class is used to represent all export diagnostics. It is a
subclass of infra.Diagnostic, and adds additional methods to add
more information to the diagnostic.
record_cpp_call_stack(frames_to_skip)
Records the current C++ call stack in the diagnostic.
record_python_call_stack(frames_to_skip)
Records the current Python call stack in the diagnostic.
class torch.onnx._internal.diagnostics.infra.DiagnosticEngine
A generic diagnostic engine based on SARIF.
This class is the main interface for diagnostics. It manages the
creation of diagnostic contexts. A DiagnosticContext provides the
entry point for recording Diagnostics. See infra.DiagnosticContext
for more details.
-[ Examples ]-
Step 1: Create a set of rules.
>>> rules = infra.RuleCollection.custom_collection_from_list(
...     "CustomRuleCollection",
...     [
...         infra.Rule(
...             id="r1",
...             name="rule-1",
...             message_default_template="Missing xxx",
...         ),
...     ],
... )
Step 2: Create a diagnostic engine.
>>> engine = DiagnosticEngine()
Step 3: Start a new diagnostic context.
>>> with engine.create_diagnostic_context("torch.onnx.export", version="1.0") as context:
...     ...
Step 4: Add diagnostics in your code.
...     context.diagnose(rules.rule1, infra.Level.ERROR)
Step 5: Afterwards, get the SARIF log.
>>> sarif_log = engine.sarif_log()
clear()
Clears all diagnostic contexts.
create_diagnostic_context(name, version, options=None, diagnostic_type=)
Creates a new diagnostic context.
Parameters:
* **name** (*str*) -- The subject name for the diagnostic
context.
* **version** (*str*) -- The subject version for the
diagnostic context.
* **options** (*Optional**[**DiagnosticOptions**]*) -- The
options for the diagnostic context.
Returns:
A new diagnostic context.
Return type:
*DiagnosticContext*
pretty_print(verbose=False, level=Level.ERROR)
Pretty prints all diagnostics in the diagnostic contexts.
Parameters:
* **verbose** (*bool*) -- Whether to print the diagnostics in
verbose mode. See Diagnostic.pretty_print.
* **level** (*Level*) -- The minimum level of diagnostics to
print.
| https://pytorch.org/docs/stable/onnx_diagnostics.html | pytorch docs |
Benchmark Utils - torch.utils.benchmark
class torch.utils.benchmark.Timer(stmt='pass', setup='pass', global_setup='', timer=<built-in function perf_counter>, globals=None, label=None, sub_label=None, description=None, env=None, num_threads=1, language=Language.PYTHON)
Helper class for measuring execution time of PyTorch statements.
For a full tutorial on how to use this class, see:
https://pytorch.org/tutorials/recipes/recipes/benchmark.html
The PyTorch Timer is based on timeit.Timer (and in fact uses
timeit.Timer internally), but with several key differences:
Runtime aware:
Timer will perform warmups (important as some elements of
PyTorch are lazily initialized), set threadpool size so that
comparisons are apples-to-apples, and synchronize
asynchronous CUDA functions when necessary.
Focus on replicates:
When measuring code, and particularly complex kernels /
models, run-to-run variation is a significant confounding
factor. It is expected that all measurements should include
replicates to quantify noise and allow median computation,
which is more robust than mean. To that effect, this class
deviates from the timeit API by conceptually merging
timeit.Timer.repeat and timeit.Timer.autorange. (Exact
algorithms are discussed in method docstrings.) The timeit
method is replicated for cases where an adaptive strategy is
not desired.
Optional metadata:
When defining a Timer, one can optionally specify label,
sub_label, description, and env. (Defined later) These
fields are included in the representation of result object
and by the Compare class to group and display results for
comparison.
Instruction counts
In addition to wall times, Timer can run a statement under
Callgrind and report instructions executed.
Directly analogous to timeit.Timer constructor arguments:
*stmt*, *setup*, *timer*, *globals*
PyTorch Timer specific constructor arguments:
*label*, *sub_label*, *description*, *env*, *num_threads*
Parameters:
* stmt (str) -- Code snippet to be run in a loop and
timed.
* **setup** (*str*) -- Optional setup code. Used to define
variables used in *stmt*
* **global_setup** (*str*) -- (C++ only) Code which is placed at
the top level of the file for things like *#include*
statements.
* **timer** (*Callable**[**[**]**, **float**]*) -- Callable
which returns the current time. If PyTorch was built without
CUDA or there is no GPU present, this defaults to
*timeit.default_timer*; otherwise it will synchronize CUDA
before measuring the time.
* **globals** (*Optional**[**Dict**[**str**, **Any**]**]*) -- A
dict which defines the global variables when stmt is being
executed. This is the other method for providing variables
which stmt needs.
* **label** (*Optional**[**str**]*) -- String which summarizes
*stmt*. For instance, if *stmt* is
"torch.nn.functional.relu(torch.add(x, 1, out=out))" one might
set label to "ReLU(x + 1)" to improve readability.
* **sub_label** (*Optional**[**str**]*) --
Provide supplemental information to disambiguate measurements
with identical stmt or label. For instance, in our example
above sub_label might be "float" or "int", so that it is easy
to differentiate: "ReLU(x + 1): (float)"
"ReLU(x + 1): (int)" when printing Measurements or summarizing
using *Compare*.
* **description** (*Optional**[**str**]*) --
String to distinguish measurements with identical label and
sub_label. The principal use of *description* is to signal to
Compare the columns of data. For instance one might set it
based on the input size to create a table of the form:
| n=1 | n=4 | ...
------------- ...
ReLU(x + 1): (float) | ... | ... | ...
ReLU(x + 1): (int) | ... | ... | ...
using *Compare*. It is also included when printing a
Measurement.
* **env** (*Optional**[**str**]*) -- This tag indicates that
otherwise identical tasks were run in different environments,
and are therefore not equivalent, for instance when A/B
testing a change to a kernel. *Compare* will treat
Measurements with different *env* specification as distinct
when merging replicate runs.
* **num_threads** (*int*) -- The size of the PyTorch threadpool
when executing *stmt*. Single threaded performance is
important as both a key inference workload and a good
indicator of intrinsic algorithmic efficiency, so the default
is set to one. This is in contrast to the default PyTorch
threadpool size which tries to utilize all cores.
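    An illustrative usage sketch (the tensor size, labels, and min_run_time value are arbitrary choices for demonstration):
    >>> from torch.utils.benchmark import Timer
    >>> x = torch.randn(1000, 1000)
    >>> t = Timer(
    ...     stmt="torch.mm(x, x)",
    ...     globals={"x": x, "torch": torch},
    ...     label="matmul",
    ...     sub_label="1000x1000",
    ...     description="float32",
    ...     num_threads=1,
    ... )
    >>> m = t.blocked_autorange(min_run_time=0.2)
    >>> print(m.median)  # median wall time per run, in seconds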
blocked_autorange(callback=None, min_run_time=0.2)
Measure many replicates while keeping timer overhead to a
minimum.
At a high level, blocked_autorange executes the following
pseudo-code:
`setup`
total_time = 0
while total_time < min_run_time
start = timer()
for _ in range(block_size):
`stmt`
total_time += (timer() - start)
Note the variable *block_size* in the inner loop. The choice of
block size is important to measurement quality, and must balance
two competing objectives:
1. A small block size results in more replicates and
generally better statistics.
2. A large block size better amortizes the cost of *timer*
invocation, and results in a less biased measurement. This
is important because CUDA synchronization time is non-
trivial (order single to low double digit microseconds)
and would otherwise bias the measurement.
blocked_autorange sets block_size by running a warmup period,
increasing block size until timer overhead is less than 0.1% of
the overall computation. This value is then used for the main
measurement loop.
Returns:
A *Measurement* object that contains measured runtimes and
repetition counts, and can be used to compute statistics.
(mean, median, etc.)
Return type:
*Measurement*
collect_callgrind(number: int, *, repeats: None, collect_baseline: bool, retain_out_file: bool) -> CallgrindStats
collect_callgrind(number: int, *, repeats: int, collect_baseline: bool, retain_out_file: bool) -> Tuple[CallgrindStats, ...]
Collect instruction counts using Callgrind.
Unlike wall times, instruction counts are deterministic (modulo
non-determinism in the program itself and small amounts of
jitter from the Python interpreter.) This makes them ideal for
detailed performance analysis. This method runs *stmt* in a
separate process so that Valgrind can instrument the program.
Performance is severely degraded due to the instrumentation,
however this is ameliorated by the fact that a small number of
iterations is generally sufficient to obtain good measurements.
In order to use this method, *valgrind*, *callgrind_control*,
and *callgrind_annotate* must be installed.
Because there is a process boundary between the caller (this
process) and the *stmt* execution, *globals* cannot contain
arbitrary in-memory data structures. (Unlike timing methods)
Instead, globals are restricted to builtins, *nn.Modules*'s, and
TorchScripted functions/modules to reduce the surprise factor
from serialization and subsequent deserialization. The
GlobalsBridge class provides more detail on this subject. Take
particular care with nn.Modules: they rely on pickle and you may
need to add an import to setup for them to transfer properly.
By default, a profile for an empty statement will be collected
and cached to indicate how many instructions are from the Python
loop which drives *stmt*.
Returns:
A *CallgrindStats* object which provides instruction counts
and some basic facilities for analyzing and manipulating
results.
timeit(number=1000000)
Mirrors the semantics of timeit.Timer.timeit().
Execute the main statement (*stmt*) *number* times.
https://docs.python.org/3/library/timeit.html#timeit.Timer.timeit
Return type:
*Measurement*
class torch.utils.benchmark.Measurement(number_per_run, raw_times, task_spec, metadata=None)
The result of a Timer measurement.
This class stores one or more measurements of a given statement. It
is serializable and provides several convenience methods (including
a detailed repr) for downstream consumers.
static merge(measurements)
Convenience method for merging replicates.
Merge will extrapolate times to *number_per_run=1* and will not
transfer any metadata. (Since it might differ between
replicates)
Return type:
*List*[*Measurement*]
property significant_figures: int
Approximate significant figure estimate.
This property is intended to give a convenient way to estimate
the precision of a measurement. It only uses the interquartile
region to estimate statistics to try to mitigate skew from the
tails, and uses a static z value of 1.645 since it is not
expected to be used for small values of n, so z can
approximate t.
The significant figure estimation used in conjunction with the
*trim_sigfig* method to provide a more human interpretable data
summary. __repr__ does not use this method; it simply displays
raw values. Significant figure estimation is intended for
*Compare*.
class torch.utils.benchmark.CallgrindStats(task_spec, number_per_run, built_with_debug_symbols, baseline_inclusive_stats, baseline_exclusive_stats, stmt_inclusive_stats, stmt_exclusive_stats, stmt_callgrind_out)
Top level container for Callgrind results collected by Timer.
Manipulation is generally done using the FunctionCounts class,
which is obtained by calling CallgrindStats.stats(...). Several
convenience methods are provided as well; the most significant is
CallgrindStats.as_standardized().
as_standardized()
Strip library names and some prefixes from function strings.
When comparing two different sets of instruction counts, one
stumbling block can be path prefixes. Callgrind includes the
full filepath when reporting a function (as it should). However,
this can cause issues when diffing profiles. If a key component
such as Python or PyTorch was built in separate locations in the
two profiles, this can result in something resembling:
23234231 /tmp/first_build_dir/thing.c:foo(...)
9823794 /tmp/first_build_dir/thing.c:bar(...)
...
53453 .../aten/src/Aten/...:function_that_actually_changed(...)
...
-9823794 /tmp/second_build_dir/thing.c:bar(...)
-23234231 /tmp/second_build_dir/thing.c:foo(...)
Stripping prefixes can ameliorate this issue by regularizing the
strings and causing better cancellation of equivalent call sites
when diffing.
Return type:
*CallgrindStats*
counts(*, denoise=False)
Returns the total number of instructions executed.
See *FunctionCounts.denoise()* for an explanation of the
*denoise* arg.
Return type:
int
delta(other, inclusive=False)
Diff two sets of counts.
One common reason to collect instruction counts is to determine
the effect that a particular change will have on the number
of instructions needed to perform some unit of work. If a change
increases that number, the next logical question is "why". This
generally involves looking at what part of the code increased in
instruction count. This function automates that process so that
one can easily diff counts on both an inclusive and exclusive
basis.
Return type:
*FunctionCounts*
stats(inclusive=False)
Returns detailed function counts.
Conceptually, the FunctionCounts returned can be thought of as a
tuple of (count, path_and_function_name) tuples.
inclusive matches the semantics of callgrind. If True, the
counts include instructions executed by children.
inclusive=True is useful for identifying hot spots in code;
inclusive=False is useful for reducing noise when diffing
counts from two different runs. (See CallgrindStats.delta(...)
for more details)
Return type:
*FunctionCounts*
class torch.utils.benchmark.FunctionCounts(_data, inclusive, truncate_rows=True, _linewidth=None)
Container for manipulating Callgrind results.
It supports:
1. Addition and subtraction to combine or diff results.
2. Tuple-like indexing.
3. A *denoise* function which strips CPython calls which are
known to be non-deterministic and quite noisy.
4. Two higher order methods (*filter* and *transform*) for
custom manipulation.
denoise()
Remove known noisy instructions.
Several instructions in the CPython interpreter are rather
noisy. These instructions involve unicode to dictionary lookups
which Python uses to map variable names. FunctionCounts is
generally a content agnostic container, however this is
sufficiently important for obtaining reliable results to warrant
an exception.
Return type:
*FunctionCounts*
filter(filter_fn)
Keep only the elements where *filter_fn* applied to function
name returns True.
Return type:
*FunctionCounts*
transform(map_fn)
Apply *map_fn* to all of the function names.
This can be used to regularize function names (e.g. stripping
irrelevant parts of the file path), coalesce entries by mapping
multiple functions to the same name (in which case the counts
are added together), etc.
Return type:
*FunctionCounts*
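    An illustrative sketch of the two higher order methods (it assumes "counts" is a *FunctionCounts* obtained from "CallgrindStats.stats()"; the lambdas are arbitrary examples):
    >>> aten_only = counts.filter(lambda name: "aten" in name)          # keep ATen entries
    >>> short_names = counts.transform(lambda name: name.split(":")[-1])  # drop file paths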
| https://pytorch.org/docs/stable/benchmark_utils.html | pytorch docs |
CUDA Stream Sanitizer
Note:
This is a prototype feature, which means it is at an early stage for
feedback and testing, and its components are subject to change.
Overview
This module introduces CUDA Sanitizer, a tool for detecting
synchronization errors between kernels run on different streams. It
stores information on accesses to tensors to determine if they are
synchronized or not. When enabled in a python program and a possible
data race is detected, a detailed warning will be printed and the
program will exit.
It can be enabled either by importing this module and calling
"enable_cuda_sanitizer()" or by exporting the "TORCH_CUDA_SANITIZER"
environment variable.
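A minimal sketch of the programmatic option (enable it as early as possible, before any kernels are launched):
import torch
import torch.cuda._sanitizer as csan
csan.enable_cuda_sanitizer()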
Usage
Here is an example of a simple synchronization error in PyTorch:
import torch
a = torch.rand(4, 2, device="cuda")
with torch.cuda.stream(torch.cuda.Stream()):
torch.mul(a, 5, out=a)
The "a" tensor is initialized on the default stream and, without any | https://pytorch.org/docs/stable/cuda._sanitizer.html | pytorch docs |
synchronization methods, modified on a new stream. The two kernels
will run concurrently on the same tensor, which might cause the second
kernel to read uninitialized data before the first one was able to
write it, or the first kernel might overwrite part of the result of
the second. When this script is run on the commandline with:
TORCH_CUDA_SANITIZER=1 python example_error.py
the following output is printed by CSAN:
============================
CSAN detected a possible data race on tensor with data pointer 139719969079296
Access by stream 94646435460352 during kernel:
aten::mul.out(Tensor self, Tensor other, *, Tensor(a!) out) -> Tensor(a!)
writing to argument(s) self, out, and to the output
With stack trace:
File "example_error.py", line 6, in
torch.mul(a, 5, out=a)
...
File "pytorch/torch/cuda/_sanitizer.py", line 364, in _handle_kernel_launch
stack_trace = traceback.StackSummary.extract(
Previous access by stream 0 during kernel:
aten::rand(int[] size, *, int? dtype=None, Device? device=None) -> Tensor
writing to the output
With stack trace:
File "example_error.py", line 3, in
a = torch.rand(10000, device="cuda")
...
File "pytorch/torch/cuda/_sanitizer.py", line 364, in _handle_kernel_launch
stack_trace = traceback.StackSummary.extract(
Tensor was allocated with stack trace:
File "example_error.py", line 3, in
a = torch.rand(10000, device="cuda")
...
File "pytorch/torch/cuda/_sanitizer.py", line 420, in _handle_memory_allocation
traceback.StackSummary.extract(
This gives extensive insight into the origin of the error:
* A tensor was incorrectly accessed from streams with ids: 0 (default
  stream) and 94646435460352 (new stream)
* The tensor was allocated by invoking "a = torch.rand(10000,
  device="cuda")"
* The faulty accesses were caused by operators
  * "a = torch.rand(10000, device="cuda")" on stream 0
  * "torch.mul(a, 5, out=a)" on stream 94646435460352
* The error message also displays the schemas of the invoked
  operators, along with a note showing which arguments of the
  operators correspond to the affected tensor.
In the example, it can be seen that tensor "a" corresponds to
arguments "self", "out" and the "output" value of the invoked
operator "torch.mul".
See also:
The list of supported torch operators and their schemas can be
viewed here.
The bug can be fixed by forcing the new stream to wait for the default
stream:
with torch.cuda.stream(torch.cuda.Stream()):
torch.cuda.current_stream().wait_stream(torch.cuda.default_stream())
torch.mul(a, 5, out=a)
When the script is run again, there are no errors reported.
API Reference
torch.cuda._sanitizer.enable_cuda_sanitizer()
Enables CUDA Sanitizer.
The sanitizer will begin to analyze low-level CUDA calls invoked by
torch functions for synchronization errors. All data races found
will be printed to the standard error output along with stack
traces of suspected causes. For best results, the sanitizer should
be enabled at the very beginning of the program. | https://pytorch.org/docs/stable/cuda._sanitizer.html | pytorch docs |
torch::deploy has been moved to pytorch/multipy
"torch::deploy" has been moved to its new home at
https://github.com/pytorch/multipy. | https://pytorch.org/docs/stable/deploy.html | pytorch docs |
Complex Numbers
Note:
When using complex numbers, use PyTorch with CUDA 11.6 downloaded
via pip wheel as described in Get Started and select the CUDA 11.6
pip package.
Complex numbers are numbers that can be expressed in the form a + bj,
where a and b are real numbers, and j is called the imaginary unit,
which satisfies the equation j^2 = -1. Complex numbers frequently
occur in mathematics and engineering, especially in topics like signal
processing. Traditionally many users and libraries (e.g., TorchAudio)
have handled complex numbers by representing the data in float tensors
with shape (..., 2) where the last dimension contains the real and
imaginary values.
Tensors of complex dtypes provide a more natural user experience while
working with complex numbers. Operations on complex tensors (e.g.,
"torch.mv()", "torch.matmul()") are likely to be faster and more
memory efficient than operations on float tensors mimicking them.
Operations involving complex numbers in PyTorch are optimized to use
vectorized assembly instructions and specialized kernels (e.g. LAPACK,
cuBlas).
Note:
Spectral operations in the torch.fft module support native complex
tensors.
Warning:
Complex tensors is a beta feature and subject to change.
Creating Complex Tensors
We support two complex dtypes: torch.cfloat and torch.cdouble
>>> x = torch.randn(2,2, dtype=torch.cfloat)
>>> x
tensor([[-0.4621-0.0303j, -0.2438-0.5874j],
[ 0.7706+0.1421j, 1.2110+0.1918j]])
Note:
The default dtype for complex tensors is determined by the default
floating point dtype. If the default floating point dtype is
torch.float64 then complex numbers are inferred to have a dtype of
torch.complex128, otherwise they are assumed to have a dtype of
torch.complex64.
All factory functions apart from "torch.linspace()",
"torch.logspace()", and "torch.arange()" are supported for complex
tensors.
Transition from the old representation
Users who currently worked around the lack of complex tensors with
real tensors of shape (..., 2) can easily switch to using the complex
tensors in their code with "torch.view_as_complex()" and
"torch.view_as_real()". Note that these functions don't perform any
copy and return a view of the input tensor.
>>> x = torch.randn(3, 2)
>>> x
tensor([[ 0.6125, -0.1681],
[-0.3773, 1.3487],
[-0.0861, -0.7981]])
>>> y = torch.view_as_complex(x)
>>> y
tensor([ 0.6125-0.1681j, -0.3773+1.3487j, -0.0861-0.7981j])
>>> torch.view_as_real(y)
tensor([[ 0.6125, -0.1681],
[-0.3773, 1.3487],
[-0.0861, -0.7981]])
Accessing real and imag
The real and imaginary values of a complex tensor can be accessed
using the "real" and "imag".
Note:
Accessing real and imag attributes doesn't allocate any memory,
and in-place updates on the real and imag tensors will update
the original complex tensor. Also, the returned real and imag
tensors are not contiguous.
>>> y.real
tensor([ 0.6125, -0.3773, -0.0861])
>>> y.imag
tensor([-0.1681, 1.3487, -0.7981])
>>> y.real.mul_(2)
tensor([ 1.2250, -0.7546, -0.1722])
>>> y
tensor([ 1.2250-0.1681j, -0.7546+1.3487j, -0.1722-0.7981j])
>>> y.real.stride()
(2,)
Angle and abs
The angle and absolute values of a complex tensor can be computed
using "torch.angle()" and "torch.abs()".
>>> x1 = torch.tensor([3j, 4+4j])
>>> x1.abs()
tensor([3.0000, 5.6569])
>>> x1.angle()
tensor([1.5708, 0.7854])
Linear Algebra
Many linear algebra operations, like "torch.matmul()", "torch.svd()",
"torch.solve()" etc., support complex numbers. If you'd like to
request an operation we don't currently support, please search if an
issue has already been filed and if not, file one.
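A small illustrative sketch (not from the original page) of complex linear algebra:
>>> A = torch.randn(2, 2, dtype=torch.cfloat)
>>> B = torch.randn(2, 2, dtype=torch.cfloat)
>>> C = A @ B                        # complex matrix multiply
>>> U, S, Vh = torch.linalg.svd(A)   # complex SVD; S is real-valued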
Serialization
Complex tensors can be serialized, allowing data to be saved as
complex values.
>>> torch.save(y, 'complex_tensor.pt')
>>> torch.load('complex_tensor.pt')
tensor([ 0.6125-0.1681j, -0.3773+1.3487j, -0.0861-0.7981j])
Autograd
PyTorch supports autograd for complex tensors. The gradient computed
is the Conjugate Wirtinger derivative, the negative of which is
precisely the direction of steepest descent used in Gradient Descent
algorithm. Thus, all the existing optimizers work out of the box with
complex parameters. For more details, check out the note Autograd for
Complex Numbers.
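A small illustrative sketch (not from the original page) of autograd through a real-valued loss of a complex parameter:
>>> z = torch.randn(3, dtype=torch.cfloat, requires_grad=True)
>>> loss = (z * z.conj()).sum().real   # real-valued loss of a complex parameter
>>> loss.backward()
>>> g = z.grad                         # gradient in the conjugate Wirtinger convention described above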
We do not fully support the following subsystems:
* Quantization
* JIT
* Sparse Tensors
* Distributed
If any of these would help your use case, please search if an issue
has already been filed and if not, file one. | https://pytorch.org/docs/stable/complex_numbers.html | pytorch docs |
FullyShardedDataParallel
class torch.distributed.fsdp.FullyShardedDataParallel(module, process_group=None, sharding_strategy=None, cpu_offload=None, auto_wrap_policy=None, backward_prefetch=BackwardPrefetch.BACKWARD_PRE, mixed_precision=None, ignored_modules=None, param_init_fn=None, device_id=None, sync_module_states=False, forward_prefetch=False, limit_all_gathers=False, use_orig_params=False, ignored_parameters=None)
A wrapper for sharding Module parameters across data parallel
workers. This is inspired by Xu et al. as well as the ZeRO Stage 3
from DeepSpeed. FullyShardedDataParallel is commonly shortened to
FSDP.
Example:
>>> import torch
>>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
>>> torch.cuda.set_device(device_id)
>>> sharded_module = FSDP(my_module)
>>> optim = torch.optim.Adam(sharded_module.parameters(), lr=0.0001)
>>> x = sharded_module(x, y=3, z=torch.Tensor([1]))
>>> loss = x.sum()
>>> loss.backward()
>>> optim.step()
Warning:
The optimizer must be initialized *after* the module has been
wrapped, since FSDP will shard parameters in-place and this will
break any previously initialized optimizers.
Warning:
If the destination CUDA device has ID "dev_id", either (1)
"module" should already be placed on that device, (2) the device
should be set using "torch.cuda.set_device(dev_id)", or (3)
"dev_id" should be passed into the "device_id" constructor
argument. This FSDP instance's compute device will be that
destination device. For (1) and (3), the FSDP initialization
always occurs on GPU. For (2), the FSDP initialization happens on
"module" 's current device, which may be CPU.
Warning:
FSDP currently does not support gradient accumulation outside
"no_sync()" when using CPU offloading. Trying to do so yields
incorrect results since FSDP will use the newly-reduced gradient
instead of accumulating with any existing gradient.
Warning:
Changing the original parameter variable names after construction
will lead to undefined behavior.
Warning:
Passing in *sync_module_states=True* flag requires module to be
put on GPU, or to use "device_id" argument to specify a CUDA
device that FSDP will move module to. This is because
"sync_module_states=True" requires GPU communication.
Warning:
As of PyTorch 1.12, FSDP only offers limited support for shared
parameters (for example, setting one "Linear" layer's weight to
another's). In particular, modules that share parameters must be
wrapped as part of the same FSDP unit. If enhanced shared
parameter support is needed for your use case, please ping
https://github.com/pytorch/pytorch/issues/77724
Note:
Inputs into FSDP "forward" function will be moved to compute
device (same device FSDP module is on) before running "forward",
so the user does not have to manually move inputs from CPU to GPU.
Parameters:
* module (nn.Module) -- This is the module to be wrapped
with FSDP.
* **process_group** (*Optional**[**Union**[**ProcessGroup**,
**Tuple**[**ProcessGroup**, **ProcessGroup**]**]**]*) --
This is the process group used for collective
communications and the one over which the model is sharded.
For hybrid sharding strategies such as
"ShardingStrategy.HYBRID_SHARD" users can pass in a tuple of
process groups representing the groups to shard and replicate
across, respectively.
* **sharding_strategy** (*Optional**[**ShardingStrategy**]*) --
This configures the sharding strategy used by FSDP, which may
trade off memory saving and communication overhead. See
"ShardingStrategy" for details. (Default: "FULL_SHARD")
* **cpu_offload** (*Optional[CPUOffload]*) -- This configures CPU
  offloading. If this is set to "None", then no CPU offloading
  happens. See "CPUOffload" for details. (Default: "None") A
  combined usage sketch of this and related options appears after
  this parameter list.
* **auto_wrap_policy**
  (*Optional[Union[Callable[[nn.Module, bool, int], bool],
  _FSDPPolicy]]*) --
This is either "None", an "_FSDPPolicy", or a callable of a
fixed signature. If it is "None", then "module" is wrapped
with only a top-level FSDP instance without any nested
wrapping. If it is an "_FSDPPolicy", then the wrapping follows
the given policy. "ModuleWrapPolicy" in
"torch.distributed.fsdp.wrap.py" is an example. If it is a
callable, then it should take in three arguments "module:
nn.Module", "recurse: bool", and "nonwrapped_numel: int" and
should return a "bool" specifying whether the passed-in
"module" should be wrapped if "recurse=False" or if the
traversal should continue down the subtree if "recurse=True".
Additional custom arguments may be added to the callable. The
"size_based_auto_wrap_policy" in
"torch.distributed.fsdp.wrap.py" gives an example callable
that wraps a module if the parameters in its subtree exceed
100M numel. A good practice is to print the model after
wrapping and adjust as needed.
Example:
>>> def custom_auto_wrap_policy(
>>> module: nn.Module,
>>> recurse: bool,
>>> nonwrapped_numel: int,
>>> # Additional custom arguments
>>> min_num_params: int = int(1e8),
>>> ) -> bool:
>>> return nonwrapped_numel >= min_num_params
>>> # Configure a custom `min_num_params`
>>> my_auto_wrap_policy = functools.partial(custom_auto_wrap_policy, min_num_params=int(1e5))
* **backward_prefetch** (*Optional[BackwardPrefetch]*) --
This configures explicit backward prefetching of all-gathers.
See "BackwardPrefetch" for details. (Default: "BACKWARD_PRE")
* **mixed_precision** (*Optional[MixedPrecision]*) -- This
configures native mixed precision for FSDP. If this is set to
"None", then no mixed precision is used. Otherwise, parameter,
buffer, and gradient reduction dtypes can be set. See
"MixedPrecision" for details. (Default: "None")
* **ignored_modules**
  (*Optional[Iterable[torch.nn.Module]]*) -- Modules
whose own parameters and child modules' parameters and buffers
are ignored by this instance. None of the modules directly in
"ignored_modules" should be "FullyShardedDataParallel"
instances, and any child modules that are already-constructed
"FullyShardedDataParallel" instances will not be ignored if
they are nested under this instance. This argument may be used
to avoid sharding specific parameters at module granularity
when using an "auto_wrap_policy" or if parameters' sharding is
not managed by FSDP. (Default: "None")
* **param_init_fn**
(*Optional**[**Callable**[**[**nn.Module**]**, **None**]**]*)
--
A "Callable[torch.nn.Module] -> None" that specifies how
modules that are currently on the meta device should be
initialized onto an actual device. Note that as of v1.12, we
detect modules on the meta device via "is_meta" check and
apply a default initialization that calls "reset_parameters"
method on the passed in "nn.Module" if "param_init_fn" is not
specified, otherwise we run "param_init_fn" to initialize the
passed in "nn.Module". In particular, this means that if
"is_meta=True" for any module parameters for modules that will
be wrapped with FSDP and "param_init_fn" is not specified, we
assume your module properly implements a "reset_parameters()"
and will throw errors if not. Note that additionally, we offer
support for modules initialized with torchdistX's
(https://github.com/pytorch/torchdistX) "deferred_init" API.
In this case, deferred modules would be initialized by a
default initialization function that calls torchdistX's
"materialize_module", or the passed in "param_init_fn", if it
is not "None". The same "Callable" is applied to initialize
all meta modules. Note that this initialization function is
applied before doing any FSDP sharding logic.
Example:
>>> module = MyModule(device="meta")
>>> def my_init_fn(module):
>>> # responsible for initializing a module, such as with reset_parameters
>>> ...
>>> fsdp_model = FSDP(module, param_init_fn=my_init_fn, auto_wrap_policy=size_based_auto_wrap_policy)
>>> print(next(fsdp_model.parameters()).device) # current CUDA device
>>> # With torchdistX
>>> module = deferred_init.deferred_init(MyModule, device="cuda")
>>> # Will initialize via deferred_init.materialize_module().
>>> fsdp_model = FSDP(module, auto_wrap_policy=size_based_auto_wrap_policy)
* **device_id** (*Optional**[**Union**[**int**,
**torch.device**]**]*) -- An "int" or "torch.device"
describing the CUDA device the FSDP module should be moved to
determining where initialization such as sharding takes place.
If this argument is not specified and "module" is on CPU, we
issue a warning mentioning that this argument can be specified
for faster initialization. If specified, resulting FSDP
instances will reside on this device, including moving ignored
modules' parameters if needed. Note that if "device_id" is
specified but "module" is already on a different CUDA device,
an error will be thrown. (Default: "None")
* **sync_module_states** (*bool*) -- If "True", each
individually wrapped FSDP unit will broadcast module
parameters from rank 0 to ensure they are the same across all
ranks after initialization. This helps ensure model parameters
are the same across ranks before starting training, but adds
communication overhead to "__init__", as at least one
broadcast is triggered per individually wrapped FSDP unit.
This can also help checkpoints taken by "state_dict" be loaded by
"load_state_dict" in a memory-efficient way.
See documentation for "FullStateDictConfig" for an example of
this. (Default: "False")
* **forward_prefetch** (*bool*) -- If "True", then FSDP
*explicitly* prefetches the next upcoming all-gather while
executing in the forward pass. This may improve communication
and computation overlap for CPU bound workloads. This should
only be used for static graph models since the forward order
is fixed based on the first iteration's execution. (Default:
"False")
* **limit_all_gathers** (*bool*) -- If "False", then FSDP allows
the CPU thread to schedule all-gathers without any extra
synchronization. If "True", then FSDP explicitly synchronizes
the CPU thread to prevent too many in-flight all-gathers. This
"bool" only affects the sharded strategies that schedule all-
gathers. Enabling this can help lower the number of CUDA
malloc retries.
* **ignored_parameters**
(*Optional**[**Iterable**[**torch.nn.Parameter**]**]*) --
Ignored parameters will not be managed by this FSDP instance,
which means these parameters will not be flattened and sharded by
FSDP, and their gradients will not be synchronized either.
With this newly added argument, "ignored_modules" could be
deprecated soon. For backward compatibility, both
"ignored_parameters" and "ignored_modules" are kept for now,
but FSDP only allows one of them to be specified as not
"None".
apply(fn)
Applies "fn" recursively to every submodule (as returned by
".children()") as well as self. Typical use includes
initializing the parameters of a model (see also torch.nn.init).
Compared to "torch.nn.Module.apply", this version additionally
gathers the full parameters before applying "fn". It should not
be called from within another "summon_full_params" context.
Parameters:
**fn** ("Module" -> None) -- function to be applied to each
submodule
Returns:
self
Return type:
Module
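   A minimal sketch (assuming "fsdp_model" is an already-constructed
   FSDP instance):

      >>> def init_weights(m):
      >>>     if isinstance(m, torch.nn.Linear):
      >>>         torch.nn.init.xavier_uniform_(m.weight)
      >>> fsdp_model.apply(init_weights)  # gathers full params, then applies fn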
clip_grad_norm_(max_norm, norm_type=2.0)
Clips the gradient norm of all parameters. The norm is computed
over all parameters' gradients as viewed as a single vector, and
the gradients are modified in-place.
Parameters:
* **max_norm** (*float** or **int*) -- max norm of the
gradients
* **norm_type** (*float** or **int*) -- type of the used
p-norm. Can be "'inf'" for infinity norm.
Returns:
Total norm of the parameters (viewed as a single vector).
Return type:
*Tensor*
Note:
If every FSDP instance uses "NO_SHARD", meaning that no
gradients are sharded across ranks, then you may directly use
"torch.nn.utils.clip_grad_norm_()".
Note:
If at least some FSDP instance uses a sharded strategy (i.e.
one other than "NO_SHARD"), then you should use this method
instead of "torch.nn.utils.clip_grad_norm_()" since this
method handles the fact that gradients are sharded across
ranks.
Note:
The total norm returned will have the "largest" dtype across
all parameters/gradients as defined by PyTorch's type
promotion semantics. For example, if *all*
parameters/gradients use a low precision dtype, then the
returned norm's dtype will be that low precision dtype, but if
there exists at least one parameter/gradient using FP32, then
the returned norm's dtype will be FP32.
Warning:
This needs to be called on all ranks since it uses collective
communications.
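   A training-loop sketch (illustrative; "fsdp_model", "optim", and
   "batch" are placeholders, and every rank runs this code):

      >>> loss = fsdp_model(batch).sum()
      >>> loss.backward()
      >>> total_norm = fsdp_model.clip_grad_norm_(max_norm=1.0)
      >>> optim.step()
      >>> optim.zero_grad()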
static flatten_sharded_optim_state_dict(sharded_optim_state_dict, model, optim)
The API is similar to "shard_full_optim_state_dict()". The only
difference is that the input "sharded_optim_state_dict" should
be returned from "sharded_optim_state_dict()". Therefore, there
will be all-gather calls on each rank to gather "ShardedTensor"
s.
Parameters:
* **sharded_optim_state_dict** (*Dict**[**str**, **Any**]*)
-- Optimizer state dict corresponding to the unflattened
parameters and holding the sharded optimizer state.
* **model** (*torch.nn.Module*) -- Refer to
  "shard_full_optim_state_dict()".
* **optim** (*torch.optim.Optimizer*) -- Optimizer for
  "model" 's parameters.
Returns:
Refer to "shard_full_optim_state_dict()".
Return type:
*Dict*[str, *Any*]
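   A round-trip sketch (illustrative; it assumes "model" is wrapped
   with FSDP, "optim" is its optimizer, and the sharded state dict was
   produced by "sharded_optim_state_dict()" as mentioned above):

      >>> sharded_osd = FSDP.sharded_optim_state_dict(model, optim)
      >>> # ... checkpoint and reload sharded_osd as needed ...
      >>> flattened_osd = FSDP.flatten_sharded_optim_state_dict(
      >>>     sharded_osd, model, optim
      >>> )
      >>> optim.load_state_dict(flattened_osd)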
forward(*args, **kwargs)
Runs the forward pass for the wrapped module, inserting FSDP-
specific pre- and post-forward sharding logic.
Return type:
*Any*
static fsdp_modules(module, root_only=False)
Returns all nested FSDP instances, possibly including "module"
itself and only including FSDP root modules if "root_only=True".
Parameters:
* module (torch.nn.Module) -- Root module, which may or
may not be an "FSDP" module.
* **root_only** (*bool*) -- Whether to return only FSDP root
modules. (Default: "False")
Returns:
FSDP modules that are nested in the input "module".
Return type:
List[FullyShardedDataParallel]
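   A minimal sketch (assuming "fsdp_model" is a wrapped model):

      >>> for submodule in FSDP.fsdp_modules(fsdp_model):
      >>>     print(type(submodule.module).__name__)
      >>> root_modules = FSDP.fsdp_modules(fsdp_model, root_only=True)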
static full_optim_state_dict(model, optim, optim_input=None, rank0_only=True, group=None)
Consolidates the full optimizer state on rank 0 and returns it
as a "dict" following the convention of
"torch.optim.Optimizer.state_dict()", i.e. with keys ""state""
and ""param_groups"". The flattened parameters in "FSDP" modules
contained in "model" are mapped back to their unflattened
parameters.
Warning:
This needs to be called on all ranks since it uses collective
communications. However, if "rank0_only=True", then the state
dict is only populated on rank 0, and all other ranks return
an empty "dict".
Warning:
Unlike "torch.optim.Optimizer.state_dict()", this method uses
full parameter names as keys instead of parameter IDs.
Note:
Like in "torch.optim.Optimizer.state_dict()", the tensors
contained in the optimizer state dict are not cloned, so there
may be aliasing surprises. For best practices, consider saving
the returned optimizer state dict immediately, e.g. using
"torch.save()".
Parameters:
* **model** (*torch.nn.Module*) -- Root module (which may or
may not be a "FullyShardedDataParallel" instance) whose
parameters were passed into the optimizer "optim".
* **optim** (*torch.optim.Optimizer*) -- Optimizer for
"model" 's parameters.
* **optim_input**
(*Optional**[**Union**[**List**[**Dict**[**str**,
Any]], Iterable[torch.nn.Parameter]]]*)
-- Input passed into the optimizer "optim" representing
either a "list" of parameter groups or an iterable of
parameters; if "None", then this method assumes the input
was "model.parameters()". This argument is deprecated, and
there is no need to pass it in anymore. (Default: "None")
* **rank0_only** (*bool*) -- If "True", saves the populated
"dict" only on rank 0; if "False", saves it on all ranks.
(Default: "True")
* **group** (*dist.ProcessGroup*) -- Model's process group or
"None" if using the default process group. (Default:
"None")
Returns:
A "dict" containing the optimizer state for "model" 's
original unflattened parameters and including keys "state"
and "param_groups" following the convention of
"torch.optim.Optimizer.state_dict()". If "rank0_only=True",
then nonzero ranks return an empty "dict".
Return type:
Dict[str, Any]
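   A checkpointing sketch (illustrative; the call itself must run on
   all ranks, and "optim_state.pt" is a placeholder path):

      >>> full_osd = FSDP.full_optim_state_dict(model, optim)  # collective call
      >>> if torch.distributed.get_rank() == 0:
      >>>     torch.save(full_osd, "optim_state.pt")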
property module: Module
Returns the wrapped module (like "DistributedDataParallel").
named_buffers(*args, **kwargs)
Overrides "named_buffers()" to intercept buffer names and remove
all occurrences of the FSDP-specific flattened buffer prefix
when inside the "summon_full_params()" context manager.
Return type:
*Iterator*[*Tuple*[str, *Tensor*]]
named_parameters(*args, **kwargs)
Overrides "named_parameters()" to intercept parameter names and
remove all occurrences of the FSDP-specific flattened parameter
prefix when inside the "summon_full_params()" context manager.
Return type:
*Iterator*[*Tuple*[str, *Parameter*]]
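   For example, to inspect clean parameter names together with full,
   unsharded shapes, this can be combined with "summon_full_params()"
   (a sketch, assuming "fsdp_model" is an FSDP instance):

      >>> with FSDP.summon_full_params(fsdp_model):
      >>>     for name, param in fsdp_model.named_parameters():
      >>>         print(name, tuple(param.shape))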
no_sync()
A context manager to disable gradient synchronizations across
FSDP instances. Within this context, gradients will be
accumulated in module variables, which will later be
synchronized in the first forward-backward pass after exiting
the context. This should only be used on the root FSDP instance
and will recursively apply to all children FSDP instances.
Note:
This likely results in higher memory usage because FSDP will
accumulate the full model gradients (instead of gradient
shards) until the eventual sync.
Note:
When used with CPU offloading, the gradients will not be
offloaded to CPU when inside the context manager. Instead,
they will only be offloaded right after the eventual sync.
Return type:
*Generator*
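   A gradient accumulation sketch (illustrative; "fsdp_model",
   "optim", and "batches" are placeholders):

      >>> with fsdp_model.no_sync():
      >>>     for batch in batches[:-1]:
      >>>         fsdp_model(batch).sum().backward()  # no gradient sync
      >>> fsdp_model(batches[-1]).sum().backward()  # syncs gradients
      >>> optim.step()
      >>> optim.zero_grad()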
register_comm_hook(state, hook)
Registers a communication hook, which gives users a flexible way
to specify how FSDP aggregates gradients across multiple workers.
This hook can be used to implement several algorithms, such as
GossipGrad and gradient compression, which involve different
communication strategies for parameter syncs while training with
"FullyShardedDataParallel".
Warning:
FSDP communication hook should be registered before running an
initial forward pass and only once.
Parameters:
* **state** (*object*) --
Passed to the hook to maintain any state information during
the training process. Examples include error feedback in
gradient compression, peers to communicate with next in
GossipGrad, etc. It is locally stored by each worker and
shared by all the gradient tensors on the worker.
* **hook** (*Callable*) -- Callable, which has one of the
following signatures: 1) "hook: Callable[torch.Tensor] ->
None": This function takes in a Python tensor, which
represents the full, flattened, unsharded gradient with
respect to all variables corresponding to the model this
FSDP unit is wrapping (that are not wrapped by other FSDP
sub-units). It then performs all necessary processing and
returns "None"; 2) "hook: Callable[torch.Tensor,
torch.Tensor] -> None": This function takes in two Python
tensors, the first one represents the full, flattened,
unsharded gradient with respect to all variables
corresponding to the model this FSDP unit is wrapping (that
are not wrapped by other FSDP sub-units). The latter
represents a pre-sized tensor to store a chunk of a sharded
gradient after reduction. In both cases, the callable performs
all necessary processing and returns "None". Callables with
signature 1 are expected to handle gradient communication
for a NO_SHARD case. Callables with signature 2 are
expected to handle gradient communication for sharded
cases.
static rekey_optim_state_dict(optim_state_dict, optim_state_key_type, model, optim_input=None, optim=None)
Re-keys the optimizer state dict "optim_state_dict" to use the
key type "optim_state_key_type". This can be used to achieve
compatibility between optimizer state dicts from models with
FSDP instances and ones without.
To re-key an FSDP full optimizer state dict (i.e. from
"full_optim_state_dict()") to use parameter IDs and be loadable
to a non-wrapped model:
>>> wrapped_model, wrapped_optim = ...
>>> full_osd = FSDP.full_optim_state_dict(wrapped_model, wrapped_optim)
>>> nonwrapped_model, nonwrapped_optim = ...
>>> rekeyed_osd = FSDP.rekey_optim_state_dict(full_osd, OptimStateKeyType.PARAM_ID, nonwrapped_model)
>>> nonwrapped_optim.load_state_dict(rekeyed_osd)
To re-key a normal optimizer state dict from a non-wrapped model
to be loadable to a wrapped model:
>>> nonwrapped_model, nonwrapped_optim = ...
>>> osd = nonwrapped_optim.state_dict()
>>> rekeyed_osd = FSDP.rekey_optim_state_dict(osd, OptimStateKeyType.PARAM_NAME, nonwrapped_model)
>>> wrapped_model, wrapped_optim = ...
>>> sharded_osd = FSDP.shard_full_optim_state_dict(rekeyed_osd, wrapped_model)
>>> wrapped_optim.load_state_dict(sharded_osd)
Returns:
The optimizer state dict re-keyed using the parameter keys
specified by "optim_state_key_type".
Return type:
Dict[str, Any]
static scatter_full_optim_state_dict(full_optim_state_dict, model, optim_input=None, optim=None, group=None)
Scatters the full optimizer state dict from rank 0 to all other
ranks, returning the sharded optimizer state dict on each rank.
The return value is the same as "shard_full_optim_state_dict()",
and on rank 0, the first argument should be the return value of
"full_optim_state_dict()".
Example:
"full_optim_state_dict()".
Example:
>>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
>>> model, optim = ...
>>> full_osd = FSDP.full_optim_state_dict(model, optim) # only non-empty on rank 0
>>> # Define new model with possibly different world size
>>> new_model, new_optim, new_group = ...
>>> sharded_osd = FSDP.scatter_full_optim_state_dict(full_osd, new_model, group=new_group)
>>> new_optim.load_state_dict(sharded_osd)
Note:
Both "shard_full_optim_state_dict()" and
"scatter_full_optim_state_dict()" may be used to get the
sharded optimizer state dict to load. Assuming that the full
optimizer state dict resides in CPU memory, the former
requires each rank to have the full dict in CPU memory, where
each rank individually shards the dict without any
communication, while the latter requires only rank 0 to have
the full dict in CPU memory, where rank 0 moves each shard to
GPU memory (for NCCL) and communicates it to ranks
appropriately. Hence, the former has higher aggregate CPU
memory cost, while the latter has higher communication cost.
Parameters:
* **full_optim_state_dict** (*Optional**[**Dict**[**str**,
**Any**]**]*) -- Optimizer state dict corresponding to the
unflattened parameters and holding the full non-sharded
optimizer state if on rank 0; the argument is ignored on
nonzero ranks.
* **model** (*torch.nn.Module*) -- Root module (which may or
may not be a "FullyShardedDataParallel" instance) whose
parameters correspond to the optimizer state in
"full_optim_state_dict".
* **optim_input**
(*Optional**[**Union**[**List**[**Dict**[**str**,
**Any**]**]**, **Iterable**[**torch.nn.Parameter**]**]**]*)
-- Input passed into the optimizer representing either a
"list" of parameter groups or an iterable of parameters; if
"None", then this method assumes the input was
"model.parameters()". This argument is deprecated, and
there is no need to pass it in anymore. (Default: "None")
* **optim** (*Optional**[**torch.optim.Optimizer**]*) --
Optimizer that will load the state dict returned by this
method. This is the preferred argument to use over
"optim_input". (Default: "None")
* **group** (*dist.ProcessGroup*) -- Model's process group or
"None" if using the default process group. (Default:
"None")
Returns:
The full optimizer state dict now remapped to flattened
parameters instead of unflattened parameters and restricted
to only include this rank's part of the optimizer state.
Return type:
Dict[str, Any]
static set_state_dict_type(module, state_dict_type, state_dict_config=None)
Set the "state_dict_type" and the corresponding (optional)
configurations of all the descendant FSDP modules of the target
module. The target module does not have to be an FSDP module. If
the target module is an FSDP module, its "state_dict_type" will
also be changed.
Note:
This API should be called for only the top-level (root)
module.
Note:
This API enables users to transparently use the conventional
"state_dict" API to take model checkpoints in cases where the
root FSDP module is wrapped by another "nn.Module". For
example, the following will ensure "state_dict" is called on
all non-FSDP instances, while dispatching into
*sharded_state_dict* implementation for FSDP:
Example:
>>> model = DDP(FSDP(...))
>>> FSDP.set_state_dict_type(