"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"None"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the weighted
mean of the output is taken, "'sum'": the output will be
summed. Note: "size_average" and "reduce" are in the process
of being deprecated, and in the meantime, specifying either of
those two args will override "reduction". Default: "'mean'"
Shape:
* Input: (N, C) or (C), where C = number of classes, or (N, C,
d_1, d_2, ..., d_K) with K \geq 1 in the case of
K-dimensional loss.
* Target: (N) or (), where each value is 0 \leq
\text{targets}[i] \leq C-1, or (N, d_1, d_2, ..., d_K) with K
\geq 1 in the case of K-dimensional loss.
* Output: If "reduction" is "'none'", shape (N) or (N, d_1, d_2,
..., d_K) with K \geq 1 in the case of K-dimensional loss.
Otherwise, scalar.
Examples:
>>> m = nn.LogSoftmax(dim=1)
>>> loss = nn.NLLLoss()
>>> # input is of size N x C = 3 x 5
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.tensor([1, 0, 4])
>>> output = loss(m(input), target)
>>> output.backward()
>>>
>>>
>>> # 2D loss example (used, for example, with image inputs)
>>> N, C = 5, 4
>>> loss = nn.NLLLoss()
>>> # input is of size N x C x height x width
>>> data = torch.randn(N, 16, 10, 10)
>>> conv = nn.Conv2d(16, C, (3, 3))
>>> m = nn.LogSoftmax(dim=1)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C)
    >>> output = loss(m(conv(data)), target)
>>> output.backward()
| https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html | pytorch docs |
CUDAPluggableAllocator
class torch.cuda.CUDAPluggableAllocator(path_to_so_file, alloc_fn_name, free_fn_name)
CUDA memory allocator loaded from a so file.
Memory allocators are compiled in .so files and loaded dynamically
using ctypes. To change the active allocator use the
"torch.memory.cuda.change_current_allocator()" function.
Parameters:
* path_to_so_file (str) -- Path in the filesystem to the
.so file containing the allocator functions
* **alloc_fn_name** (*str*) -- Name of the function to perform
the memory allocation in the so file. The signature must be:
void* alloc_fn_name(ssize_t size, int device, cudaStream_t
stream);
* **free_fn_name** (*str*) -- Name of the function to perform
the memory release in the so file. The signature must be: void
free_fn_name(void* ptr, size_t size, cudaStream_t stream);
Warning:
    This is currently supported only on UNIX OSs.
Note:
See Memory management for details on creating and using a custom
allocator
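    A minimal usage sketch (the library path "./alloc.so" and the exported
    symbol names "my_malloc"/"my_free" are hypothetical; the .so must export
    functions with the signatures listed above):
        import torch
        # Load the allocator from a hypothetical shared library.
        new_alloc = torch.cuda.CUDAPluggableAllocator("./alloc.so", "my_malloc", "my_free")
        # Swap in the custom allocator; this must happen before any CUDA
        # memory has been allocated in the process.
        torch.cuda.memory.change_current_allocator(new_alloc)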
| https://pytorch.org/docs/stable/generated/torch.cuda.CUDAPluggableAllocator.html | pytorch docs |
torch.set_deterministic_debug_mode
torch.set_deterministic_debug_mode(debug_mode)
Sets the debug mode for deterministic operations.
Note:
This is an alternative interface for
"torch.use_deterministic_algorithms()". Refer to that function's
documentation for details about affected operations.
Parameters:
debug_mode (str or int) -- If "default" or 0, don't
error or warn on nondeterministic operations. If "warn" or 1,
warn on nondeterministic operations. If "error" or 2, error on
nondeterministic operations. | https://pytorch.org/docs/stable/generated/torch.set_deterministic_debug_mode.html | pytorch docs |
torch.Tensor.is_coalesced
Tensor.is_coalesced() -> bool
Returns "True" if "self" is a sparse COO tensor that is coalesced,
"False" otherwise.
Warning:
Throws an error if "self" is not a sparse COO tensor.
See "coalesce()" and uncoalesced tensors. | https://pytorch.org/docs/stable/generated/torch.Tensor.is_coalesced.html | pytorch docs |
ReLU6
class torch.ao.nn.quantized.ReLU6(inplace=False)
Applies the element-wise function:
\text{ReLU6}(x) = \min(\max(x_0, x), q(6)), where x_0 is the
zero_point, and q(6) is the quantized representation of number 6.
Parameters:
inplace (bool) -- can optionally do the operation in-
place. Default: "False"
Shape:
    * Input: (N, *) where * means any number of additional dimensions
* Output: (N, *), same shape as the input
Examples:
>>> m = nn.quantized.ReLU6()
>>> input = torch.randn(2)
>>> input = torch.quantize_per_tensor(input, 1.0, 0, dtype=torch.qint32)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ReLU6.html | pytorch docs |
torch.cumsum
torch.cumsum(input, dim, *, dtype=None, out=None) -> Tensor
Returns the cumulative sum of elements of "input" in the dimension
"dim".
For example, if "input" is a vector of size N, the result will also
be a vector of size N, with elements.
y_i = x_1 + x_2 + x_3 + \dots + x_i
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int*) -- the dimension to do the operation over
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired data type
of returned tensor. If specified, the input tensor is casted
to "dtype" before the operation is performed. This is useful
for preventing data type overflows. Default: None.
* **out** (*Tensor**, **optional*) -- the output tensor.
Example:
>>> a = torch.randn(10)
>>> a
tensor([-0.8286, -0.4890, 0.5155, 0.8443, 0.1865, -0.1752, -2.0595,
0.1850, -1.1571, -0.4243])
>>> torch.cumsum(a, dim=0)
tensor([-0.8286, -1.3175, -0.8020, 0.0423, 0.2289, 0.0537, -2.0058,
-1.8209, -2.9780, -3.4022])
| https://pytorch.org/docs/stable/generated/torch.cumsum.html | pytorch docs |
torch.autograd.graph.Node.name
abstract Node.name()
Returns the name.
Example:
>>> import torch
>>> a = torch.tensor([0., 0., 0.], requires_grad=True)
>>> b = a.clone()
>>> assert isinstance(b.grad_fn, torch.autograd.graph.Node)
>>> print(b.grad_fn.name())
CloneBackward0
Return type:
str | https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.name.html | pytorch docs |
set_multithreading_enabled
class torch.autograd.set_multithreading_enabled(mode)
Context-manager that sets multithreaded backwards on or off.
"set_multithreading_enabled" will enable or disable multithreaded
backwards based on its argument "mode". It can be used as a
context-manager or as a function.
This context manager is thread local; it will not affect
computation in other threads.
Parameters:
mode (bool) -- Flag whether to enable multithreaded
backwards ("True"), or disable ("False").
Note:
This API does not apply to forward-mode AD.
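    A short sketch of both usage styles described above:
        import torch
        x = torch.randn(4, requires_grad=True)
        # As a context manager: disable multithreaded backwards for this block only.
        with torch.autograd.set_multithreading_enabled(False):
            x.sum().backward()
        # As a function: re-enable multithreaded backwards in the current thread.
        torch.autograd.set_multithreading_enabled(True)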
| https://pytorch.org/docs/stable/generated/torch.autograd.set_multithreading_enabled.html | pytorch docs |
torch.Tensor.is_pinned
Tensor.is_pinned()
Returns true if this tensor resides in pinned memory. | https://pytorch.org/docs/stable/generated/torch.Tensor.is_pinned.html | pytorch docs |
torch.signal.windows.gaussian
torch.signal.windows.gaussian(M, *, std=1.0, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)
Computes a window with a gaussian waveform.
The gaussian window is defined as follows:
w_n = \exp{\left(-\left(\frac{n}{2\sigma}\right)^2\right)}
The window is normalized to 1 (maximum value is 1). However, the 1
doesn't appear if "M" is even and "sym" is True.
Parameters:
M (int) -- the length of the window. In other words, the
number of points of the returned window.
Keyword Arguments:
* std (float, optional) -- the standard deviation of
the gaussian. It controls how narrow or wide the window is.
Default: 1.0.
* **sym** (*bool**, **optional*) -- If *False*, returns a
periodic window suitable for use in spectral analysis. If
*True*, returns a symmetric window suitable for use in filter
design. Default: *True*.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Return type:
Tensor
Examples:
>>> # Generates a symmetric gaussian window with a standard deviation of 1.0.
>>> torch.signal.windows.gaussian(10)
tensor([4.0065e-05, 2.1875e-03, 4.3937e-02, 3.2465e-01, 8.8250e-01, 8.8250e-01, 3.2465e-01, 4.3937e-02, 2.1875e-03, 4.0065e-05])
    >>> # Generates a periodic gaussian window with a standard deviation of 0.9.
    >>> torch.signal.windows.gaussian(10, sym=False, std=0.9)
tensor([1.9858e-07, 5.1365e-05, 3.8659e-03, 8.4658e-02, 5.3941e-01, 1.0000e+00, 5.3941e-01, 8.4658e-02, 3.8659e-03, 5.1365e-05])
| https://pytorch.org/docs/stable/generated/torch.signal.windows.gaussian.html | pytorch docs |
torch.Tensor.isposinf
Tensor.isposinf() -> Tensor
See "torch.isposinf()" | https://pytorch.org/docs/stable/generated/torch.Tensor.isposinf.html | pytorch docs |
torch.Tensor.gather
Tensor.gather(dim, index) -> Tensor
See "torch.gather()" | https://pytorch.org/docs/stable/generated/torch.Tensor.gather.html | pytorch docs |
torch.linalg.lu_factor
torch.linalg.lu_factor(A, *, bool pivot=True, out=None) -> (Tensor, Tensor)
Computes a compact representation of the LU factorization with
partial pivoting of a matrix.
This function computes a compact representation of the
decomposition given by "torch.linalg.lu()". If the matrix is
square, this representation may be used in
"torch.linalg.lu_solve()" to solve system of linear equations that
share the matrix "A".
The returned decomposition is represented as a named tuple (LU,
pivots). The "LU" matrix has the same shape as the input matrix
"A". Its upper and lower triangular parts encode the non-constant
elements of "L" and "U" of the LU decomposition of "A".
The returned permutation matrix is represented by a 1-indexed
vector. pivots[i] == j represents that in the i-th step of the
algorithm, the i-th row was permuted with the j-1-th row.
On CUDA, one may use "pivot"= False. In this case, this function
returns the LU decomposition without pivoting if it exists.
Supports inputs of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if the inputs are batches of
matrices then the output has the same batch dimensions.
Note:
When inputs are on a CUDA device, this function synchronizes that
device with the CPU. For a version of this function that does not
synchronize, see "torch.linalg.lu_factor_ex()".
Warning:
The LU decomposition is almost never unique, as often there are
different permutation matrices that can yield different LU
decompositions. As such, different platforms, like SciPy, or
inputs on different devices, may produce different valid
    decompositions. Gradient computations are only supported if the
input matrix is full-rank. If this condition is not met, no error
will be thrown, but the gradient may not be finite. This is
because the LU decomposition with pivoting is not differentiable
at these points.
See also:
"torch.linalg.lu_solve()" solves a system of linear equations
given the output of this function provided the input matrix was
square and invertible.
"torch.lu_unpack()" unpacks the tensors returned by "lu_factor()"
into the three matrices *P, L, U* that form the decomposition.
"torch.linalg.lu()" computes the LU decomposition with partial
pivoting of a possibly non-square matrix. It is a composition of
"lu_factor()" and "torch.lu_unpack()".
"torch.linalg.solve()" solves a system of linear equations. It is
a composition of "lu_factor()" and "lu_solve()".
Parameters:
    A (Tensor) -- tensor of shape (*, m, n) where * is
    zero or more batch dimensions.
Keyword Arguments:
* pivot (bool, optional) -- Whether to compute the LU
decomposition with partial pivoting, or the regular LU
decomposition. "pivot"= False not supported on CPU. Default:
    True.
* **out** (*tuple**, **optional*) -- tuple of two tensors to
write the output to. Ignored if *None*. Default: *None*.
Returns:
A named tuple (LU, pivots).
Raises:
RuntimeError -- if the "A" matrix is not invertible or any
matrix in a batched "A" is not invertible.
Examples:
>>> A = torch.randn(2, 3, 3)
>>> B1 = torch.randn(2, 3, 4)
>>> B2 = torch.randn(2, 3, 7)
>>> A_factor = torch.linalg.lu_factor(A)
>>> X1 = torch.linalg.lu_solve(A_factor, B1)
>>> X2 = torch.linalg.lu_solve(A_factor, B2)
>>> torch.allclose(A @ X1, B1)
True
>>> torch.allclose(A @ X2, B2)
True
| https://pytorch.org/docs/stable/generated/torch.linalg.lu_factor.html | pytorch docs |
torch.logical_not
torch.logical_not(input, *, out=None) -> Tensor
Computes the element-wise logical NOT of the given input tensor. If
not specified, the output tensor will have the bool dtype. If the
input tensor is not a bool tensor, zeros are treated as "False" and
non-zeros are treated as "True".
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.logical_not(torch.tensor([True, False]))
tensor([False, True])
>>> torch.logical_not(torch.tensor([0, 1, -10], dtype=torch.int8))
tensor([ True, False, False])
>>> torch.logical_not(torch.tensor([0., 1.5, -10.], dtype=torch.double))
tensor([ True, False, False])
>>> torch.logical_not(torch.tensor([0., 1., -10.], dtype=torch.double), out=torch.empty(3, dtype=torch.int16))
tensor([1, 0, 0], dtype=torch.int16)
| https://pytorch.org/docs/stable/generated/torch.logical_not.html | pytorch docs |
LazyConvTranspose3d
class torch.nn.LazyConvTranspose3d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)
A "torch.nn.ConvTranspose3d" module with lazy initialization of the
"in_channels" argument of the "ConvTranspose3d" that is inferred
from the "input.size(1)". The attributes that will be lazily
initialized are weight and bias.
Check the "torch.nn.modules.lazy.LazyModuleMixin" for further
documentation on lazy modules and their limitations.
Parameters:
* out_channels (int) -- Number of channels produced by the
convolution
* **kernel_size** (*int** or **tuple*) -- Size of the convolving
kernel
* **stride** (*int** or **tuple**, **optional*) -- Stride of the
convolution. Default: 1
* **padding** (*int** or **tuple**, **optional*) -- "dilation *
(kernel_size - 1) - padding" zero-padding will be added to
both sides of each dimension in the input. Default: 0
* **output_padding** (*int** or **tuple**, **optional*) --
Additional size added to one side of each dimension in the
output shape. Default: 0
* **groups** (*int**, **optional*) -- Number of blocked
connections from input channels to output channels. Default: 1
* **bias** (*bool**, **optional*) -- If "True", adds a learnable
bias to the output. Default: "True"
* **dilation** (*int** or **tuple**, **optional*) -- Spacing
between kernel elements. Default: 1
See also:
"torch.nn.ConvTranspose3d" and
"torch.nn.modules.lazy.LazyModuleMixin"
cls_to_become
alias of "ConvTranspose3d"
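    A small sketch of the lazy initialization behaviour (input shapes chosen
    arbitrarily for illustration):
        import torch
        import torch.nn as nn
        # in_channels is omitted; it is inferred from input.size(1) on the first call.
        deconv = nn.LazyConvTranspose3d(out_channels=8, kernel_size=3)
        x = torch.randn(1, 16, 4, 4, 4)     # N, C_in=16, D, H, W
        y = deconv(x)                        # weight and bias are materialized here
        print(y.shape)                       # torch.Size([1, 8, 6, 6, 6])
        print(type(deconv).__name__)         # ConvTranspose3d, per cls_to_become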
| https://pytorch.org/docs/stable/generated/torch.nn.LazyConvTranspose3d.html | pytorch docs |
torch.Tensor.logit
Tensor.logit() -> Tensor
See "torch.logit()" | https://pytorch.org/docs/stable/generated/torch.Tensor.logit.html | pytorch docs |
torch.nn.functional.hardtanh_
torch.nn.functional.hardtanh_(input, min_val=- 1., max_val=1.) -> Tensor
In-place version of "hardtanh()". | https://pytorch.org/docs/stable/generated/torch.nn.functional.hardtanh_.html | pytorch docs |
torch.cuda.reset_max_memory_allocated
torch.cuda.reset_max_memory_allocated(device=None)
Resets the starting point in tracking maximum GPU memory occupied
by tensors for a given device.
See "max_memory_allocated()" for details.
Parameters:
device (torch.device or int, optional) -- selected
device. Returns statistic for the current device, given by
"current_device()", if "device" is "None" (default).
Warning:
This function now calls "reset_peak_memory_stats()", which resets
    *all* peak memory stats.
Note:
See Memory management for more details about GPU memory
management.
| https://pytorch.org/docs/stable/generated/torch.cuda.reset_max_memory_allocated.html | pytorch docs |
torch.Tensor.dense_dim
Tensor.dense_dim() -> int
Return the number of dense dimensions in a sparse tensor "self".
Note:
Returns "len(self.shape)" if "self" is not a sparse tensor.
See also "Tensor.sparse_dim()" and hybrid tensors. | https://pytorch.org/docs/stable/generated/torch.Tensor.dense_dim.html | pytorch docs |
torch.Tensor.expm1_
Tensor.expm1_() -> Tensor
In-place version of "expm1()" | https://pytorch.org/docs/stable/generated/torch.Tensor.expm1_.html | pytorch docs |
torch.cuda.initial_seed
torch.cuda.initial_seed()
Returns the current random seed of the current GPU.
Warning:
This function eagerly initializes CUDA.
Return type:
int | https://pytorch.org/docs/stable/generated/torch.cuda.initial_seed.html | pytorch docs |
torch.Tensor.pow_
Tensor.pow_(exponent) -> Tensor
In-place version of "pow()" | https://pytorch.org/docs/stable/generated/torch.Tensor.pow_.html | pytorch docs |
PruningContainer
class torch.nn.utils.prune.PruningContainer(*args)
Container holding a sequence of pruning methods for iterative
pruning. Keeps track of the order in which pruning methods are
applied and handles combining successive pruning calls.
Accepts as argument an instance of a BasePruningMethod or an
iterable of them.
add_pruning_method(method)
Adds a child pruning "method" to the container.
Parameters:
**method** (*subclass of BasePruningMethod*) -- child pruning
method to be added to the container.
    classmethod apply(module, name, *args, importance_scores=None, **kwargs)
Adds the forward pre-hook that enables pruning on the fly and
the reparametrization of a tensor in terms of the original
tensor and the pruning mask.
Parameters:
* **module** (*nn.Module*) -- module containing the tensor to
prune
* **name** (*str*) -- parameter name within "module" on which
pruning will act.
* **args** -- arguments passed on to a subclass of
"BasePruningMethod"
* **importance_scores** (*torch.Tensor*) -- tensor of
importance scores (of same shape as module parameter) used
to compute mask for pruning. The values in this tensor
indicate the importance of the corresponding elements in
the parameter being pruned. If unspecified or None, the
parameter will be used in its place.
* **kwargs** -- keyword arguments passed on to a subclass of
a "BasePruningMethod"
apply_mask(module)
Simply handles the multiplication between the parameter being
pruned and the generated mask. Fetches the mask and the original
tensor from the module and returns the pruned version of the
tensor.
Parameters:
**module** (*nn.Module*) -- module containing the tensor to
prune
Returns:
pruned version of the input tensor
Return type:
pruned_tensor (torch.Tensor)
compute_mask(t, default_mask)
Applies the latest "method" by computing the new partial masks
and returning its combination with the "default_mask". The new
partial mask should be computed on the entries or channels that
were not zeroed out by the "default_mask". Which portions of the
tensor "t" the new mask will be calculated from depends on the
"PRUNING_TYPE" (handled by the type handler):
* for 'unstructured', the mask will be computed from the raveled
list of nonmasked entries;
* for 'structured', the mask will be computed from the nonmasked
channels in the tensor;
* for 'global', the mask will be computed across all entries.
Parameters:
* **t** (*torch.Tensor*) -- tensor representing the parameter
to prune (of same dimensions as "default_mask").
         * **default_mask** (*torch.Tensor*) -- mask from previous
           pruning iteration.
Returns:
new mask that combines the effects of the "default_mask" and
the new mask from the current pruning "method" (of same
dimensions as "default_mask" and "t").
Return type:
mask (torch.Tensor)
prune(t, default_mask=None, importance_scores=None)
Computes and returns a pruned version of input tensor "t"
according to the pruning rule specified in "compute_mask()".
Parameters:
* **t** (*torch.Tensor*) -- tensor to prune (of same
dimensions as "default_mask").
* **importance_scores** (*torch.Tensor*) -- tensor of
importance scores (of same shape as "t") used to compute
mask for pruning "t". The values in this tensor indicate
the importance of the corresponding elements in the "t"
that is being pruned. If unspecified or None, the tensor
"t" will be used in its place.
* **default_mask** (*torch.Tensor**, **optional*) -- mask
from previous pruning iteration, if any. To be considered
when determining what portion of the tensor that pruning
should act on. If None, default to a mask of ones.
Returns:
pruned version of tensor "t".
remove(module)
Removes the pruning reparameterization from a module. The pruned
parameter named "name" remains permanently pruned, and the
parameter named "name+'_orig'" is removed from the parameter
list. Similarly, the buffer named "name+'_mask'" is removed from
the buffers.
Note:
Pruning itself is NOT undone or reversed!
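    A minimal sketch of how a PruningContainer arises from iterative pruning;
    it peeks at the module's forward pre-hooks, which is an implementation
    detail rather than a public API:
        import torch.nn as nn
        import torch.nn.utils.prune as prune
        m = nn.Linear(4, 2)
        # Two successive pruning calls on the same parameter are combined:
        # the second call wraps both methods in a PruningContainer, whose
        # compute_mask() only prunes entries left unmasked by the first call.
        prune.l1_unstructured(m, name="weight", amount=0.5)
        prune.random_unstructured(m, name="weight", amount=0.25)
        for hook in m._forward_pre_hooks.values():
            print(type(hook).__name__)               # PruningContainer
            print([type(p).__name__ for p in hook])  # methods in order of application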
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.PruningContainer.html | pytorch docs |
torch.permute
torch.permute(input, dims) -> Tensor
Returns a view of the original tensor "input" with its dimensions
permuted.
Parameters:
* input (Tensor) -- the input tensor.
* **dims** (*tuple of python:int*) -- The desired ordering of
dimensions
    Example:
        >>> x = torch.randn(2, 3, 5)
        >>> x.size()
        torch.Size([2, 3, 5])
        >>> torch.permute(x, (2, 0, 1)).size()
        torch.Size([5, 2, 3])
| https://pytorch.org/docs/stable/generated/torch.permute.html | pytorch docs |
torch.Tensor.le_
Tensor.le_(other) -> Tensor
In-place version of "le()". | https://pytorch.org/docs/stable/generated/torch.Tensor.le_.html | pytorch docs |
torch.movedim
torch.movedim(input, source, destination) -> Tensor
Moves the dimension(s) of "input" at the position(s) in "source" to
the position(s) in "destination".
Other dimensions of "input" that are not explicitly moved remain in
their original order and appear at the positions not specified in
"destination".
Parameters:
* input (Tensor) -- the input tensor.
* **source** (*int** or **tuple of ints*) -- Original positions
of the dims to move. These must be unique.
* **destination** (*int** or **tuple of ints*) -- Destination
positions for each of the original dims. These must also be
unique.
Examples:
>>> t = torch.randn(3,2,1)
>>> t
tensor([[[-0.3362],
[-0.8437]],
[[-0.9627],
[ 0.1727]],
[[ 0.5173],
[-0.1398]]])
>>> torch.movedim(t, 1, 0).shape
torch.Size([2, 3, 1])
>>> torch.movedim(t, 1, 0)
tensor([[[-0.3362],
[-0.9627],
[ 0.5173]],
[[-0.8437],
[ 0.1727],
[-0.1398]]])
>>> torch.movedim(t, (1, 2), (0, 1)).shape
torch.Size([2, 1, 3])
>>> torch.movedim(t, (1, 2), (0, 1))
tensor([[[-0.3362, -0.9627, 0.5173]],
[[-0.8437, 0.1727, -0.1398]]])
| https://pytorch.org/docs/stable/generated/torch.movedim.html | pytorch docs |
CustomFromMask
class torch.nn.utils.prune.CustomFromMask(mask)
classmethod apply(module, name, mask)
Adds the forward pre-hook that enables pruning on the fly and
the reparametrization of a tensor in terms of the original
tensor and the pruning mask.
Parameters:
* **module** (*nn.Module*) -- module containing the tensor to
prune
* **name** (*str*) -- parameter name within "module" on which
pruning will act.
apply_mask(module)
Simply handles the multiplication between the parameter being
pruned and the generated mask. Fetches the mask and the original
tensor from the module and returns the pruned version of the
tensor.
Parameters:
**module** (*nn.Module*) -- module containing the tensor to
prune
Returns:
pruned version of the input tensor
Return type:
pruned_tensor (torch.Tensor)
    prune(t, default_mask=None, importance_scores=None)
Computes and returns a pruned version of input tensor "t"
according to the pruning rule specified in "compute_mask()".
Parameters:
* **t** (*torch.Tensor*) -- tensor to prune (of same
dimensions as "default_mask").
* **importance_scores** (*torch.Tensor*) -- tensor of
importance scores (of same shape as "t") used to compute
mask for pruning "t". The values in this tensor indicate
the importance of the corresponding elements in the "t"
that is being pruned. If unspecified or None, the tensor
"t" will be used in its place.
* **default_mask** (*torch.Tensor**, **optional*) -- mask
from previous pruning iteration, if any. To be considered
when determining what portion of the tensor that pruning
should act on. If None, default to a mask of ones.
Returns:
pruned version of tensor "t".
    remove(module)
Removes the pruning reparameterization from a module. The pruned
parameter named "name" remains permanently pruned, and the
parameter named "name+'_orig'" is removed from the parameter
list. Similarly, the buffer named "name+'_mask'" is removed from
the buffers.
Note:
Pruning itself is NOT undone or reversed!
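    A minimal sketch using the "torch.nn.utils.prune.custom_from_mask()"
    convenience function, which applies this method to a module parameter:
        import torch
        import torch.nn as nn
        import torch.nn.utils.prune as prune
        m = nn.Linear(5, 3)
        # Reparametrize `bias` as bias_orig * bias_mask with a user-supplied mask.
        prune.custom_from_mask(m, name="bias", mask=torch.tensor([0, 1, 0]))
        print(m.bias)             # elements 0 and 2 are zeroed out
        prune.remove(m, "bias")   # make the pruning permanent, drop the reparametrization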
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.CustomFromMask.html | pytorch docs |
torch._foreach_expm1_
torch._foreach_expm1_(self: List[Tensor]) -> None
Apply "torch.expm1()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_expm1_.html | pytorch docs |
torch.Tensor.greater
Tensor.greater(other) -> Tensor
See "torch.greater()". | https://pytorch.org/docs/stable/generated/torch.Tensor.greater.html | pytorch docs |
torch.linalg.eigvalsh
torch.linalg.eigvalsh(A, UPLO='L', *, out=None) -> Tensor
Computes the eigenvalues of a complex Hermitian or real symmetric
matrix.
Letting \mathbb{K} be \mathbb{R} or \mathbb{C}, the eigenvalues
of a complex Hermitian or real symmetric matrix A \in
\mathbb{K}^{n \times n} are defined as the roots (counted with
multiplicity) of the polynomial p of degree n given by
p(\lambda) = \operatorname{det}(A - \lambda
\mathrm{I}_n)\mathrlap{\qquad \lambda \in \mathbb{R}}
where \mathrm{I}_n is the n-dimensional identity matrix. The
eigenvalues of a real symmetric or complex Hermitian matrix are
always real.
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if "A" is a batch of matrices
then the output has the same batch dimensions.
The eigenvalues are returned in ascending order.
"A" is assumed to be Hermitian (resp. symmetric), but this is not | https://pytorch.org/docs/stable/generated/torch.linalg.eigvalsh.html | pytorch docs |
checked internally, instead:
If "UPLO"= 'L' (default), only the lower triangular part of the
matrix is used in the computation.
If "UPLO"= 'U', only the upper triangular part of the matrix is
used.
Note:
When inputs are on a CUDA device, this function synchronizes that
device with the CPU.
See also:
"torch.linalg.eigh()" computes the full eigenvalue decomposition.
Parameters:
    * A (Tensor) -- tensor of shape (*, n, n) where * is
zero or more batch dimensions consisting of symmetric or
Hermitian matrices.
* **UPLO** (*'L'**, **'U'**, **optional*) -- controls whether to
use the upper or lower triangular part of "A" in the
computations. Default: *'L'*.
Keyword Arguments:
out (Tensor, optional) -- output tensor. Ignored if
None. Default: None.
Returns:
    A real-valued tensor containing the eigenvalues even when "A" is
complex. The eigenvalues are returned in ascending order.
Examples:
>>> A = torch.randn(2, 2, dtype=torch.complex128)
>>> A = A + A.T.conj() # creates a Hermitian matrix
>>> A
tensor([[2.9228+0.0000j, 0.2029-0.0862j],
[0.2029+0.0862j, 0.3464+0.0000j]], dtype=torch.complex128)
>>> torch.linalg.eigvalsh(A)
tensor([0.3277, 2.9415], dtype=torch.float64)
>>> A = torch.randn(3, 2, 2, dtype=torch.float64)
>>> A = A + A.mT # creates a batch of symmetric matrices
>>> torch.linalg.eigvalsh(A)
tensor([[ 2.5797, 3.4629],
[-4.1605, 1.3780],
[-3.1113, 2.7381]], dtype=torch.float64)
| https://pytorch.org/docs/stable/generated/torch.linalg.eigvalsh.html | pytorch docs |
torch.nn.functional.adaptive_max_pool3d
torch.nn.functional.adaptive_max_pool3d(*args, **kwargs)
Applies a 3D adaptive max pooling over an input signal composed of
several input planes.
See "AdaptiveMaxPool3d" for details and output shape.
Parameters:
* output_size -- the target output size (single integer or
triple-integer tuple)
* **return_indices** -- whether to return pooling indices.
Default: "False"
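    A short example (shapes are illustrative):
        import torch
        import torch.nn.functional as F
        x = torch.randn(1, 16, 8, 9, 10)                     # N, C, D, H, W
        out = F.adaptive_max_pool3d(x, output_size=(4, 4, 4))
        print(out.shape)                                      # torch.Size([1, 16, 4, 4, 4])
        # With return_indices=True, the pooling indices are returned as well.
        out, idx = F.adaptive_max_pool3d(x, output_size=4, return_indices=True)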
| https://pytorch.org/docs/stable/generated/torch.nn.functional.adaptive_max_pool3d.html | pytorch docs |
torch.mv
torch.mv(input, vec, *, out=None) -> Tensor
Performs a matrix-vector product of the matrix "input" and the
vector "vec".
If "input" is a (n \times m) tensor, "vec" is a 1-D tensor of size
m, "out" will be 1-D of size n.
Note:
This function does not broadcast.
Parameters:
* input (Tensor) -- matrix to be multiplied
* **vec** (*Tensor*) -- vector to be multiplied
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> mat = torch.randn(2, 3)
>>> vec = torch.randn(3)
>>> torch.mv(mat, vec)
tensor([ 1.0404, -0.6361])
| https://pytorch.org/docs/stable/generated/torch.mv.html | pytorch docs |
torch.Tensor.median
Tensor.median(dim=None, keepdim=False)
See "torch.median()" | https://pytorch.org/docs/stable/generated/torch.Tensor.median.html | pytorch docs |
default_qat_qconfig
torch.quantization.qconfig.default_qat_qconfig
alias of QConfig(activation=functools.partial(<class 'torch.ao.quantization.fake_quantize.FakeQuantize'>,
observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,
quant_min=0, quant_max=255, dtype=torch.quint8,
qscheme=torch.per_tensor_affine, reduce_range=True){},
weight=functools.partial(<class 'torch.ao.quantization.fake_quantize.FakeQuantize'>,
observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,
quant_min=-128, quant_max=127, dtype=torch.qint8,
qscheme=torch.per_tensor_symmetric, reduce_range=False){}) | https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_qat_qconfig.html | pytorch docs |
torch.Tensor.set_
Tensor.set_(source=None, storage_offset=0, size=None, stride=None) -> Tensor
Sets the underlying storage, size, and strides. If "source" is a
tensor, "self" tensor will share the same storage and have the same
size and strides as "source". Changes to elements in one tensor
will be reflected in the other.
If "source" is a "Storage", the method sets the underlying storage,
offset, size, and stride.
Parameters:
* source (Tensor or Storage) -- the tensor or storage
to use
* **storage_offset** (*int**, **optional*) -- the offset in the
storage
* **size** (*torch.Size**, **optional*) -- the desired size.
Defaults to the size of the source.
* **stride** (*tuple**, **optional*) -- the desired stride.
Defaults to C-contiguous strides.
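    A small sketch of the sharing behaviour when "source" is a tensor:
        import torch
        t = torch.zeros(6)
        s = torch.arange(6.)
        t.set_(s)            # t now shares s's storage and has s's size and strides
        t[0] = 100.0
        print(s[0])          # tensor(100.) -- the change is visible through s as well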
| https://pytorch.org/docs/stable/generated/torch.Tensor.set_.html | pytorch docs |
torch.amax
torch.amax(input, dim, keepdim=False, *, out=None) -> Tensor
Returns the maximum value of each slice of the "input" tensor in
the given dimension(s) "dim".
Note:
The difference between "max"/"min" and "amax"/"amin" is:
* "amax"/"amin" supports reducing on multiple dimensions,
* "amax"/"amin" does not return indices,
* "amax"/"amin" evenly distributes gradient between equal
values, while "max(dim)"/"min(dim)" propagates gradient only
to a single index in the source tensor.
If "keepdim" is "True", the output tensor is of the same size as
"input" except in the dimension(s) "dim" where it is of size 1.
Otherwise, "dim" is squeezed (see "torch.squeeze()"), resulting in
the output tensor having 1 (or "len(dim)") fewer dimension(s).
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int** or **tuple of ints*) -- the dimension or
dimensions to reduce.
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.8177, 1.4878, -0.2491, 0.9130],
[-0.7158, 1.1775, 2.0992, 0.4817],
[-0.0053, 0.0164, -1.3738, -0.0507],
[ 1.9700, 1.1106, -1.0318, -1.0816]])
>>> torch.amax(a, 1)
tensor([1.4878, 2.0992, 0.0164, 1.9700])
| https://pytorch.org/docs/stable/generated/torch.amax.html | pytorch docs |
torch.cuda.manual_seed
torch.cuda.manual_seed(seed)
Sets the seed for generating random numbers for the current GPU.
It's safe to call this function if CUDA is not available; in that
case, it is silently ignored.
Parameters:
seed (int) -- The desired seed.
Warning:
If you are working with a multi-GPU model, this function is
insufficient to get determinism. To seed all GPUs, use
"manual_seed_all()".
| https://pytorch.org/docs/stable/generated/torch.cuda.manual_seed.html | pytorch docs |
torch.lobpcg
torch.lobpcg(A, k=None, B=None, X=None, n=None, iK=None, niter=None, tol=None, largest=None, method=None, tracker=None, ortho_iparams=None, ortho_fparams=None, ortho_bparams=None)
Find the k largest (or smallest) eigenvalues and the corresponding
eigenvectors of a symmetric positive definite generalized
eigenvalue problem using matrix-free LOBPCG methods.
This function is a front-end to the following LOBPCG algorithms
selectable via method argument:
*method="basic"* - the LOBPCG method introduced by Andrew
Knyazev, see [Knyazev2001]. A less robust method, may fail when
Cholesky is applied to singular input.
*method="ortho"* - the LOBPCG method with orthogonal basis
selection [StathopoulosEtal2002]. A robust method.
Supported inputs are dense, sparse, and batches of dense matrices.
Note:
In general, the basic method spends least time per iteration.
However, the robust methods converge much faster and are more
| https://pytorch.org/docs/stable/generated/torch.lobpcg.html | pytorch docs |
stable. So, the usage of the basic method is generally not
recommended but there exist cases where the usage of the basic
method may be preferred.
Warning:
The backward method does not support sparse and complex inputs.
It works only when *B* is not provided (i.e. *B == None*). We are
actively working on extensions, and the details of the algorithms
are going to be published promptly.
Warning:
While it is assumed that *A* is symmetric, *A.grad* is not. To
make sure that *A.grad* is symmetric, so that *A - t * A.grad* is
symmetric in first-order optimization routines, prior to running
*lobpcg* we do the following symmetrization map: *A -> (A +
A.t()) / 2*. The map is performed only when the *A* requires
gradients.
Parameters:
* A (Tensor) -- the input tensor of size (*, m, m)
* **B** (*Tensor**, **optional*) -- the input tensor of size (*,
m, m). When not specified, *B* is interpreted as identity
| https://pytorch.org/docs/stable/generated/torch.lobpcg.html | pytorch docs |
matrix.
* **X** (*tensor**, **optional*) -- the input tensor of size (*,
m, n) where *k <= n <= m*. When specified, it is used as
initial approximation of eigenvectors. X must be a dense
tensor.
* **iK** (*tensor**, **optional*) -- the input tensor of size
(*, m, m). When specified, it will be used as preconditioner.
* **k** (*integer**, **optional*) -- the number of requested
eigenpairs. Default is the number of X columns (when
specified) or *1*.
* **n** (*integer**, **optional*) -- if X is not specified then
*n* specifies the size of the generated random approximation
of eigenvectors. Default value for *n* is *k*. If X is
specified, the value of *n* (when specified) must be the
number of X columns.
* **tol** (*float**, **optional*) -- residual tolerance for
stopping criterion. Default is *feps ** 0.5* where *feps* is
| https://pytorch.org/docs/stable/generated/torch.lobpcg.html | pytorch docs |
smallest non-zero floating-point number of the given input
tensor A data type.
* **largest** (*bool**, **optional*) -- when True, solve the
eigenproblem for the largest eigenvalues. Otherwise, solve the
eigenproblem for smallest eigenvalues. Default is *True*.
* **method** (*str**, **optional*) -- select LOBPCG method. See
the description of the function above. Default is "ortho".
* **niter** (*int**, **optional*) -- maximum number of
iterations. When reached, the iteration process is hard-
stopped and the current approximation of eigenpairs is
returned. For infinite iteration but until convergence
criteria is met, use *-1*.
* **tracker** (*callable**, **optional*) --
a function for tracing the iteration process. When specified,
it is called at each iteration step with LOBPCG instance as an
argument. The LOBPCG instance holds the full state of the
| https://pytorch.org/docs/stable/generated/torch.lobpcg.html | pytorch docs |
iteration process in the following attributes:
*iparams*, *fparams*, *bparams* - dictionaries of integer,
float, and boolean valued input parameters, respectively
*ivars*, *fvars*, *bvars*, *tvars* - dictionaries of
integer, float, boolean, and Tensor valued iteration
variables, respectively.
*A*, *B*, *iK* - input Tensor arguments.
*E*, *X*, *S*, *R* - iteration Tensor variables.
For instance:
*ivars["istep"]* - the current iteration step *X* - the
current approximation of eigenvectors *E* - the current
approximation of eigenvalues *R* - the current residual
*ivars["converged_count"]* - the current number of
converged eigenpairs *tvars["rerr"]* - the current state of
convergence criteria
Note that when *tracker* stores Tensor objects from the LOBPCG
instance, it must make copies of these.
If *tracker* sets *bvars["force_stop"] = True*, the iteration
process will be hard-stopped.
* **ortho_iparams** (*dict**, **optional*) -- various parameters
to LOBPCG algorithm when using *method="ortho"*.
* **ortho_fparams** (*dict**, **optional*) -- various parameters
to LOBPCG algorithm when using *method="ortho"*.
* **ortho_bparams** (*dict**, **optional*) -- various parameters
to LOBPCG algorithm when using *method="ortho"*.
Returns:
tensor of eigenvalues of size (*, k)
X (Tensor): tensor of eigenvectors of size (*, m, k)
Return type:
E (Tensor)
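    A minimal sketch on a random symmetric positive definite matrix (sizes
    are illustrative):
        import torch
        n = 100
        S = torch.randn(n, n)
        A = S @ S.mT + n * torch.eye(n)    # symmetric positive definite
        # Find the 3 largest eigenpairs with the default "ortho" method.
        E, X = torch.lobpcg(A, k=3, largest=True)
        print(E.shape, X.shape)            # torch.Size([3]) torch.Size([100, 3])
        # Residual check: A @ X should be close to X scaled column-wise by E.
        print(torch.linalg.norm(A @ X - X * E) / torch.linalg.norm(A @ X))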
-[ References ]-
[Knyazev2001] Andrew V. Knyazev. (2001) Toward the Optimal
Preconditioned Eigensolver: Locally Optimal Block Preconditioned
Conjugate Gradient Method. SIAM J. Sci. Comput., 23(2), 517-541.
    (25 pages) https://epubs.siam.org/doi/abs/10.1137/S1064827500366124
[StathopoulosEtal2002] Andreas Stathopoulos and Kesheng Wu. (2002)
A Block Orthogonalization Procedure with Constant Synchronization
Requirements. SIAM J. Sci. Comput., 23(6), 2165-2182. (18 pages)
https://epubs.siam.org/doi/10.1137/S1064827500370883
[DuerschEtal2018] Jed A. Duersch, Meiyue Shao, Chao Yang, Ming Gu.
(2018) A Robust and Efficient Implementation of LOBPCG. SIAM J.
Sci. Comput., 40(5), C655-C676. (22 pages)
https://epubs.siam.org/doi/abs/10.1137/17M1129830 | https://pytorch.org/docs/stable/generated/torch.lobpcg.html | pytorch docs |
torch.Tensor.movedim
Tensor.movedim(source, destination) -> Tensor
See "torch.movedim()" | https://pytorch.org/docs/stable/generated/torch.Tensor.movedim.html | pytorch docs |
torch.signal.windows.general_hamming
torch.signal.windows.general_hamming(M, *, alpha=0.54, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)
Computes the general Hamming window.
The general Hamming window is defined as follows:
w_n = \alpha - (1 - \alpha) \cos{ \left( \frac{2 \pi n}{M-1}
\right)}
The window is normalized to 1 (maximum value is 1). However, the 1
doesn't appear if "M" is even and "sym" is True.
Parameters:
M (int) -- the length of the window. In other words, the
number of points of the returned window.
Keyword Arguments:
* alpha (float, optional) -- the window coefficient.
Default: 0.54.
* **sym** (*bool**, **optional*) -- If *False*, returns a
periodic window suitable for use in spectral analysis. If
*True*, returns a symmetric window suitable for use in filter
design. Default: *True*.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Return type:
Tensor
Examples:
>>> # Generates a symmetric Hamming window with the general Hamming window.
>>> torch.signal.windows.general_hamming(10, sym=True)
tensor([0.0800, 0.1876, 0.4601, 0.7700, 0.9723, 0.9723, 0.7700, 0.4601, 0.1876, 0.0800])
>>> # Generates a periodic Hann window with the general Hamming window.
>>> torch.signal.windows.general_hamming(10, alpha=0.5, sym=False)
tensor([0.0000, 0.0955, 0.3455, 0.6545, 0.9045, 1.0000, 0.9045, 0.6545, 0.3455, 0.0955])
| https://pytorch.org/docs/stable/generated/torch.signal.windows.general_hamming.html | pytorch docs |
torch.Tensor.arctanh
Tensor.arctanh() -> Tensor
See "torch.arctanh()" | https://pytorch.org/docs/stable/generated/torch.Tensor.arctanh.html | pytorch docs |
torch.Tensor.less_equal_
Tensor.less_equal_(other) -> Tensor
In-place version of "less_equal()". | https://pytorch.org/docs/stable/generated/torch.Tensor.less_equal_.html | pytorch docs |
torch.Tensor.lu_solve
Tensor.lu_solve(LU_data, LU_pivots) -> Tensor
See "torch.lu_solve()" | https://pytorch.org/docs/stable/generated/torch.Tensor.lu_solve.html | pytorch docs |
torch.addcdiv
torch.addcdiv(input, tensor1, tensor2, *, value=1, out=None) -> Tensor
Performs the element-wise division of "tensor1" by "tensor2",
multiplies the result by the scalar "value" and adds it to "input".
Warning:
Integer division with addcdiv is no longer supported, and in a
future release addcdiv will perform a true division of tensor1
and tensor2. The historic addcdiv behavior can be implemented as
(input + value * torch.trunc(tensor1 / tensor2)).to(input.dtype)
for integer inputs and as (input + value * tensor1 / tensor2) for
float inputs. The future addcdiv behavior is just the latter
implementation: (input + value * tensor1 / tensor2), for all
dtypes.
\text{out}_i = \text{input}_i + \text{value} \times
\frac{\text{tensor1}_i}{\text{tensor2}_i}
The shapes of "input", "tensor1", and "tensor2" must be
broadcastable.
For inputs of type FloatTensor or DoubleTensor, "value" must be
a real number, otherwise an integer.
Parameters:
* input (Tensor) -- the tensor to be added
* **tensor1** (*Tensor*) -- the numerator tensor
* **tensor2** (*Tensor*) -- the denominator tensor
Keyword Arguments:
* value (Number, optional) -- multiplier for
\text{tensor1} / \text{tensor2}
* **out** (*Tensor**, **optional*) -- the output tensor.
Example:
>>> t = torch.randn(1, 3)
>>> t1 = torch.randn(3, 1)
>>> t2 = torch.randn(1, 3)
>>> torch.addcdiv(t, t1, t2, value=0.1)
tensor([[-0.2312, -3.6496, 0.1312],
[-1.0428, 3.4292, -0.1030],
[-0.5369, -0.9829, 0.0430]])
| https://pytorch.org/docs/stable/generated/torch.addcdiv.html | pytorch docs |
VerificationOptions
class torch.onnx.verification.VerificationOptions(flatten=True, ignore_none=True, check_shape=True, check_dtype=True, backend=OnnxBackend.ONNX_RUNTIME_CPU, rtol=0.001, atol=1e-07, remained_onnx_input_idx=None, acceptable_error_percentage=None)
Options for ONNX export verification.
Variables:
* flatten (bool) -- If True, unpack nested list/tuple/dict
inputs into a flattened list of Tensors for ONNX. Set this to
False if nested structures are to be preserved for ONNX, which
is usually the case with exporting ScriptModules. Default
True.
* **ignore_none** (*bool*) -- Whether to ignore None type in
torch output, which is usually the case with tracing. Set this
to False, if torch output should keep None type, which is
usually the case with exporting ScriptModules. Default to
True.
* **check_shape** (*bool*) -- Whether to check the shapes
between PyTorch and ONNX Runtime outputs are exactly the same.
Set this to False to allow output shape broadcasting. Default
to True.
* **check_dtype** (*bool*) -- Whether to check the dtypes
between PyTorch and ONNX Runtime outputs are consistent.
Default to True.
* **backend** (*torch.onnx.verification.OnnxBackend*) -- ONNX
backend for verification. Default to
OnnxBackend.ONNX_RUNTIME_CPU.
* **rtol** (*float*) -- relative tolerance in comparison between
ONNX and PyTorch outputs.
* **atol** (*float*) -- absolute tolerance in comparison between
ONNX and PyTorch outputs.
* **remained_onnx_input_idx**
(*Optional**[**Sequence**[**int**]**]*) -- If provided, only
the specified inputs will be passed to the ONNX model. Supply
a list when there are unused inputs in the model. Since unused
inputs will be removed in the exported ONNX model, supplying
all inputs will cause an error on unexpected inputs. This
parameter tells the verifier which inputs to pass into the
ONNX model.
* **acceptable_error_percentage** (*Optional**[**float**]*) --
acceptable percentage of element mismatches in comparison. It
should be a float of value between 0.0 and 1.0.
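    A hedged sketch of constructing these options; it assumes the
    "torch.onnx.verification.verify()" entry point, which accepts an
    "options" argument:
        import torch
        from torch.onnx import verification
        model = torch.nn.Linear(3, 2)
        args = (torch.randn(1, 3),)
        # Loosen the numeric comparison and skip the shape check.
        opts = verification.VerificationOptions(rtol=1e-3, atol=1e-5, check_shape=False)
        # Assumed usage: pass the options to the export-verification helper.
        verification.verify(model, args, options=opts)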
| https://pytorch.org/docs/stable/generated/torch.onnx.verification.VerificationOptions.html | pytorch docs |
torch._foreach_floor_
torch._foreach_floor_(self: List[Tensor]) -> None
Apply "torch.floor()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_floor_.html | pytorch docs |
torch.Tensor.true_divide
Tensor.true_divide(value) -> Tensor
See "torch.true_divide()" | https://pytorch.org/docs/stable/generated/torch.Tensor.true_divide.html | pytorch docs |
torch.Tensor.isinf
Tensor.isinf() -> Tensor
See "torch.isinf()" | https://pytorch.org/docs/stable/generated/torch.Tensor.isinf.html | pytorch docs |
torch.sqrt
torch.sqrt(input, *, out=None) -> Tensor
Returns a new tensor with the square-root of the elements of
"input".
\text{out}_{i} = \sqrt{\text{input}_{i}}
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([-2.0755, 1.0226, 0.0831, 0.4806])
>>> torch.sqrt(a)
tensor([ nan, 1.0112, 0.2883, 0.6933])
| https://pytorch.org/docs/stable/generated/torch.sqrt.html | pytorch docs |
torch.func.stack_module_state
torch.func.stack_module_state(models) -> params, buffers
Prepares a list of torch.nn.Modules for ensembling with "vmap()".
Given a list of "M" "nn.Modules" of the same class, returns two
dictionaries that stack all of their parameters and buffers
together, indexed by name. The stacked parameters are optimizable
(i.e. they are new leaf nodes in the autograd history that are
unrelated to the original parameters and can be passed directly to
an optimizer).
Here's an example of how to ensemble over a very simple model:
num_models = 5
batch_size = 64
in_features, out_features = 3, 3
models = [torch.nn.Linear(in_features, out_features) for i in range(num_models)]
data = torch.randn(batch_size, 3)
    def wrapper(params, buffers, data):
        return torch.func.functional_call(models[0], (params, buffers), data)
params, buffers = stack_module_state(models)
output = vmap(wrapper, (0, 0, None))(params, buffers, data)
assert output.shape == (num_models, batch_size, out_features)
When there are submodules, this follows state dict naming conventions:
import torch.nn as nn
    class Foo(nn.Module):
        def __init__(self, in_features, out_features):
            super().__init__()
            hidden = 4
            self.l1 = nn.Linear(in_features, hidden)
            self.l2 = nn.Linear(hidden, out_features)
        def forward(self, x):
            return self.l2(self.l1(x))
num_models = 5
in_features, out_features = 3, 3
models = [Foo(in_features, out_features) for i in range(num_models)]
params, buffers = stack_module_state(models)
print(list(params.keys())) # "l1.weight", "l1.bias", "l2.weight", "l2.bias"
Warning:
All of the modules being stacked together must be the same
(except for the values of their parameters/buffers). For example,
they should be in the same mode (training vs eval).
Return type:
Tuple[Dict[str, Any], Dict[str, Any]] | https://pytorch.org/docs/stable/generated/torch.func.stack_module_state.html | pytorch docs |
swap_module
class torch.quantization.swap_module(mod, mapping, custom_module_class_mapping)
Swaps the module if it has a quantized counterpart and it has an
observer attached.
Parameters:
* mod -- input module
* **mapping** -- a dictionary that maps from nn module to nnq
module
Returns:
The corresponding quantized module of mod | https://pytorch.org/docs/stable/generated/torch.quantization.swap_module.html | pytorch docs |
ConvReLU2d
class torch.ao.nn.intrinsic.quantized.ConvReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)
A ConvReLU2d module is a fused module of Conv2d and ReLU
We adopt the same interface as "torch.ao.nn.quantized.Conv2d".
Variables:
torch.ao.nn.quantized.Conv2d (Same as) -- | https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.ConvReLU2d.html | pytorch docs |
torch.narrow
torch.narrow(input, dim, start, length) -> Tensor
Returns a new tensor that is a narrowed version of "input" tensor.
The dimension "dim" is input from "start" to "start + length". The
returned tensor and "input" tensor share the same underlying
storage.
Parameters:
* input (Tensor) -- the tensor to narrow
* **dim** (*int*) -- the dimension along which to narrow
* **start** (*int** or **Tensor*) -- index of the element to
start the narrowed dimension from. Can be negative, which
means indexing from the end of *dim*. If *Tensor*, it must be
      a 0-dim integral *Tensor* (bools not allowed)
* **length** (*int*) -- length of the narrowed dimension, must
be weakly positive
Example:
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> torch.narrow(x, 0, 0, 2)
tensor([[ 1, 2, 3],
[ 4, 5, 6]])
>>> torch.narrow(x, 1, 1, 2)
tensor([[ 2, 3],
[ 5, 6],
[ 8, 9]])
>>> torch.narrow(x, -1, torch.tensor(-1), 1)
tensor([[3],
[6],
[9]]) | https://pytorch.org/docs/stable/generated/torch.narrow.html | pytorch docs |
float16_static_qconfig
torch.quantization.qconfig.float16_static_qconfig
alias of QConfig(activation=functools.partial(<class 'torch.ao.quantization.observer.PlaceholderObserver'>,
dtype=torch.float16){}, weight=functools.partial(<class 'torch.ao.quantization.observer.PlaceholderObserver'>,
dtype=torch.float16){}) | https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.float16_static_qconfig.html | pytorch docs |
torch.nn.functional.dropout2d
torch.nn.functional.dropout2d(input, p=0.5, training=True, inplace=False)
Randomly zero out entire channels of the input tensor (a channel is a
2D feature map, e.g., the j-th channel of the i-th sample in the
batched input is the 2D tensor \text{input}[i, j]). Each channel
will be zeroed out independently on every forward call with
probability "p" using samples from a Bernoulli distribution.
See "Dropout2d" for details.
Parameters:
* p (float) -- probability of a channel to be zeroed.
Default: 0.5
* **training** (*bool*) -- apply dropout if is "True". Default:
"True"
* **inplace** (*bool*) -- If set to "True", will do this
operation in-place. Default: "False"
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.dropout2d.html | pytorch docs |
torch.Tensor.cummax
Tensor.cummax(dim)
See "torch.cummax()" | https://pytorch.org/docs/stable/generated/torch.Tensor.cummax.html | pytorch docs |
torch.nn.functional.upsample_bilinear
torch.nn.functional.upsample_bilinear(input, size=None, scale_factor=None)
Upsamples the input, using bilinear upsampling.
Warning:
This function is deprecated in favor of
"torch.nn.functional.interpolate()". This is equivalent with
"nn.functional.interpolate(..., mode='bilinear',
align_corners=True)".
Expected inputs are spatial (4 dimensional). Use
    upsample_trilinear for volumetric (5 dimensional) inputs.
Parameters:
* input (Tensor) -- input
* **size** (*int** or **Tuple**[**int**, **int**]*) -- output
spatial size.
* **scale_factor** (*int** or **Tuple**[**int**, **int**]*) --
multiplier for spatial size
Note:
This operation may produce nondeterministic gradients when given
tensors on a CUDA device. See Reproducibility for more
information.
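    A small sketch of the recommended replacement mentioned in the warning
    above:
        import torch
        import torch.nn.functional as F
        x = torch.randn(1, 3, 8, 8)     # N, C, H, W (spatial, 4 dimensional)
        y_old = F.upsample_bilinear(x, scale_factor=2)     # deprecated
        y_new = F.interpolate(x, scale_factor=2, mode="bilinear",
                              align_corners=True)          # equivalent call
        assert torch.allclose(y_old, y_new)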
| https://pytorch.org/docs/stable/generated/torch.nn.functional.upsample_bilinear.html | pytorch docs |
torch.reciprocal
torch.reciprocal(input, *, out=None) -> Tensor
Returns a new tensor with the reciprocal of the elements of "input"
\text{out}_{i} = \frac{1}{\text{input}_{i}}
Note:
Unlike NumPy's reciprocal, torch.reciprocal supports integral
inputs. Integral inputs to reciprocal are automatically promoted
to the default scalar type.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([-0.4595, -2.1219, -1.4314, 0.7298])
>>> torch.reciprocal(a)
tensor([-2.1763, -0.4713, -0.6986, 1.3702])
| https://pytorch.org/docs/stable/generated/torch.reciprocal.html | pytorch docs |
torch.cuda.reset_max_memory_cached
torch.cuda.reset_max_memory_cached(device=None)
Resets the starting point in tracking maximum GPU memory managed by
the caching allocator for a given device.
See "max_memory_cached()" for details.
Parameters:
device (torch.device or int, optional) -- selected
device. Returns statistic for the current device, given by
"current_device()", if "device" is "None" (default).
Warning:
This function now calls "reset_peak_memory_stats()", which resets
    *all* peak memory stats.
Note:
See Memory management for more details about GPU memory
management.
| https://pytorch.org/docs/stable/generated/torch.cuda.reset_max_memory_cached.html | pytorch docs |
torch.Tensor.lgamma
Tensor.lgamma() -> Tensor
See "torch.lgamma()" | https://pytorch.org/docs/stable/generated/torch.Tensor.lgamma.html | pytorch docs |
SparseAdam
class torch.optim.SparseAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, maximize=False)
Implements lazy version of Adam algorithm suitable for sparse
tensors.
In this variant, only moments that show up in the gradient get
updated, and only those portions of the gradient get applied to the
parameters.
Parameters:
* params (iterable) -- iterable of parameters to optimize
or dicts defining parameter groups
* **lr** (*float**, **optional*) -- learning rate (default:
1e-3)
* **betas** (*Tuple**[**float**, **float**]**, **optional*) --
coefficients used for computing running averages of gradient
and its square (default: (0.9, 0.999))
* **eps** (*float**, **optional*) -- term added to the
denominator to improve numerical stability (default: 1e-8)
* **maximize** (*bool**, **optional*) -- maximize the params
based on the objective, instead of minimizing (default: False)
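    A minimal usage sketch with a module that produces sparse gradients:
        import torch
        emb = torch.nn.Embedding(10, 3, sparse=True)   # sparse=True -> sparse gradients
        opt = torch.optim.SparseAdam(emb.parameters(), lr=1e-3)
        idx = torch.tensor([1, 4, 4, 7])
        loss = emb(idx).pow(2).sum()
        loss.backward()          # emb.weight.grad is a sparse tensor
        opt.step()               # only the rows that appear in the gradient are updated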
add_param_group(param_group)
Add a param group to the "Optimizer" s *param_groups*.
This can be useful when fine tuning a pre-trained network as
frozen layers can be made trainable and added to the "Optimizer"
as training progresses.
Parameters:
**param_group** (*dict*) -- Specifies what Tensors should be
optimized along with group specific optimization options.
load_state_dict(state_dict)
Loads the optimizer state.
Parameters:
**state_dict** (*dict*) -- optimizer state. Should be an
object returned from a call to "state_dict()".
register_step_post_hook(hook)
Register an optimizer step post hook which will be called after
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None
The "optimizer" argument is the optimizer instance being used.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
register_step_pre_hook(hook)
Register an optimizer step pre hook which will be called before
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None or modified args and kwargs
The "optimizer" argument is the optimizer instance being used.
If args and kwargs are modified by the pre-hook, then the
transformed values are returned as a tuple containing the
new_args and new_kwargs.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
   state_dict()
Returns the state of the optimizer as a "dict".
It contains two entries:
* state - a dict holding current optimization state. Its content
differs between optimizer classes.
* param_groups - a list containing all parameter groups where
each
parameter group is a dict
step(closure=None)
Performs a single optimization step.
Parameters:
**closure** (*Callable**, **optional*) -- A closure that
reevaluates the model and returns the loss.
zero_grad(set_to_none=False)
Sets the gradients of all optimized "torch.Tensor" s to zero.
Parameters:
**set_to_none** (*bool*) -- instead of setting to zero, set
the grads to None. This will in general have lower memory
footprint, and can modestly improve performance. However, it
changes certain behaviors. For example: 1. When the user
tries to access a gradient and perform manual ops on it, a
None attribute or a Tensor full of 0s will behave
differently. 2. If the user requests
"zero_grad(set_to_none=True)" followed by a backward pass,
".grad"s are guaranteed to be None for params that did not
receive a gradient. 3. "torch.optim" optimizers have a
different behavior if the gradient is 0 or None (in one case
it does the step with a gradient of 0 and in the other it
skips the step altogether). | https://pytorch.org/docs/stable/generated/torch.optim.SparseAdam.html | pytorch docs |
torch.Tensor.addbmm
Tensor.addbmm(batch1, batch2, *, beta=1, alpha=1) -> Tensor
See "torch.addbmm()" | https://pytorch.org/docs/stable/generated/torch.Tensor.addbmm.html | pytorch docs |