torch.is_conj
torch.is_conj(input)
Returns True if the "input" is a conjugated tensor, i.e. its
conjugate bit is set to True.
Parameters:
input (Tensor) -- the input tensor. | https://pytorch.org/docs/stable/generated/torch.is_conj.html | pytorch docs |
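Example (illustrative sketch, not from the source page; "torch.conj()" returns a view with the conjugate bit set, which "torch.is_conj()" detects):
    >>> x = torch.tensor([1+2j, 3-4j])
    >>> torch.is_conj(x)
    False
    >>> y = torch.conj(x)
    >>> torch.is_conj(y)
    True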
torch.log
torch.log(input, *, out=None) -> Tensor
Returns a new tensor with the natural logarithm of the elements of
"input".
y_{i} = \log_{e} (x_{i})
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.rand(5) * 5
>>> a
tensor([4.7767, 4.3234, 1.2156, 0.2411, 4.5739])
>>> torch.log(a)
tensor([ 1.5637, 1.4640, 0.1952, -1.4226, 1.5204])
| https://pytorch.org/docs/stable/generated/torch.log.html | pytorch docs |
torch.nn.functional.alpha_dropout
torch.nn.functional.alpha_dropout(input, p=0.5, training=False, inplace=False)
Applies alpha dropout to the input.
See "AlphaDropout" for details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.alpha_dropout.html | pytorch docs |
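Example (illustrative sketch, not from the source page; the dropout is only applied when "training=True"):
    >>> import torch.nn.functional as F
    >>> x = torch.randn(2, 3)
    >>> out = F.alpha_dropout(x, p=0.5, training=True)
    >>> out.shape
    torch.Size([2, 3])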
torch.count_nonzero
torch.count_nonzero(input, dim=None) -> Tensor
Counts the number of non-zero values in the tensor "input" along
the given "dim". If no dim is specified then all non-zeros in the
tensor are counted.
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int** or **tuple of ints**, **optional*) -- Dim or
tuple of dims along which to count non-zeros.
Example:
>>> x = torch.zeros(3,3)
>>> x[torch.randn(3,3) > 0.5] = 1
>>> x
tensor([[0., 1., 1.],
[0., 0., 0.],
[0., 0., 1.]])
>>> torch.count_nonzero(x)
tensor(3)
>>> torch.count_nonzero(x, dim=0)
tensor([0, 1, 2])
| https://pytorch.org/docs/stable/generated/torch.count_nonzero.html | pytorch docs |
NAdam
class torch.optim.NAdam(params, lr=0.002, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, momentum_decay=0.004, *, foreach=None, differentiable=False)
Implements NAdam algorithm.
\begin{aligned}
    &\rule{110mm}{0.4pt} \\
    &\textbf{input} : \gamma_t \text{ (lr)}, \: \beta_1, \beta_2 \text{ (betas)}, \: \theta_0 \text{ (params)}, \: f(\theta) \text{ (objective)} \\
    &\hspace{13mm} \lambda \text{ (weight decay)}, \: \psi \text{ (momentum decay)} \\
    &\textbf{initialize} : m_0 \leftarrow 0 \text{ (first moment)}, \: v_0 \leftarrow 0 \text{ (second moment)} \\[-1.ex]
    &\rule{110mm}{0.4pt} \\
    &\textbf{for} \: t = 1 \: \textbf{to} \: \ldots \: \textbf{do} \\
    &\hspace{5mm} g_t \leftarrow \nabla_{\theta} f_t(\theta_{t-1}) \\
    &\hspace{5mm} \text{if} \: \lambda \neq 0 \\
    &\hspace{10mm} g_t \leftarrow g_t + \lambda \theta_{t-1} \\
    &\hspace{5mm} \mu_t \leftarrow \beta_1 \big(1 - \frac{1}{2}\, 0.96^{t\psi}\big) \\
    &\hspace{5mm} \mu_{t+1} \leftarrow \beta_1 \big(1 - \frac{1}{2}\, 0.96^{(t+1)\psi}\big) \\
    &\hspace{5mm} m_t \leftarrow \beta_1 m_{t-1} + (1 - \beta_1) g_t \\
    &\hspace{5mm} v_t \leftarrow \beta_2 v_{t-1} + (1 - \beta_2) g_t^2 \\
    &\hspace{5mm} \widehat{m_t} \leftarrow \mu_{t+1} m_t / \big(1 - \prod_{i=1}^{t+1} \mu_i\big) + (1 - \mu_t) g_t / \big(1 - \prod_{i=1}^{t} \mu_i\big) \\
    &\hspace{5mm} \widehat{v_t} \leftarrow v_t / \big(1 - \beta_2^t\big) \\
    &\hspace{5mm} \theta_t \leftarrow \theta_{t-1} - \gamma\, \widehat{m_t} / \big(\sqrt{\widehat{v_t}} + \epsilon\big) \\
    &\rule{110mm}{0.4pt} \\[-1.ex]
    &\textbf{return} \: \theta_t \\[-1.ex]
    &\rule{110mm}{0.4pt} \\[-1.ex]
\end{aligned}
For further details regarding the algorithm we refer to
Incorporating Nesterov Momentum into Adam.
Parameters:
* params (iterable) -- iterable of parameters to optimize
or dicts defining parameter groups
* **lr** (*float**, **optional*) -- learning rate (default:
2e-3)
* **betas** (*Tuple**[**float**, **float**]**, **optional*) --
coefficients used for computing running averages of gradient
and its square (default: (0.9, 0.999))
* **eps** (*float**, **optional*) -- term added to the
denominator to improve numerical stability (default: 1e-8)
* **weight_decay** (*float**, **optional*) -- weight decay (L2
penalty) (default: 0)
* **momentum_decay** (*float**, **optional*) -- momentum
momentum_decay (default: 4e-3)
* **foreach** (*bool**, **optional*) -- whether foreach
implementation of optimizer is used. If unspecified by the
user (so foreach is None), we will try to use foreach over the
for-loop implementation on CUDA, since it is usually
significantly more performant. (default: None)
* **differentiable** (*bool**, **optional*) -- whether autograd
should occur through the optimizer step in training.
Otherwise, the step() function runs in a torch.no_grad()
context. Setting to True can impair performance, so leave it
False if you don't intend to run autograd through this
instance (default: False)
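Example (a minimal usage sketch, not part of the original page; the model, data, and loss below are placeholders):
    >>> model = torch.nn.Linear(10, 1)
    >>> optimizer = torch.optim.NAdam(model.parameters(), lr=2e-3)
    >>> for _ in range(5):
    ...     optimizer.zero_grad()
    ...     loss = model(torch.randn(8, 10)).pow(2).mean()
    ...     loss.backward()
    ...     optimizer.step()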
add_param_group(param_group)
Add a param group to the "Optimizer" s *param_groups*.
This can be useful when fine tuning a pre-trained network as
frozen layers can be made trainable and added to the "Optimizer"
as training progresses.
Parameters:
**param_group** (*dict*) -- Specifies what Tensors should be
optimized along with group specific optimization options.
load_state_dict(state_dict)
Loads the optimizer state.
Parameters:
**state_dict** (*dict*) -- optimizer state. Should be an
object returned from a call to "state_dict()".
register_step_post_hook(hook)
Register an optimizer step post hook which will be called after
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None
The "optimizer" argument is the optimizer instance being used.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
register_step_pre_hook(hook)
Register an optimizer step pre hook which will be called before
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None or modified args and kwargs
The "optimizer" argument is the optimizer instance being used.
If args and kwargs are modified by the pre-hook, then the
transformed values are returned as a tuple containing the
new_args and new_kwargs.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
state_dict()
Returns the state of the optimizer as a "dict".
It contains two entries:
* state - a dict holding current optimization state. Its content
differs between optimizer classes.
* param_groups - a list containing all parameter groups, where each parameter group is a dict
zero_grad(set_to_none=False)
Sets the gradients of all optimized "torch.Tensor" s to zero.
Parameters:
**set_to_none** (*bool*) -- instead of setting to zero, set
the grads to None. This will in general have lower memory
footprint, and can modestly improve performance. However, it
changes certain behaviors. For example: 1. When the user
tries to access a gradient and perform manual ops on it, a
None attribute or a Tensor full of 0s will behave
differently. 2. If the user requests
"zero_grad(set_to_none=True)" followed by a backward pass,
".grad"s are guaranteed to be None for params that did not
receive a gradient. 3. "torch.optim" optimizers have a
different behavior if the gradient is 0 or None (in one case
it does the step with a gradient of 0 and in the other it
skips the step altogether). | https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html | pytorch docs |
torch.Tensor.bitwise_or
Tensor.bitwise_or() -> Tensor
See "torch.bitwise_or()" | https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_or.html | pytorch docs |
torch.Tensor.floor_divide
Tensor.floor_divide(value) -> Tensor
See "torch.floor_divide()" | https://pytorch.org/docs/stable/generated/torch.Tensor.floor_divide.html | pytorch docs |
torch.Tensor.all
Tensor.all(dim=None, keepdim=False) -> Tensor
See "torch.all()" | https://pytorch.org/docs/stable/generated/torch.Tensor.all.html | pytorch docs |
torch.rad2deg
torch.rad2deg(input, *, out=None) -> Tensor
Returns a new tensor with each of the elements of "input" converted
from angles in radians to degrees.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.tensor([[3.142, -3.142], [6.283, -6.283], [1.570, -1.570]])
>>> torch.rad2deg(a)
tensor([[ 180.0233, -180.0233],
[ 359.9894, -359.9894],
[ 89.9544, -89.9544]])
| https://pytorch.org/docs/stable/generated/torch.rad2deg.html | pytorch docs |
LSTM
class torch.ao.nn.quantized.dynamic.LSTM(args, *kwargs)
A dynamic quantized LSTM module with floating point tensor as
inputs and outputs. We adopt the same interface as torch.nn.LSTM,
please see https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM
for documentation.
Examples:
>>> rnn = nn.LSTM(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> c0 = torch.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0))
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.LSTM.html | pytorch docs |
torch.can_cast
torch.can_cast(from, to) -> bool
Determines if a type conversion is allowed under PyTorch casting
rules described in the type promotion documentation.
Parameters:
* from (dtype) -- The original "torch.dtype".
* **to** (*dtype*) -- The target "torch.dtype".
Example:
>>> torch.can_cast(torch.double, torch.float)
True
>>> torch.can_cast(torch.float, torch.int)
False
| https://pytorch.org/docs/stable/generated/torch.can_cast.html | pytorch docs |
torch.autograd.function.FunctionCtx.set_materialize_grads
FunctionCtx.set_materialize_grads(value)
Sets whether to materialize grad tensors. Default is "True".
This should be called only from inside the "forward()" method.
If "True", undefined grad tensors will be expanded to tensors full
of zeros prior to calling the "backward()" and "jvp()" methods.
Example::
>>> class SimpleFunc(Function):
>>> @staticmethod
>>> def forward(ctx, x):
>>> return x.clone(), x.clone()
>>>
>>> @staticmethod
>>> @once_differentiable
>>> def backward(ctx, g1, g2):
>>> return g1 + g2 # No check for None necessary
>>>
>>> # We modify SimpleFunc to handle non-materialized grad outputs
>>> class Func(Function):
>>> @staticmethod
>>> def forward(ctx, x):
>>>         ctx.set_materialize_grads(False)
>>> ctx.save_for_backward(x)
>>> return x.clone(), x.clone()
>>>
>>> @staticmethod
>>> @once_differentiable
>>> def backward(ctx, g1, g2):
>>> x, = ctx.saved_tensors
>>> grad_input = torch.zeros_like(x)
>>> if g1 is not None: # We must check for None now
>>> grad_input += g1
>>> if g2 is not None:
>>> grad_input += g2
>>> return grad_input
>>>
>>> a = torch.tensor(1., requires_grad=True)
>>> b, _ = Func.apply(a) # induces g2 to be undefined
| https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.set_materialize_grads.html | pytorch docs |
torch.Tensor.unique_consecutive
Tensor.unique_consecutive(return_inverse=False, return_counts=False, dim=None)
Eliminates all but the first element from every consecutive group
of equivalent elements.
See "torch.unique_consecutive()" | https://pytorch.org/docs/stable/generated/torch.Tensor.unique_consecutive.html | pytorch docs |
torch._foreach_neg_
torch._foreach_neg_(self: List[Tensor]) -> None
Apply "torch.neg()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_neg_.html | pytorch docs |
torch.bmm
torch.bmm(input, mat2, *, out=None) -> Tensor
Performs a batch matrix-matrix product of matrices stored in
"input" and "mat2".
"input" and "mat2" must be 3-D tensors each containing the same
number of matrices.
If "input" is a (b \times n \times m) tensor, "mat2" is a (b \times
m \times p) tensor, "out" will be a (b \times n \times p) tensor.
\text{out}_i = \text{input}_i \mathbin{@} \text{mat2}_i
This operator supports TensorFloat32.
On certain ROCm devices, when using float16 inputs this module will
use different precision for backward.
Note:
This function does not broadcast. For broadcasting matrix
products, see "torch.matmul()".
Parameters:
* input (Tensor) -- the first batch of matrices to be
multiplied
* **mat2** (*Tensor*) -- the second batch of matrices to be
multiplied
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> input = torch.randn(10, 3, 4)
>>> mat2 = torch.randn(10, 4, 5)
>>> res = torch.bmm(input, mat2)
>>> res.size()
torch.Size([10, 3, 5])
| https://pytorch.org/docs/stable/generated/torch.bmm.html | pytorch docs |
torch.cuda.memory_stats
torch.cuda.memory_stats(device=None)
Returns a dictionary of CUDA memory allocator statistics for a
given device.
The return value of this function is a dictionary of statistics,
each of which is a non-negative integer.
Core statistics:
""allocated.{all,large_pool,small_pool}.{current,peak,allocated,
freed}"": number of allocation requests received by the memory
allocator.
""allocated_bytes.{all,large_pool,small_pool}.{current,peak,allo
cated,freed}"": amount of allocated memory.
""segment.{all,large_pool,small_pool}.{current,peak,allocated,fr
eed}"": number of reserved segments from "cudaMalloc()".
""reserved_bytes.{all,large_pool,small_pool}.{current,peak,alloc
ated,freed}"": amount of reserved memory.
""active.{all,large_pool,small_pool}.{current,peak,allocated,fre
ed}"": number of active memory blocks.
""active_bytes.{all,large_pool,small_pool}.{current,peak,allocat
| https://pytorch.org/docs/stable/generated/torch.cuda.memory_stats.html | pytorch docs |
ed,freed}"": amount of active memory.
""inactive_split.{all,large_pool,small_pool}.{current,peak,alloc
ated,freed}"": number of inactive, non-releasable memory blocks.
""inactive_split_bytes.{all,large_pool,small_pool}.{current,peak
,allocated,freed}"": amount of inactive, non-releasable memory.
For these core statistics, values are broken down as follows.
Pool type:
"all": combined statistics across all memory pools.
"large_pool": statistics for the large allocation pool (as of
October 2019, for size >= 1MB allocations).
"small_pool": statistics for the small allocation pool (as of
October 2019, for size < 1MB allocations).
Metric type:
"current": current value of this metric.
"peak": maximum value of this metric.
"allocated": historical total increase in this metric.
"freed": historical total decrease in this metric.
In addition to the core statistics, we also provide some simple
event counters:
""num_alloc_retries"": number of failed "cudaMalloc" calls that
result in a cache flush and retry.
""num_ooms"": number of out-of-memory errors thrown.
The caching allocator can be configured via ENV to not split blocks
larger than a defined size (see Memory Management section of the
Cuda Semantics documentation). This helps avoid memory
fragmentation but may have a performance penalty. Additional
outputs to assist with tuning and evaluating impact:
""max_split_size"": blocks above this size will not be split.
""oversize_allocations.{current,peak,allocated,freed}"": number
of over-size allocation requests received by the memory
allocator.
""oversize_segments.{current,peak,allocated,freed}"": number of
over-size reserved segments from "cudaMalloc()".
Parameters:
device (torch.device or int, optional) -- selected
device. Returns statistics for the current device, given by
"current_device()", if "device" is "None" (default).
Return type:
Dict[str, Any]
Note:
See Memory management for more details about GPU memory
management.
Note:
With backend:cudaMallocAsync, some stats are not meaningful, and
are always reported as zero.
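Example (a hypothetical interactive sketch; it assumes a CUDA build and device, and the key names follow the scheme listed above):
    >>> stats = torch.cuda.memory_stats()
    >>> stats["allocated_bytes.all.current"]
    0
    >>> stats["num_alloc_retries"]
    0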
| https://pytorch.org/docs/stable/generated/torch.cuda.memory_stats.html | pytorch docs |
BatchNorm2d
class torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)
Applies Batch Normalization over a 4D input (a mini-batch of 2D
inputs with additional channel dimension) as described in the paper
Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift .
y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}}
* \gamma + \beta
The mean and standard-deviation are calculated per-dimension over
the mini-batches and \gamma and \beta are learnable parameter
vectors of size C (where C is the input size). By default, the
elements of \gamma are set to 1 and the elements of \beta are set
to 0. The standard-deviation is calculated via the biased
estimator, equivalent to torch.var(input, unbiased=False).
Also by default, during training this layer keeps running estimates
of its computed mean and variance, which are then used for
normalization during evaluation. The running estimates are kept
with a default "momentum" of 0.1.
If "track_running_stats" is set to "False", this layer then does
not keep running estimates, and batch statistics are instead used
during evaluation time as well.
Note:
This "momentum" argument is different from one used in optimizer
classes and the conventional notion of momentum. Mathematically,
the update rule for running statistics here is \hat{x}_\text{new}
= (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times
x_t, where \hat{x} is the estimated statistic and x_t is the new
observed value.
Because the Batch Normalization is done over the C dimension,
computing statistics on (N, H, W) slices, it's common terminology
to call this Spatial Batch Normalization.
Parameters:
* num_features (int) -- C from an expected input of size
(N, C, H, W)
* **eps** (*float*) -- a value added to the denominator for
numerical stability. Default: 1e-5
* **momentum** (*float*) -- the value used for the running_mean
and running_var computation. Can be set to "None" for
cumulative moving average (i.e. simple average). Default: 0.1
* **affine** (*bool*) -- a boolean value that when set to
"True", this module has learnable affine parameters. Default:
"True"
* **track_running_stats** (*bool*) -- a boolean value that when
set to "True", this module tracks the running mean and
variance, and when set to "False", this module does not track
such statistics, and initializes statistics buffers
"running_mean" and "running_var" as "None". When these buffers
are "None", this module always uses batch statistics. in both
training and eval modes. Default: "True"
Shape:
* Input: (N, C, H, W)
* Output: (N, C, H, W) (same shape as input)
Examples:
>>> # With Learnable Parameters
>>> m = nn.BatchNorm2d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm2d(100, affine=False)
>>> input = torch.randn(20, 100, 35, 45)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html | pytorch docs |
torch.triangular_solve
torch.triangular_solve(b, A, upper=True, transpose=False, unitriangular=False, *, out=None)
Solves a system of equations with a square upper or lower
triangular invertible matrix A and multiple right-hand sides b.
In symbols, it solves AX = b and assumes A is square upper-
triangular (or lower-triangular if "upper"= False) and does not
have zeros on the diagonal.
torch.triangular_solve(b, A) can take in 2D inputs b, A or
inputs that are batches of 2D matrices. If the inputs are batches,
then returns batched outputs X.
If the diagonal of "A" contains zeros or elements that are very
close to zero and "unitriangular"= False (default) or if the
input matrix is badly conditioned, the result may contain NaN s.
Supports input of float, double, cfloat and cdouble data types.
Warning:
"torch.triangular_solve()" is deprecated in favor of
"torch.linalg.solve_triangular()" and will be removed in a future
PyTorch release. "torch.linalg.solve_triangular()" has its
arguments reversed and does not return a copy of one of the
inputs."X = torch.triangular_solve(B, A).solution" should be
replaced with
X = torch.linalg.solve_triangular(A, B)
Parameters:
* b (Tensor) -- multiple right-hand sides of size (*, m,
k) where * is zero or more batch dimensions
* **A** (*Tensor*) -- the input triangular coefficient matrix of
size (*, m, m) where * is zero or more batch dimensions
* **upper** (*bool**, **optional*) -- whether A is upper or
lower triangular. Default: "True".
* **transpose** (*bool**, **optional*) -- solves *op(A)X = b*
where *op(A) = A^T* if this flag is "True", and *op(A) = A* if
it is "False". Default: "False".
* **unitriangular** (*bool**, **optional*) -- whether A is unit
triangular. If True, the diagonal elements of A are assumed to
be 1 and not referenced from A. Default: "False".
Keyword Arguments:
out ((Tensor, Tensor), optional) -- tuple of
two tensors to write the output to. Ignored if None. Default:
None.
Returns:
A namedtuple (solution, cloned_coefficient) where
cloned_coefficient is a clone of A and solution is the
solution X to AX = b (or whatever variant of the system of
equations, depending on the keyword arguments.)
Examples:
>>> A = torch.randn(2, 2).triu()
>>> A
tensor([[ 1.1527, -1.0753],
[ 0.0000, 0.7986]])
>>> b = torch.randn(2, 3)
>>> b
tensor([[-0.0210, 2.3513, -1.5492],
[ 1.5429, 0.7403, -1.0243]])
>>> torch.triangular_solve(b, A)
torch.return_types.triangular_solve(
solution=tensor([[ 1.7841, 2.9046, -2.5405],
[ 1.9320, 0.9270, -1.2826]]),
cloned_coefficient=tensor([[ 1.1527, -1.0753],
[ 0.0000, 0.7986]]))
| https://pytorch.org/docs/stable/generated/torch.triangular_solve.html | pytorch docs |
torch.frombuffer
torch.frombuffer(buffer, *, dtype, count=-1, offset=0, requires_grad=False) -> Tensor
Creates a 1-dimensional "Tensor" from an object that implements the
Python buffer protocol.
Skips the first "offset" bytes in the buffer, and interprets the
rest of the raw bytes as a 1-dimensional tensor of type "dtype"
with "count" elements.
Note that either of the following must be true:
"count" is a positive non-zero number, and the total number of
bytes in the buffer is at least "offset" plus "count" times the
size (in bytes) of "dtype".
"count" is negative, and the length (number of bytes) of the
buffer subtracted by the "offset" is a multiple of the size (in
bytes) of "dtype".
The returned tensor and buffer share the same memory. Modifications
to the tensor will be reflected in the buffer and vice versa. The
returned tensor is not resizable.
Note:
This function increments the reference count for the object that
owns the shared memory. Therefore, such memory will not be
deallocated before the returned tensor goes out of scope.
Warning:
This function's behavior is undefined when passed an object
implementing the buffer protocol whose data is not on the CPU.
Doing so is likely to cause a segmentation fault.
Warning:
This function does not try to infer the "dtype" (hence, it is not
optional). Passing a different "dtype" than its source may result
in unexpected behavior.
Parameters:
buffer (object) -- a Python object that exposes the buffer
interface.
Keyword Arguments:
* dtype ("torch.dtype") -- the desired data type of returned
tensor.
* **count** (*int**, **optional*) -- the number of desired
elements to be read. If negative, all the elements (until the
end of the buffer) will be read. Default: -1.
* **offset** (*int**, **optional*) -- the number of bytes to
skip at the start of the buffer. Default: 0.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Example:
>>> import array
>>> a = array.array('i', [1, 2, 3])
>>> t = torch.frombuffer(a, dtype=torch.int32)
>>> t
tensor([ 1, 2, 3])
>>> t[0] = -1
>>> a
array([-1, 2, 3])
>>> # Interprets the signed char bytes as 32-bit integers.
>>> # Each 4 signed char elements will be interpreted as
>>> # 1 signed 32-bit integer.
>>> import array
>>> a = array.array('b', [-1, 0, 0, 0])
>>> torch.frombuffer(a, dtype=torch.int32)
tensor([255], dtype=torch.int32)
| https://pytorch.org/docs/stable/generated/torch.frombuffer.html | pytorch docs |
StandaloneModuleConfigEntry
class torch.ao.quantization.fx.custom_config.StandaloneModuleConfigEntry(qconfig_mapping: 'Optional[QConfigMapping]', example_inputs: 'Tuple[Any, ...]', prepare_custom_config: 'Optional[PrepareCustomConfig]', backend_config: 'Optional[BackendConfig]') | https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.StandaloneModuleConfigEntry.html | pytorch docs |
torch._foreach_erf_
torch._foreach_erf_(self: List[Tensor]) -> None
Apply "torch.erf()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_erf_.html | pytorch docs |
torch.Tensor.det
Tensor.det() -> Tensor
See "torch.det()" | https://pytorch.org/docs/stable/generated/torch.Tensor.det.html | pytorch docs |
torch.autograd.Function.forward
static Function.forward(ctx, *args, **kwargs)
This function is to be overridden by all subclasses. There are two
ways to define forward:
Usage 1 (Combined forward and ctx):
@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
pass
It must accept a context ctx as the first argument, followed by
any number of arguments (tensors or other types).
See Combined or separate forward() and setup_context() for more
details
Usage 2 (Separate forward and ctx):
@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
pass
@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
pass
The forward no longer accepts a ctx argument.
Instead, you must also override the
"torch.autograd.Function.setup_context()" staticmethod to handle
setting up the "ctx" object. "output" is the output of the
forward, "inputs" are a Tuple of inputs to the forward.
See Extending torch.autograd for more details
The context can be used to store arbitrary data that can be then
retrieved during the backward pass. Tensors should not be stored
directly on ctx (though this is not currently enforced for
backward compatibility). Instead, tensors should be saved either
with "ctx.save_for_backward()" if they are intended to be used in
"backward" (equivalently, "vjp") or "ctx.save_for_forward()" if
they are intended to be used in "jvp".
Return type:
Any | https://pytorch.org/docs/stable/generated/torch.autograd.Function.forward.html | pytorch docs |
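To make "Usage 2" above concrete, here is a minimal sketch (added for illustration, not from the source page):
    import torch
    from torch.autograd import Function

    class Square(Function):
        @staticmethod
        def forward(x):  # Usage 2: no ctx argument here
            return x * x

        @staticmethod
        def setup_context(ctx, inputs, output):
            x, = inputs
            ctx.save_for_backward(x)

        @staticmethod
        def backward(ctx, grad_output):
            x, = ctx.saved_tensors
            return 2 * x * grad_output

    a = torch.tensor(3.0, requires_grad=True)
    Square.apply(a).backward()
    print(a.grad)  # tensor(6.)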
torch.nn.functional.max_unpool3d
torch.nn.functional.max_unpool3d(input, indices, kernel_size, stride=None, padding=0, output_size=None)
Computes a partial inverse of "MaxPool3d".
See "MaxUnpool3d" for details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.max_unpool3d.html | pytorch docs |
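Example (a minimal sketch for "max_unpool3d" above, not from the source page; the indices must come from "max_pool3d" called with "return_indices=True"):
    >>> import torch.nn.functional as F
    >>> x = torch.randn(1, 1, 4, 4, 4)
    >>> pooled, indices = F.max_pool3d(x, kernel_size=2, stride=2, return_indices=True)
    >>> F.max_unpool3d(pooled, indices, kernel_size=2, stride=2).shape
    torch.Size([1, 1, 4, 4, 4])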
torch.Tensor.add_
Tensor.add_(other, *, alpha=1) -> Tensor
In-place version of "add()" | https://pytorch.org/docs/stable/generated/torch.Tensor.add_.html | pytorch docs |
PrepareCustomConfig
class torch.ao.quantization.fx.custom_config.PrepareCustomConfig
Custom configuration for "prepare_fx()" and "prepare_qat_fx()".
Example usage:
prepare_custom_config = (
    PrepareCustomConfig()
    .set_standalone_module_name("module1", qconfig_mapping, example_inputs, child_prepare_custom_config, backend_config)
    .set_standalone_module_class(MyStandaloneModule, qconfig_mapping, example_inputs, child_prepare_custom_config, backend_config)
    .set_float_to_observed_mapping(FloatCustomModule, ObservedCustomModule)
    .set_non_traceable_module_names(["module2", "module3"])
    .set_non_traceable_module_classes([NonTraceableModule1, NonTraceableModule2])
    .set_input_quantized_indexes([0])
    .set_output_quantized_indexes([0])
    .set_preserved_attributes(["attr1", "attr2"])
)
classmethod from_dict(prepare_custom_config_dict)
Create a "PrepareCustomConfig" from a dictionary with the
following items:
"standalone_module_name": a list of (module_name,
qconfig_mapping, example_inputs, child_prepare_custom_config,
backend_config) tuples
"standalone_module_class" a list of (module_class,
qconfig_mapping, example_inputs, child_prepare_custom_config,
backend_config) tuples
"float_to_observed_custom_module_class": a nested dictionary
mapping from quantization mode to an inner mapping from float
module classes to observed module classes, e.g. {"static":
{FloatCustomModule: ObservedCustomModule}}
"non_traceable_module_name": a list of modules names that are
not symbolically traceable "non_traceable_module_class": a
list of module classes that are not symbolically traceable
"input_quantized_idxs": a list of indexes of graph inputs
that should be quantized "output_quantized_idxs": a list of
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.PrepareCustomConfig.html | pytorch docs |
indexes of graph outputs that should be quantized
"preserved_attributes": a list of attributes that persist
even if they are not used in "forward"
This function is primarily for backward compatibility and may be
removed in the future.
Return type:
*PrepareCustomConfig*
set_float_to_observed_mapping(float_class, observed_class, quant_type=QuantType.STATIC)
Set the mapping from a custom float module class to a custom
observed module class.
The observed module class must have a "from_float" class method
that converts the float module class to the observed module
class. This is currently only supported for static quantization.
Return type:
*PrepareCustomConfig*
set_input_quantized_indexes(indexes)
Set the indexes of the inputs of the graph that should be
quantized. Inputs are otherwise assumed to be in fp32 by default
instead.
Return type:
*PrepareCustomConfig*
set_non_traceable_module_classes(module_classes)
Set the modules that are not symbolically traceable, identified
by class.
Return type:
*PrepareCustomConfig*
set_non_traceable_module_names(module_names)
Set the modules that are not symbolically traceable, identified
by name.
Return type:
*PrepareCustomConfig*
set_output_quantized_indexes(indexes)
Set the indexes of the outputs of the graph that should be
quantized. Outputs are otherwise assumed to be in fp32 by
default instead.
Return type:
*PrepareCustomConfig*
set_preserved_attributes(attributes)
Set the names of the attributes that will persist in the graph
module even if they are not used in the model's "forward"
method.
Return type:
*PrepareCustomConfig*
set_standalone_module_class(module_class, qconfig_mapping, example_inputs, prepare_custom_config, backend_config)
Set the configuration for running a standalone module identified
by "module_class".
If "qconfig_mapping" is None, the parent "qconfig_mapping" will
be used instead. If "prepare_custom_config" is None, an empty
"PrepareCustomConfig" will be used. If "backend_config" is None,
the parent "backend_config" will be used instead.
Return type:
*PrepareCustomConfig*
set_standalone_module_name(module_name, qconfig_mapping, example_inputs, prepare_custom_config, backend_config)
Set the configuration for running a standalone module identified
by "module_name".
If "qconfig_mapping" is None, the parent "qconfig_mapping" will
be used instead. If "prepare_custom_config" is None, an empty
"PrepareCustomConfig" will be used. If "backend_config" is None,
the parent "backend_config" will be used instead.
Return type:
*PrepareCustomConfig*
to_dict()
Convert this "PrepareCustomConfig" to a dictionary with the
items described in "from_dict()".
Return type:
*Dict*[str, *Any*]
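Example (a small hypothetical round trip, not from the source page):
    prepare_custom_config = PrepareCustomConfig().set_preserved_attributes(["attr1"])
    d = prepare_custom_config.to_dict()
    restored = PrepareCustomConfig.from_dict(d)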
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.PrepareCustomConfig.html | pytorch docs |
torch.renorm
torch.renorm(input, p, dim, maxnorm, *, out=None) -> Tensor
Returns a tensor where each sub-tensor of "input" along dimension
"dim" is normalized such that the p-norm of the sub-tensor is
lower than the value "maxnorm"
Note:
If the norm of a row is lower than *maxnorm*, the row is
unchanged
Parameters:
* input (Tensor) -- the input tensor.
* **p** (*float*) -- the power for the norm computation
* **dim** (*int*) -- the dimension to slice over to get the sub-
tensors
* **maxnorm** (*float*) -- the maximum norm to keep each sub-
tensor under
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> x = torch.ones(3, 3)
>>> x[1].fill_(2)
tensor([ 2., 2., 2.])
>>> x[2].fill_(3)
tensor([ 3., 3., 3.])
>>> x
tensor([[ 1., 1., 1.],
[ 2., 2., 2.],
[ 3., 3., 3.]])
>>> torch.renorm(x, 1, 0, 5)
tensor([[ 1.0000, 1.0000, 1.0000],
[ 1.6667, 1.6667, 1.6667],
[ 1.6667, 1.6667, 1.6667]]) | https://pytorch.org/docs/stable/generated/torch.renorm.html | pytorch docs |
torch._foreach_cos
torch._foreach_cos(self: List[Tensor]) -> List[Tensor]
Apply "torch.cos()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_cos.html | pytorch docs |
torch.Tensor.numel
Tensor.numel() -> int
See "torch.numel()" | https://pytorch.org/docs/stable/generated/torch.Tensor.numel.html | pytorch docs |
CosineSimilarity
class torch.nn.CosineSimilarity(dim=1, eps=1e-08)
Returns cosine similarity between x_1 and x_2, computed along
dim.
\text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert
_2 \cdot \Vert x_2 \Vert _2, \epsilon)}.
Parameters:
* dim (int, optional) -- Dimension where cosine
similarity is computed. Default: 1
* **eps** (*float**, **optional*) -- Small value to avoid
division by zero. Default: 1e-8
Shape:
* Input1: (\ast_1, D, \ast_2) where D is at position dim
* Input2: (\ast_1, D, \ast_2), same number of dimensions as x1,
matching x1 size at dimension *dim*,
and broadcastable with x1 at other dimensions.
* Output: (\ast_1, \ast_2)
Examples::
>>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> cos = nn.CosineSimilarity(dim=1, eps=1e-6)
>>> output = cos(input1, input2) | https://pytorch.org/docs/stable/generated/torch.nn.CosineSimilarity.html | pytorch docs |
torch.Tensor.cosh_
Tensor.cosh_() -> Tensor
In-place version of "cosh()" | https://pytorch.org/docs/stable/generated/torch.Tensor.cosh_.html | pytorch docs |
torch.tensor
torch.tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False) -> Tensor
Constructs a tensor with no autograd history (also known as a "leaf
tensor", see Autograd mechanics) by copying "data".
Warning:
When working with tensors prefer using "torch.Tensor.clone()",
"torch.Tensor.detach()", and "torch.Tensor.requires_grad_()" for
readability. Letting *t* be a tensor, "torch.tensor(t)" is
equivalent to "t.clone().detach()", and "torch.tensor(t,
requires_grad=True)" is equivalent to
"t.clone().detach().requires_grad_(True)".
See also:
"torch.as_tensor()" preserves autograd history and avoids copies
where possible. "torch.from_numpy()" creates a tensor that shares
storage with a NumPy array.
Parameters:
data (array_like) -- Initial data for the tensor. Can be a
list, tuple, NumPy "ndarray", scalar, and other types.
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", infers data type from
"data".
* **device** ("torch.device", optional) -- the device of the
constructed tensor. If None and data is a tensor then the
device of data is used. If None and data is not a tensor then
the result tensor is constructed on the CPU.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **pin_memory** (*bool**, **optional*) -- If set, returned
tensor would be allocated in the pinned memory. Works only for
CPU tensors. Default: "False".
Example:
>>> torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
tensor([[ 0.1000, 1.2000],
[ 2.2000, 3.1000],
[ 4.9000, 5.2000]])
>>> torch.tensor([0, 1]) # Type inference on data
tensor([ 0, 1])
>>> torch.tensor([[0.11111, 0.222222, 0.3333333]],
... dtype=torch.float64,
... device=torch.device('cuda:0')) # creates a double tensor on a CUDA device
tensor([[ 0.1111, 0.2222, 0.3333]], dtype=torch.float64, device='cuda:0')
>>> torch.tensor(3.14159) # Create a zero-dimensional (scalar) tensor
tensor(3.1416)
>>> torch.tensor([]) # Create an empty tensor (of size (0,))
tensor([])
| https://pytorch.org/docs/stable/generated/torch.tensor.html | pytorch docs |
Fold
class torch.nn.Fold(output_size, kernel_size, dilation=1, padding=0, stride=1)
Combines an array of sliding local blocks into a large containing
tensor.
Consider a batched "input" tensor containing sliding local blocks,
e.g., patches of images, of shape (N, C \times
\prod(\text{kernel_size}), L), where N is batch dimension, C
\times \prod(\text{kernel_size}) is the number of values within a
block (a block has \prod(\text{kernel_size}) spatial locations
each containing a C-channeled vector), and L is the total number of
blocks. (This is exactly the same specification as the output shape
of "Unfold".) This operation combines these local blocks into the
large "output" tensor of shape (N, C, \text{output_size}[0],
\text{output_size}[1], \dots) by summing the overlapping values.
Similar to "Unfold", the arguments must satisfy
L = \prod_d \left\lfloor\frac{\text{output\_size}[d] + 2 \times \text{padding}[d] - \text{dilation}[d] \times
(\text{kernel_size}[d] - 1) - 1}{\text{stride}[d]} +
1\right\rfloor,
where d is over all spatial dimensions.
"output_size" describes the spatial shape of the large containing
tensor of the sliding local blocks. It is useful to resolve the
ambiguity when multiple input shapes map to same number of
sliding blocks, e.g., with "stride > 0".
The "padding", "stride" and "dilation" arguments specify how the
sliding blocks are retrieved.
"stride" controls the stride for the sliding blocks.
"padding" controls the amount of implicit zero-paddings on both
sides for "padding" number of points for each dimension before
reshaping.
"dilation" controls the spacing between the kernel points; also
known as the à trous algorithm. It is harder to describe, but
this link has a nice visualization of what "dilation" does.
Parameters:
* output_size (int or tuple) -- the shape of the
spatial dimensions of the output (i.e., "output.sizes()[2:]")
* **kernel_size** (*int** or **tuple*) -- the size of the
sliding blocks
* **dilation** (*int** or **tuple**, **optional*) -- a parameter
that controls the stride of elements within the neighborhood.
Default: 1
* **padding** (*int** or **tuple**, **optional*) -- implicit
zero padding to be added on both sides of input. Default: 0
* **stride** (*int** or **tuple*) -- the stride of the sliding
blocks in the input spatial dimensions. Default: 1
If "output_size", "kernel_size", "dilation", "padding" or
"stride" is an int or a tuple of length 1 then their values will
be replicated across all spatial dimensions.
For the case of two output spatial dimensions this operation is
sometimes called "col2im".
Note:
"Fold" calculates each combined value in the resulting large
tensor by summing all values from all containing blocks. "Unfold"
extracts the values in the local blocks by copying from the large
tensor. So, if the blocks overlap, they are not inverses of each
other.In general, folding and unfolding operations are related as
follows. Consider "Fold" and "Unfold" instances created with the
same parameters:
>>> fold_params = dict(kernel_size=..., dilation=..., padding=..., stride=...)
>>> fold = nn.Fold(output_size=..., **fold_params)
>>> unfold = nn.Unfold(**fold_params)
Then for any (supported) "input" tensor the following equality
holds:
fold(unfold(input)) == divisor * input
where "divisor" is a tensor that depends only on the shape and
dtype of the "input":
>>> input_ones = torch.ones(input.shape, dtype=input.dtype)
>>> divisor = fold(unfold(input_ones))
When the "divisor" tensor contains no zero elements, then "fold"
and "unfold" operations are inverses of each other (up to
constant divisor).
Warning:
Currently, only unbatched (3D) or batched (4D) image-like output
tensors are supported.
Shape:
* Input: (N, C \times \prod(\text{kernel_size}), L) or (C
\times \prod(\text{kernel_size}), L)
* Output: (N, C, \text{output\_size}[0], \text{output\_size}[1],
\dots) or (C, \text{output\_size}[0], \text{output\_size}[1],
\dots) as described above
Examples:
>>> fold = nn.Fold(output_size=(4, 5), kernel_size=(2, 2))
>>> input = torch.randn(1, 3 * 2 * 2, 12)
>>> output = fold(input)
>>> output.size()
torch.Size([1, 3, 4, 5])
| https://pytorch.org/docs/stable/generated/torch.nn.Fold.html | pytorch docs |
torch.nanmedian
torch.nanmedian(input) -> Tensor
Returns the median of the values in "input", ignoring "NaN" values.
This function is identical to "torch.median()" when there are no
"NaN" values in "input". When "input" has one or more "NaN" values,
"torch.median()" will always return "NaN", while this function will
return the median of the non-"NaN" elements in "input". If all the
elements in "input" are "NaN" it will also return "NaN".
Parameters:
input (Tensor) -- the input tensor.
Example:
>>> a = torch.tensor([1, float('nan'), 3, 2])
>>> a.median()
tensor(nan)
>>> a.nanmedian()
tensor(2.)
torch.nanmedian(input, dim=-1, keepdim=False, *, out=None)
Returns a namedtuple "(values, indices)" where "values" contains
the median of each row of "input" in the dimension "dim", ignoring
"NaN" values, and "indices" contains the index of the median values
found in the dimension "dim".
This function is identical to "torch.median()" when there are no
"NaN" values in a reduced row. When a reduced row has one or more
"NaN" values, "torch.median()" will always reduce it to "NaN",
while this function will reduce it to the median of the non-"NaN"
elements. If all the elements in a reduced row are "NaN" then it
will be reduced to "NaN", too.
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int*) -- the dimension to reduce.
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not.
Keyword Arguments:
out ((Tensor, Tensor), optional) -- The first
tensor will be populated with the median values and the second
tensor, which must have dtype long, with their indices in the
dimension "dim" of "input".
Example:
>>> a = torch.tensor([[2, 3, 1], [float('nan'), 1, float('nan')]])
>>> a
tensor([[2., 3., 1.],
[nan, 1., nan]])
>>> a.median(0)
torch.return_types.median(values=tensor([nan, 1., nan]), indices=tensor([1, 1, 1]))
>>> a.nanmedian(0)
torch.return_types.nanmedian(values=tensor([2., 1., 1.]), indices=tensor([0, 1, 0]))
| https://pytorch.org/docs/stable/generated/torch.nanmedian.html | pytorch docs |
EmbeddingBag
class torch.ao.nn.quantized.EmbeddingBag(num_embeddings, embedding_dim, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='sum', sparse=False, _weight=None, include_last_offset=False, dtype=torch.quint8)
A quantized EmbeddingBag module with quantized packed weights as
inputs. We adopt the same interface as torch.nn.EmbeddingBag,
please see
https://pytorch.org/docs/stable/nn.html#torch.nn.EmbeddingBag for
documentation.
Similar to "EmbeddingBag", attributes will be randomly initialized
at module creation time and will be overwritten later
Variables:
weight (Tensor) -- the non-learnable quantized weights of
the module of shape (\text{num_embeddings},
\text{embedding_dim}).
Examples::
>>> m = nn.quantized.EmbeddingBag(num_embeddings=10, embedding_dim=12, include_last_offset=True, mode='sum')
>>> indices = torch.tensor([9, 6, 5, 7, 8, 8, 9, 2, 8, 6, 6, 9, 1, 6, 8, 8, 3, 2, 3, 6, 3, 6, 5, 7, 0, 8, 4, 6, 5, 8, 2, 3])
>>> offsets = torch.tensor([0, 19, 20, 28, 28, 32])
>>> output = m(indices, offsets)
>>> print(output.size())
torch.Size([5, 12])
classmethod from_float(mod)
Create a quantized embedding_bag module from a float module
Parameters:
**mod** (*Module*) -- a float module, either produced by
torch.ao.quantization utilities or provided by user
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.EmbeddingBag.html | pytorch docs |
torch.linalg.slogdet
torch.linalg.slogdet(A, *, out=None)
Computes the sign and natural logarithm of the absolute value of
the determinant of a square matrix.
For complex "A", it returns the sign and the natural logarithm of
the modulus of the determinant, that is, a logarithmic polar
decomposition of the determinant.
The determinant can be recovered as sign * exp(logabsdet). When a
matrix has a determinant of zero, it returns (0, -inf).
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if "A" is a batch of matrices
then the output has the same batch dimensions.
See also:
"torch.linalg.det()" computes the determinant of square matrices.
Parameters:
A (Tensor) -- tensor of shape (*, n, n) where * is zero or more batch dimensions.
Keyword Arguments:
out (tuple, optional) -- output tuple of two tensors.
Ignored if None. Default: None.
Returns:
A named tuple (sign, logabsdet).
*sign* will have the same dtype as "A".
*logabsdet* will always be real-valued, even when "A" is
complex.
Examples:
>>> A = torch.randn(3, 3)
>>> A
tensor([[ 0.0032, -0.2239, -1.1219],
[-0.6690, 0.1161, 0.4053],
[-1.6218, -0.9273, -0.0082]])
>>> torch.linalg.det(A)
tensor(-0.7576)
>>> torch.logdet(A)
tensor(nan)
>>> torch.linalg.slogdet(A)
torch.return_types.linalg_slogdet(sign=tensor(-1.), logabsdet=tensor(-0.2776))
| https://pytorch.org/docs/stable/generated/torch.linalg.slogdet.html | pytorch docs |
torch.float_power
torch.float_power(input, exponent, *, out=None) -> Tensor
Raises "input" to the power of "exponent", elementwise, in double
precision. If neither input is complex returns a "torch.float64"
tensor, and if one or more inputs is complex returns a
"torch.complex128" tensor.
Note:
This function always computes in double precision, unlike
"torch.pow()", which implements more typical type promotion. This
is useful when the computation needs to be performed in a wider
or more precise dtype, or the results of the computation may
contain fractional values not representable in the input dtypes,
like when an integer base is raised to a negative integer
exponent.
Parameters:
* input (Tensor or Number) -- the base value(s)
* **exponent** (*Tensor** or **Number*) -- the exponent value(s)
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randint(10, (4,))
>>> a
tensor([6, 4, 7, 1])
>>> torch.float_power(a, 2)
tensor([36., 16., 49., 1.], dtype=torch.float64)
>>> a = torch.arange(1, 5)
>>> a
tensor([ 1, 2, 3, 4])
>>> exp = torch.tensor([2, -3, 4, -5])
>>> exp
tensor([ 2, -3, 4, -5])
>>> torch.float_power(a, exp)
tensor([1.0000e+00, 1.2500e-01, 8.1000e+01, 9.7656e-04], dtype=torch.float64)
| https://pytorch.org/docs/stable/generated/torch.float_power.html | pytorch docs |
ConvReLU2d
class torch.ao.nn.intrinsic.ConvReLU2d(conv, relu)
This is a sequential container which calls the Conv2d and ReLU
modules. During quantization this will be replaced with the
corresponding fused module. | https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvReLU2d.html | pytorch docs |
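Example (a hypothetical sketch of how such a fused module typically arises; the layer indices are placeholders):
    >>> model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
    >>> fused = torch.ao.quantization.fuse_modules(model, [["0", "1"]])
    >>> # fused[0] is now a ConvReLU2d; fused[1] has been replaced with nn.Identity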
torch.istft
torch.istft(input, n_fft, hop_length=None, win_length=None, window=None, center=True, normalized=False, onesided=None, length=None, return_complex=False) -> Tensor:
Inverse short time Fourier Transform. This is expected to be the
inverse of "stft()".
It has the same parameters (+ additional optional parameter of
"length") and it should return the least squares estimation of the
original signal. The algorithm will check using the NOLA condition
(nonzero overlap).
An important consideration for the parameters "window" and "center" is that the envelope created by the summation of all the windows must never be zero at any point in time. Specifically,
\sum_{t=-\infty}^{\infty} |w|^2[n-t\times hop_length] \cancel{=}
0.
Since "stft()" discards elements at the end of the signal if they
do not fit in a frame, "istft" may return a shorter signal than the
original signal (can occur if "center" is False since the signal
isn't padded). If length is given in the arguments and is longer
than expected, "istft" will pad zeros to the end of the returned
signal.
If "center" is "True", then there will be padding e.g.
"'constant'", "'reflect'", etc. Left padding can be trimmed off
exactly because they can be calculated but right padding cannot be
calculated without additional information.
Example: Suppose the last window is: "[17, 18, 0, 0, 0]" vs "[18,
0, 0, 0, 0]"
The "n_fft", "hop_length", "win_length" are all the same which
prevents the calculation of right padding. These additional values
could be zeros or a reflection of the signal so providing "length"
could be useful. If "length" is "None" then padding will be
aggressively removed (some loss of signal).
[1] D. W. Griffin and J. S. Lim, "Signal estimation from modified
short-time Fourier transform," IEEE Trans. ASSP, vol.32, no.2,
pp.236-243, Apr. 1984.
Parameters:
* input (Tensor) --
The input tensor. Expected to be in the format of "stft()",
output. That is a complex tensor of shape ("channel",
"fft_size", "n_frame"), where the "channel" dimension is
optional.
Changed in version 2.0: Real datatype inputs are no longer
supported. Input must now have a complex datatype, as returned
by "stft(..., return_complex=True)".
* **n_fft** (*int*) -- Size of Fourier transform
* **hop_length** (*Optional**[**int**]*) -- The distance between
neighboring sliding window frames. (Default: "n_fft // 4")
* **win_length** (*Optional**[**int**]*) -- The size of window
frame and STFT filter. (Default: "n_fft")
* **window** (*Optional**[**torch.Tensor**]*) -- The optional
window function. (Default: "torch.ones(win_length)")
* **center** (*bool*) -- Whether "input" was padded on both
sides so that the t-th frame is centered at time t \times
\text{hop_length}. (Default: "True")
* **normalized** (*bool*) -- Whether the STFT was normalized.
(Default: "False")
* **onesided** (*Optional**[**bool**]*) -- Whether the STFT was
onesided. (Default: "True" if "n_fft != fft_size" in the input
size)
* **length** (*Optional**[**int**]*) -- The amount to trim the
signal by (i.e. the original signal length). (Default: whole
signal)
* **return_complex** (*Optional**[**bool**]*) -- Whether the
output should be complex, or if the input should be assumed to
derive from a real signal and window. Note that this is
incompatible with "onesided=True". (Default: "False")
Returns:
Least squares estimation of the original signal of size (...,
signal_length)
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.istft.html | pytorch docs |
Softmax
class torch.nn.Softmax(dim=None)
Applies the Softmax function to an n-dimensional input Tensor
rescaling them so that the elements of the n-dimensional output
Tensor lie in the range [0,1] and sum to 1.
Softmax is defined as:
\text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}
When the input Tensor is a sparse tensor then the unspecified
values are treated as "-inf".
Shape:
* Input: (*) where * means any number of additional dimensions
* Output: (*), same shape as the input
Returns:
a Tensor of the same dimension and shape as the input with
values in the range [0, 1]
Parameters:
dim (int) -- A dimension along which Softmax will be
computed (so every slice along dim will sum to 1).
Return type:
None
Note:
This module doesn't work directly with NLLLoss, which expects the
Log to be computed between the Softmax and itself. Use
LogSoftmax instead (it's faster and has better numerical
properties).
Examples:
>>> m = nn.Softmax(dim=1)
>>> input = torch.randn(2, 3)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.Softmax.html | pytorch docs |
torch.empty_like
torch.empty_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor
Returns an uninitialized tensor with the same size as "input".
"torch.empty_like(input)" is equivalent to
"torch.empty(input.size(), dtype=input.dtype, layout=input.layout,
device=input.device)".
Parameters:
input (Tensor) -- the size of "input" will determine size
of the output tensor.
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired data type
of returned Tensor. Default: if "None", defaults to the dtype
of "input".
* **layout** ("torch.layout", optional) -- the desired layout of
returned tensor. Default: if "None", defaults to the layout of
"input".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", defaults to the device of
"input".
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **memory_format** ("torch.memory_format", optional) -- the
desired memory format of returned Tensor. Default:
"torch.preserve_format".
Example:
>>> a=torch.empty((2,3), dtype=torch.int32, device = 'cuda')
>>> torch.empty_like(a)
tensor([[0, 0, 0],
[0, 0, 0]], device='cuda:0', dtype=torch.int32)
| https://pytorch.org/docs/stable/generated/torch.empty_like.html | pytorch docs |
torch.linalg.matrix_exp
torch.linalg.matrix_exp(A) -> Tensor
Computes the matrix exponential of a square matrix.
Letting \mathbb{K} be \mathbb{R} or \mathbb{C}, this function
computes the matrix exponential of A \in \mathbb{K}^{n \times
n}, which is defined as
\mathrm{matrix\_exp}(A) = \sum_{k=0}^\infty \frac{1}{k!}A^k \in
\mathbb{K}^{n \times n}.
If the matrix A has eigenvalues \lambda_i \in \mathbb{C}, the
matrix \mathrm{matrix\_exp}(A) has eigenvalues e^{\lambda_i} \in
\mathbb{C}.
Supports input of bfloat16, float, double, cfloat and cdouble
dtypes. Also supports batches of matrices, and if "A" is a batch of
matrices then the output has the same batch dimensions.
Parameters:
A (Tensor) -- tensor of shape (*, n, n) where * is zero or more
batch dimensions.
Example:
>>> A = torch.empty(2, 2, 2)
>>> A[0, :, :] = torch.eye(2, 2)
>>> A[1, :, :] = 2 * torch.eye(2, 2)
>>> A
| https://pytorch.org/docs/stable/generated/torch.linalg.matrix_exp.html | pytorch docs |
tensor([[[1., 0.],
[0., 1.]],
[[2., 0.],
[0., 2.]]])
>>> torch.linalg.matrix_exp(A)
tensor([[[2.7183, 0.0000],
[0.0000, 2.7183]],
[[7.3891, 0.0000],
[0.0000, 7.3891]]])
>>> import math
>>> A = torch.tensor([[0, math.pi/3], [-math.pi/3, 0]]) # A is skew-symmetric
>>> torch.linalg.matrix_exp(A) # matrix_exp(A) = [[cos(pi/3), sin(pi/3)], [-sin(pi/3), cos(pi/3)]]
tensor([[ 0.5000, 0.8660],
[-0.8660, 0.5000]])
| https://pytorch.org/docs/stable/generated/torch.linalg.matrix_exp.html | pytorch docs |
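A small illustrative check (not from the official example): for a
diagonal matrix, the matrix exponential reduces to an elementwise
exponential of the diagonal entries:
    >>> d = torch.tensor([1.0, 2.0, 3.0])
    >>> torch.allclose(torch.linalg.matrix_exp(torch.diag(d)),
    ...                torch.diag(torch.exp(d)))
    True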
torch.jit.unused
torch.jit.unused(fn)
This decorator indicates to the compiler that a function or method
should be ignored and replaced with the raising of an exception.
This allows you to leave code in your model that is not yet
TorchScript compatible and still export your model.
Example (using "@torch.jit.unused" on a method):
import torch
import torch.nn as nn
class MyModule(nn.Module):
def __init__(self, use_memory_efficient):
super(MyModule, self).__init__()
self.use_memory_efficient = use_memory_efficient
@torch.jit.unused
def memory_efficient(self, x):
import pdb
pdb.set_trace()
return x + 10
def forward(self, x):
# Use not-yet-scriptable memory efficient mode
if self.use_memory_efficient:
return self.memory_efficient(x)
else:
| https://pytorch.org/docs/stable/generated/torch.jit.unused.html | pytorch docs |
return x + 10
m = torch.jit.script(MyModule(use_memory_efficient=False))
m.save("m.pt")
m = torch.jit.script(MyModule(use_memory_efficient=True))
# exception raised
m(torch.rand(100))
| https://pytorch.org/docs/stable/generated/torch.jit.unused.html | pytorch docs |
FXFloatFunctional
class torch.ao.nn.quantized.FXFloatFunctional
Module used to replace FloatFunctional before FX graph mode
quantization, since activation_post_process will be inserted
directly in the top-level module.
Valid operation names:
* add
* cat
* mul
* add_relu
* add_scalar
* mul_scalar
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.FXFloatFunctional.html | pytorch docs |
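A minimal usage sketch (illustrative; the tensor shapes are
assumptions). The operations behave like their plain float
counterparts, but routing them through the module lets FX graph mode
quantization observe and later quantize them:
    import torch
    ff = torch.ao.nn.quantized.FXFloatFunctional()
    a, b = torch.randn(2, 3), torch.randn(2, 3)
    out = ff.add(a, b)           # same result as a + b
    out = ff.add_relu(a, b)      # same result as torch.relu(a + b)
    out = ff.mul_scalar(a, 0.5)  # same result as a * 0.5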
fuse_modules
class torch.quantization.fuse_modules(model, modules_to_fuse, inplace=False, fuser_func=<function fuse_known_modules>, fuse_custom_config_dict=None)
Fuses a list of modules into a single module
Fuses only the following sequences of modules:
* conv, bn
* conv, bn, relu
* conv, relu
* linear, relu
* bn, relu
All other sequences are left unchanged. For these sequences,
replaces the first item in the list with the fused module, replacing
the rest of the modules with identity.
Parameters:
* model -- Model containing the modules to be fused
* **modules_to_fuse** -- list of list of module names to fuse.
Can also be a list of strings if there is only a single list
of modules to fuse.
* **inplace** -- bool specifying if fusion happens in place on
the model, by default a new model is returned
* **fuser_func** -- Function that takes in a list of modules and
outputs a list of fused modules of the same length. For
| https://pytorch.org/docs/stable/generated/torch.quantization.fuse_modules.html | pytorch docs |
example, fuser_func([convModule, BNModule]) returns the list
[ConvBNModule, nn.Identity()] Defaults to
torch.ao.quantization.fuse_known_modules
* **fuse_custom_config_dict** -- custom configuration for fusion
# Example of fuse_custom_config_dict
fuse_custom_config_dict = {
# Additional fuser_method mapping
"additional_fuser_method_mapping": {
(torch.nn.Conv2d, torch.nn.BatchNorm2d): fuse_conv_bn
},
}
Returns:
model with fused modules. A new copy is created if inplace=False.
Examples:
>>> m = M().eval()
>>> # m is a module containing the sub-modules below
>>> modules_to_fuse = [ ['conv1', 'bn1', 'relu1'], ['submodule.conv', 'submodule.relu']]
>>> fused_m = torch.ao.quantization.fuse_modules(m, modules_to_fuse)
>>> output = fused_m(input)
>>> m = M().eval()
>>> # Alternately provide a single list of modules to fuse
| https://pytorch.org/docs/stable/generated/torch.quantization.fuse_modules.html | pytorch docs |
>>> modules_to_fuse = ['conv1', 'bn1', 'relu1']
>>> fused_m = torch.ao.quantization.fuse_modules(m, modules_to_fuse)
>>> output = fused_m(input)
| https://pytorch.org/docs/stable/generated/torch.quantization.fuse_modules.html | pytorch docs |
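A self-contained sketch matching the examples above (the definition
of M is an assumption; only the sub-module names conv1, bn1 and
relu1 are taken from the example):
    import torch
    import torch.nn as nn

    class M(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(3, 8, 3)
            self.bn1 = nn.BatchNorm2d(8)
            self.relu1 = nn.ReLU()

        def forward(self, x):
            return self.relu1(self.bn1(self.conv1(x)))

    m = M().eval()
    fused_m = torch.ao.quantization.fuse_modules(m, [['conv1', 'bn1', 'relu1']])
    # conv1 is now a fused ConvReLU2d with the BatchNorm folded in;
    # bn1 and relu1 are replaced by nn.Identity.
    output = fused_m(torch.randn(1, 3, 32, 32))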
torch.round
torch.round(input, *, decimals=0, out=None) -> Tensor
Rounds elements of "input" to the nearest integer.
For integer inputs, follows the array-api convention of returning a
copy of the input tensor.
Note:
This function implements the "round half to even" rule to break
ties when a number is equidistant from two integers (e.g.
*round(2.5)* is 2). When the *decimals* argument is specified, the
algorithm used is similar to NumPy's *around*. This algorithm is
fast but inexact and it can easily overflow for low precision
dtypes. E.g. *round(tensor([10000], dtype=torch.float16),
decimals=3)* is *inf*.
See also:
"torch.ceil()", which rounds up. "torch.floor()", which rounds
down. "torch.trunc()", which rounds towards zero.
Parameters:
* input (Tensor) -- the input tensor.
* **decimals** (*int*) -- Number of decimal places to round to
(default: 0). If decimals is negative, it specifies the number
| https://pytorch.org/docs/stable/generated/torch.round.html | pytorch docs |
of positions to the left of the decimal point.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.round(torch.tensor((4.7, -2.3, 9.1, -7.7)))
tensor([ 5., -2., 9., -8.])
>>> # Values equidistant from two integers are rounded towards
>>> # the nearest even value (zero is treated as even)
>>> torch.round(torch.tensor([-0.5, 0.5, 1.5, 2.5]))
tensor([-0., 0., 2., 2.])
>>> # A positive decimals argument rounds to that decimal place
>>> torch.round(torch.tensor([0.1234567]), decimals=3)
tensor([0.1230])
>>> # A negative decimals argument rounds to the left of the decimal
>>> torch.round(torch.tensor([1200.1234567]), decimals=-3)
tensor([1000.])
| https://pytorch.org/docs/stable/generated/torch.round.html | pytorch docs |
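An illustrative reproduction of the overflow caveat from the note
above (the intermediate scaling by 10**decimals exceeds the float16
range):
    >>> torch.round(torch.tensor([10000.], dtype=torch.float16), decimals=3)
    tensor([inf], dtype=torch.float16)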
torch.is_tensor
torch.is_tensor(obj)
Returns True if obj is a PyTorch tensor.
Note that this function is simply doing "isinstance(obj, Tensor)".
Using that "isinstance" check is better for typechecking with mypy,
and more explicit - so it's recommended to use that instead of
"is_tensor".
Parameters:
obj (Object) -- Object to test
Example:
>>> x = torch.tensor([1, 2, 3])
>>> torch.is_tensor(x)
True
| https://pytorch.org/docs/stable/generated/torch.is_tensor.html | pytorch docs |
torch._foreach_sin
torch._foreach_sin(self: List[Tensor]) -> List[Tensor]
Apply "torch.sin()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_sin.html | pytorch docs |
FractionalMaxPool2d
class torch.nn.FractionalMaxPool2d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)
Applies a 2D fractional max pooling over an input signal composed
of several input planes.
Fractional MaxPooling is described in detail in the paper
Fractional MaxPooling by Ben Graham
The max-pooling operation is applied in kH \times kW regions by a
stochastic step size determined by the target output size. The
number of output features is equal to the number of input planes.
Parameters:
* kernel_size (Union[int, Tuple[int,
int]]) -- the size of the window to take a max over.
Can be a single number k (for a square kernel of k x k) or a
tuple (kh, kw)
* **output_size** (*Union**[**int**, **Tuple**[**int**,
**int**]**]*) -- the target output size of the image of the
form *oH x oW*. Can be a tuple *(oH, oW)* or a single number
| https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool2d.html | pytorch docs |
oH for a square image oH x oH
* **output_ratio** (*Union**[**float**, **Tuple**[**float**,
**float**]**]*) -- If one wants to have an output size as a
ratio of the input size, this option can be given. This has to
be a number or tuple in the range (0, 1)
* **return_indices** (*bool*) -- if "True", will return the
indices along with the outputs. Useful to pass to
"nn.MaxUnpool2d()". Default: "False"
Shape:
* Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).
* Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),
where (H_{out}, W_{out})=\text{output\_size} or (H_{out},
W_{out})=\text{output\_ratio} \times (H_{in}, W_{in}).
-[ Examples ]-
>>> # pool of square window of size=3, and target output size 13x12
>>> m = nn.FractionalMaxPool2d(3, output_size=(13, 12))
>>> # pool of square window and target output size being half of input image size
| https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool2d.html | pytorch docs |
>>> m = nn.FractionalMaxPool2d(3, output_ratio=(0.5, 0.5))
>>> input = torch.randn(20, 16, 50, 32)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool2d.html | pytorch docs |
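A brief follow-up on the second example above (illustrative): with
output_ratio=(0.5, 0.5), the 50 x 32 input yields a 25 x 16 output:
    >>> output.shape
    torch.Size([20, 16, 25, 16])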
torch.repeat_interleave
torch.repeat_interleave(input, repeats, dim=None, *, output_size=None) -> Tensor
Repeat elements of a tensor.
Warning:
This is different from "torch.Tensor.repeat()" but similar to
"numpy.repeat".
Parameters:
* input (Tensor) -- the input tensor.
* **repeats** (*Tensor** or **int*) -- The number of repetitions
for each element. repeats is broadcasted to fit the shape of
the given axis.
* **dim** (*int**, **optional*) -- The dimension along which to
repeat values. By default, use the flattened input array, and
return a flat output array.
Keyword Arguments:
output_size (int, optional) -- Total output size for
the given axis (e.g. the sum of repeats). If given, it will avoid
the stream synchronization needed to calculate the output shape of
the tensor.
Returns:
Repeated tensor which has the same shape as input, except along
the given axis. | https://pytorch.org/docs/stable/generated/torch.repeat_interleave.html | pytorch docs |
Return type:
Tensor
Example:
>>> x = torch.tensor([1, 2, 3])
>>> x.repeat_interleave(2)
tensor([1, 1, 2, 2, 3, 3])
>>> y = torch.tensor([[1, 2], [3, 4]])
>>> torch.repeat_interleave(y, 2)
tensor([1, 1, 2, 2, 3, 3, 4, 4])
>>> torch.repeat_interleave(y, 3, dim=1)
tensor([[1, 1, 1, 2, 2, 2],
[3, 3, 3, 4, 4, 4]])
>>> torch.repeat_interleave(y, torch.tensor([1, 2]), dim=0)
tensor([[1, 2],
[3, 4],
[3, 4]])
>>> torch.repeat_interleave(y, torch.tensor([1, 2]), dim=0, output_size=3)
tensor([[1, 2],
[3, 4],
[3, 4]])
torch.repeat_interleave(repeats, *, output_size=None) -> Tensor
If repeats is tensor([n1, n2, n3, ...]), then the output
will be tensor([0, 0, ..., 1, 1, ..., 2, 2, ..., ...]) where 0
appears n1 times, 1 appears n2 times, 2 appears n3 times,
etc. | https://pytorch.org/docs/stable/generated/torch.repeat_interleave.html | pytorch docs |
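An illustrative example of this overload (not taken verbatim from
the docs):
    >>> torch.repeat_interleave(torch.tensor([1, 2, 3]))
    tensor([0, 1, 1, 2, 2, 2])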