text | source | category
---|---|---|
torch.cuda.is_available
torch.cuda.is_available()
Returns a bool indicating if CUDA is currently available.
Return type:
bool | https://pytorch.org/docs/stable/generated/torch.cuda.is_available.html | pytorch docs |
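A minimal usage sketch (not part of the original entry): pick a device at runtime based on availability.
>>> import torch
>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
>>> x = torch.ones(3, device=device)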
torch.Tensor.norm
Tensor.norm(p='fro', dim=None, keepdim=False, dtype=None)
See "torch.norm()" | https://pytorch.org/docs/stable/generated/torch.Tensor.norm.html | pytorch docs |
torch.Tensor.arccosh
Tensor.arccosh()
acosh() -> Tensor
See "torch.arccosh()" | https://pytorch.org/docs/stable/generated/torch.Tensor.arccosh.html | pytorch docs |
torch.Tensor.nelement
Tensor.nelement() -> int
Alias for "numel()" | https://pytorch.org/docs/stable/generated/torch.Tensor.nelement.html | pytorch docs |
torch.nn.functional.relu
torch.nn.functional.relu(input, inplace=False) -> Tensor
Applies the rectified linear unit function element-wise. See "ReLU"
for more details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.relu.html | pytorch docs |
torch.sym_max
torch.sym_max(a, b)
SymInt-aware utility for max(). | https://pytorch.org/docs/stable/generated/torch.sym_max.html | pytorch docs |
torch.clamp
torch.clamp(input, min=None, max=None, *, out=None) -> Tensor
Clamps all elements in "input" into the range [ "min", "max" ].
Letting min_value and max_value be "min" and "max", respectively,
this returns:
y_i = \min(\max(x_i, \text{min\_value}_i), \text{max\_value}_i)
If "min" is "None", there is no lower bound. Or, if "max" is "None"
there is no upper bound.
Note:
If "min" is greater than "max" "torch.clamp(..., min, max)" sets
all elements in "input" to the value of "max".
Parameters:
* input (Tensor) -- the input tensor.
* **min** (*Number** or **Tensor**, **optional*) -- lower-bound
of the range to be clamped to
* **max** (*Number** or **Tensor**, **optional*) -- upper-bound
of the range to be clamped to
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([-1.7120, 0.1734, -0.0478, -0.0922])
| https://pytorch.org/docs/stable/generated/torch.clamp.html | pytorch docs |
>>> torch.clamp(a, min=-0.5, max=0.5)
tensor([-0.5000, 0.1734, -0.0478, -0.0922])
>>> min = torch.linspace(-1, 1, steps=4)
>>> torch.clamp(a, min=min)
tensor([-1.0000, 0.1734, 0.3333, 1.0000])
| https://pytorch.org/docs/stable/generated/torch.clamp.html | pytorch docs |
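A small hedged illustration of the Note above: when "min" is greater than "max", every element ends up at "max".
>>> torch.clamp(torch.tensor([-2.0, 0.0, 2.0]), min=1.0, max=0.5)
tensor([0.5000, 0.5000, 0.5000])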
torch.Tensor.mode
Tensor.mode(dim=None, keepdim=False)
See "torch.mode()" | https://pytorch.org/docs/stable/generated/torch.Tensor.mode.html | pytorch docs |
L1Unstructured
class torch.nn.utils.prune.L1Unstructured(amount)
Prune (currently unpruned) units in a tensor by zeroing out the
ones with the lowest L1-norm.
Parameters:
amount (int or float) -- quantity of parameters to
prune. If "float", should be between 0.0 and 1.0 and represent
the fraction of parameters to prune. If "int", it represents the
absolute number of parameters to prune.
classmethod apply(module, name, amount, importance_scores=None)
Adds the forward pre-hook that enables pruning on the fly and
the reparametrization of a tensor in terms of the original
tensor and the pruning mask.
Parameters:
* **module** (*nn.Module*) -- module containing the tensor to
prune
* **name** (*str*) -- parameter name within "module" on which
pruning will act.
* **amount** (*int** or **float*) -- quantity of parameters
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.L1Unstructured.html | pytorch docs |
to prune. If "float", should be between 0.0 and 1.0 and
represent the fraction of parameters to prune. If "int", it
represents the absolute number of parameters to prune.
* **importance_scores** (*torch.Tensor*) -- tensor of
importance scores (of same shape as module parameter) used
to compute mask for pruning. The values in this tensor
indicate the importance of the corresponding elements in
the parameter being pruned. If unspecified or None, the
module parameter will be used in its place.
apply_mask(module)
Simply handles the multiplication between the parameter being
pruned and the generated mask. Fetches the mask and the original
tensor from the module and returns the pruned version of the
tensor.
Parameters:
**module** (*nn.Module*) -- module containing the tensor to
prune
Returns:
pruned version of the input tensor
Return type:
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.L1Unstructured.html | pytorch docs |
pruned_tensor (torch.Tensor)
prune(t, default_mask=None, importance_scores=None)
Computes and returns a pruned version of input tensor "t"
according to the pruning rule specified in "compute_mask()".
Parameters:
* **t** (*torch.Tensor*) -- tensor to prune (of same
dimensions as "default_mask").
* **importance_scores** (*torch.Tensor*) -- tensor of
importance scores (of same shape as "t") used to compute
mask for pruning "t". The values in this tensor indicate
the importance of the corresponding elements in the "t"
that is being pruned. If unspecified or None, the tensor
"t" will be used in its place.
* **default_mask** (*torch.Tensor**, **optional*) -- mask
from previous pruning iteration, if any. To be considered
when determining what portion of the tensor that pruning
should act on. If None, default to a mask of ones.
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.L1Unstructured.html | pytorch docs |
Returns:
pruned version of tensor "t".
remove(module)
Removes the pruning reparameterization from a module. The pruned
parameter named "name" remains permanently pruned, and the
parameter named "name+'_orig'" is removed from the parameter
list. Similarly, the buffer named "name+'_mask'" is removed from
the buffers.
Note:
Pruning itself is NOT undone or reversed!
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.L1Unstructured.html | pytorch docs |
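A minimal sketch (assuming a plain nn.Linear layer) of how this class is usually driven through the "torch.nn.utils.prune.l1_unstructured" helper, which calls "L1Unstructured.apply" under the hood:
>>> import torch.nn as nn
>>> import torch.nn.utils.prune as prune
>>> m = nn.Linear(4, 2)
>>> m = prune.l1_unstructured(m, name="weight", amount=0.5)
>>> # the parameter is reparametrized as weight_orig * weight_mask
>>> sorted(name for name, _ in m.named_buffers())
['weight_mask']
>>> m = prune.remove(m, "weight")  # make the pruning permanent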
torch.cuda.set_device
torch.cuda.set_device(device)
Sets the current device.
Usage of this function is discouraged in favor of "device". In most
cases it's better to use the "CUDA_VISIBLE_DEVICES" environment
variable.
Parameters:
device (torch.device or int) -- selected device. This
function is a no-op if this argument is negative. | https://pytorch.org/docs/stable/generated/torch.cuda.set_device.html | pytorch docs |
torch.Tensor.i0
Tensor.i0() -> Tensor
See "torch.i0()" | https://pytorch.org/docs/stable/generated/torch.Tensor.i0.html | pytorch docs |
torch.Tensor.orgqr
Tensor.orgqr(input2) -> Tensor
See "torch.orgqr()" | https://pytorch.org/docs/stable/generated/torch.Tensor.orgqr.html | pytorch docs |
torch.Tensor.signbit
Tensor.signbit() -> Tensor
See "torch.signbit()" | https://pytorch.org/docs/stable/generated/torch.Tensor.signbit.html | pytorch docs |
torch.Tensor.dequantize
Tensor.dequantize() -> Tensor
Given a quantized Tensor, dequantize it and return the dequantized
float Tensor. | https://pytorch.org/docs/stable/generated/torch.Tensor.dequantize.html | pytorch docs |
torch.fft.fft2
torch.fft.fft2(input, s=None, dim=(- 2, - 1), norm=None, *, out=None) -> Tensor
Computes the 2 dimensional discrete Fourier transform of "input".
Equivalent to "fftn()" but FFTs only the last two dimensions by
default.
Note:
The Fourier domain representation of any real signal satisfies
the Hermitian property: "X[i, j] = conj(X[-i, -j])". This
function always returns all positive and negative frequency terms
even though, for real inputs, half of these values are redundant.
"rfft2()" returns the more compact one-sided representation where
only the positive frequencies of the last dimension are returned.
Note:
Supports torch.half and torch.chalf on CUDA with GPU Architecture
SM53 or greater. However it only supports powers of 2 signal
length in every transformed dimensions.
Parameters:
* input (Tensor) -- the input tensor
* **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the
| https://pytorch.org/docs/stable/generated/torch.fft.fft2.html | pytorch docs |
transformed dimensions. If given, each dimension "dim[i]" will
either be zero-padded or trimmed to the length "s[i]" before
computing the FFT. If a length "-1" is specified, no padding
is done in that dimension. Default: "s = [input.size(d) for d
in dim]"
* **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be
transformed. Default: last two dimensions.
* **norm** (*str**, **optional*) --
Normalization mode. For the forward transform ("fft2()"),
these correspond to:
* ""forward"" - normalize by "1/n"
* ""backward"" - no normalization
* ""ortho"" - normalize by "1/sqrt(n)" (making the FFT
orthonormal)
Where "n = prod(s)" is the logical FFT size. Calling the
backward transform ("ifft2()") with the same normalization
mode will apply an overall normalization of "1/n" between the
two transforms. This is required to make "ifft2()" the exact
inverse.
| https://pytorch.org/docs/stable/generated/torch.fft.fft2.html | pytorch docs |
Default is ""backward"" (no normalization).
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
-[ Example ]-
x = torch.rand(10, 10, dtype=torch.complex64)
fft2 = torch.fft.fft2(x)
The discrete Fourier transform is separable, so "fft2()" here is
equivalent to two one-dimensional "fft()" calls:
two_ffts = torch.fft.fft(torch.fft.fft(x, dim=0), dim=1)
torch.testing.assert_close(fft2, two_ffts, check_stride=False)
| https://pytorch.org/docs/stable/generated/torch.fft.fft2.html | pytorch docs |
LazyConvTranspose2d
class torch.nn.LazyConvTranspose2d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)
A "torch.nn.ConvTranspose2d" module with lazy initialization of the
"in_channels" argument of the "ConvTranspose2d" that is inferred
from the "input.size(1)". The attributes that will be lazily
initialized are weight and bias.
Check the "torch.nn.modules.lazy.LazyModuleMixin" for further
documentation on lazy modules and their limitations.
Parameters:
* out_channels (int) -- Number of channels produced by the
convolution
* **kernel_size** (*int** or **tuple*) -- Size of the convolving
kernel
* **stride** (*int** or **tuple**, **optional*) -- Stride of the
convolution. Default: 1
* **padding** (*int** or **tuple**, **optional*) -- "dilation *
(kernel_size - 1) - padding" zero-padding will be added to
| https://pytorch.org/docs/stable/generated/torch.nn.LazyConvTranspose2d.html | pytorch docs |
both sides of each dimension in the input. Default: 0
* **output_padding** (*int** or **tuple**, **optional*) --
Additional size added to one side of each dimension in the
output shape. Default: 0
* **groups** (*int**, **optional*) -- Number of blocked
connections from input channels to output channels. Default: 1
* **bias** (*bool**, **optional*) -- If "True", adds a learnable
bias to the output. Default: "True"
* **dilation** (*int** or **tuple**, **optional*) -- Spacing
between kernel elements. Default: 1
See also:
"torch.nn.ConvTranspose2d" and
"torch.nn.modules.lazy.LazyModuleMixin"
cls_to_become
alias of "ConvTranspose2d"
| https://pytorch.org/docs/stable/generated/torch.nn.LazyConvTranspose2d.html | pytorch docs |
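A minimal sketch of the lazy "in_channels" inference described above (the channel count 4 is an illustrative assumption):
>>> m = torch.nn.LazyConvTranspose2d(out_channels=8, kernel_size=3)
>>> x = torch.randn(1, 4, 16, 16)
>>> y = m(x)  # the first forward pass materializes the weight
>>> m.weight.shape  # (in_channels, out_channels // groups, kH, kW)
torch.Size([4, 8, 3, 3])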
torch.Tensor.ndimension
Tensor.ndimension() -> int
Alias for "dim()" | https://pytorch.org/docs/stable/generated/torch.Tensor.ndimension.html | pytorch docs |
torch.Tensor.reciprocal_
Tensor.reciprocal_() -> Tensor
In-place version of "reciprocal()" | https://pytorch.org/docs/stable/generated/torch.Tensor.reciprocal_.html | pytorch docs |
torch.Tensor.minimum
Tensor.minimum(other) -> Tensor
See "torch.minimum()" | https://pytorch.org/docs/stable/generated/torch.Tensor.minimum.html | pytorch docs |
torch._foreach_erf
torch._foreach_erf(self: List[Tensor]) -> List[Tensor]
Apply "torch.erf()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_erf.html | pytorch docs |
torch.jit.freeze
torch.jit.freeze(mod, preserved_attrs=None, optimize_numerics=True)
Freezing a "ScriptModule" will clone it and attempt to inline the
cloned module's submodules, parameters, and attributes as constants
in the TorchScript IR Graph. By default, forward will be
preserved, as well as attributes & methods specified in
preserved_attrs. Additionally, any attribute that is modified
within a preserved method will be preserved.
Freezing currently only accepts ScriptModules that are in eval
mode.
Freezing applies generic optimization that will speed up your model
regardless of machine. To further optimize using server-specific
settings, run optimize_for_inference after freezing.
Parameters:
* mod ("ScriptModule") -- a module to be frozen
* **preserved_attrs** (*Optional**[**List**[**str**]**]*) -- a
list of attributes to preserve in addition to the forward
| https://pytorch.org/docs/stable/generated/torch.jit.freeze.html | pytorch docs |
method. Attributes modified in preserved methods will also be
preserved.
* **optimize_numerics** (*bool*) -- If "True", a set of
optimization passes will be run that does not strictly
preserve numerics. Full details of optimization can be found
at *torch.jit.run_frozen_optimizations*.
Returns:
Frozen "ScriptModule".
Example (Freezing a simple module with a Parameter):
def forward(self, input):
output = self.weight.mm(input)
output = self.linear(output)
return output
scripted_module = torch.jit.script(MyModule(2, 3).eval())
frozen_module = torch.jit.freeze(scripted_module)
# parameters have been removed and inlined into the Graph as constants
assert len(list(frozen_module.named_parameters())) == 0
# See the compiled graph as Python code
print(frozen_module.code)
Example (Freezing a module with preserved attributes)
def forward(self, input):
| https://pytorch.org/docs/stable/generated/torch.jit.freeze.html | pytorch docs |
self.modified_tensor += 1
return input + self.modified_tensor
scripted_module = torch.jit.script(MyModule2().eval())
frozen_module = torch.jit.freeze(scripted_module, preserved_attrs=["version"])
# we've manually preserved `version`, so it still exists on the frozen module and can be modified
assert frozen_module.version == 1
frozen_module.version = 2
# `modified_tensor` is detected as being mutated in the forward, so freezing preserves
# it to retain model semantics
assert frozen_module(torch.tensor(1)) == torch.tensor(12)
# now that we've run it once, the next result will be incremented by one
assert frozen_module(torch.tensor(1)) == torch.tensor(13)
Note:
Freezing submodule attributes is also supported: frozen_module =
torch.jit.freeze(scripted_module,
preserved_attrs=["submodule.version"])
Note: | https://pytorch.org/docs/stable/generated/torch.jit.freeze.html | pytorch docs |
If you're not sure why an attribute is not being inlined as a
constant, you can run *dump_alias_db* on
frozen_module.forward.graph to see if freezing has detected the
attribute is being modified.
Note:
Because freezing makes weights constants and removes module
hierarchy, *to* and other nn.Module methods to manipulate device
or dtype no longer work. As a workaround, You can remap devices
by specifying *map_location* in *torch.jit.load*, however device-
specific logic may have been baked into the model.
| https://pytorch.org/docs/stable/generated/torch.jit.freeze.html | pytorch docs |
torch.cuda.comm.gather
torch.cuda.comm.gather(tensors, dim=0, destination=None, *, out=None)
Gathers tensors from multiple GPU devices.
Parameters:
* tensors (Iterable[Tensor]) -- an iterable of
tensors to gather. Tensor sizes in all dimensions other than
"dim" have to match.
* **dim** (*int**, **optional*) -- a dimension along which the
tensors will be concatenated. Default: "0".
* **destination** (*torch.device**, **str**, or **int**,
**optional*) -- the output device. Can be CPU or CUDA.
Default: the current CUDA device.
* **out** (*Tensor**, **optional**, **keyword-only*) -- the
tensor to store gather result. Its sizes must match those of
"tensors", except for "dim", where the size must equal
"sum(tensor.size(dim) for tensor in tensors)". Can be on CPU
or CUDA.
Note:
"destination" must not be specified when "out" is specified.
Returns: | https://pytorch.org/docs/stable/generated/torch.cuda.comm.gather.html | pytorch docs |
* If "destination" is specified,
a tensor located on "destination" device, that is a result
of concatenating "tensors" along "dim".
* If "out" is specified,
the "out" tensor, now containing results of concatenating
"tensors" along "dim".
| https://pytorch.org/docs/stable/generated/torch.cuda.comm.gather.html | pytorch docs |
GraphInfo
class torch.onnx.verification.GraphInfo(graph, input_args, params_dict, export_options=, id='', _EXCLUDED_NODE_KINDS=frozenset({'aten::ScalarImplicit', 'prim::Constant', 'prim::ListConstruct'}))
GraphInfo contains validation information of a TorchScript graph
and its converted ONNX graph.
all_mismatch_leaf_graph_info()
Return a list of all leaf *GraphInfo* objects that have
mismatch.
Return type:
*List*[*GraphInfo*]
clear()
Clear states and results of previous verification.
essential_node_count()
Return the number of nodes in the subgraph excluding those in
*_EXCLUDED_NODE_KINDS*.
Return type:
int
essential_node_kinds()
Return the set of node kinds in the subgraph excluding those in
*_EXCLUDED_NODE_KINDS*.
Return type:
*Set*[str]
export_repro(repro_dir=None, name=None)
Export the subgraph to ONNX along with the input/output data for
repro.
| https://pytorch.org/docs/stable/generated/torch.onnx.verification.GraphInfo.html | pytorch docs |
The repro directory will contain the following files:
dir
├── test_<name>
│   ├── model.onnx
│   └── test_data_set_0
│       ├── input_0.pb
│       ├── input_1.pb
│       ├── output_0.pb
│       └── output_1.pb
Parameters:
* **repro_dir** (*Optional**[**str**]*) -- The directory to
export the repro files to. Defaults to current working
directory if None.
* **name** (*Optional**[**str**]*) -- An optional name for
the test case folder: "test_{name}".
Returns:
The path to the exported repro directory.
Return type:
str
find_mismatch(options=None)
Find all mismatches between the TorchScript IR graph and the
exported onnx model.
Binary searches the model graph to find the minimal subgraph
that exhibits the mismatch. A *GraphInfo* object is created for
| https://pytorch.org/docs/stable/generated/torch.onnx.verification.GraphInfo.html | pytorch docs |
each subgraph, recording the test inputs and export options, as
well as the validation results.
Parameters:
**options** (*Optional**[**VerificationOptions**]*) -- The
verification options.
find_partition(id)
Find the *GraphInfo* object with the given id.
Return type:
*Optional*[*GraphInfo*]
has_mismatch()
Return True if the subgraph has output mismatch between torch
and ONNX.
Return type:
bool
pretty_print_mismatch(graph=False)
Pretty print details of the mismatch between torch and ONNX.
Parameters:
**graph** (*bool*) -- If True, print the ATen JIT graph and
ONNX graph.
pretty_print_tree()
Pretty print *GraphInfo* tree.
Each node represents a subgraph, showing the number of nodes in
the subgraph and a check mark if the subgraph has output
mismatch between torch and ONNX.
The id of the subgraph is shown under the node. The *GraphInfo*
| https://pytorch.org/docs/stable/generated/torch.onnx.verification.GraphInfo.html | pytorch docs |
object for any subgraph can be retrieved by calling
graph_info.find_partition(id).
Example:
==================================== Tree: =====================================
5 X __2 X __1 ✓
id: | id: 0 | id: 00
| |
| |__1 X (aten::relu)
| id: 01
|
|__3 X __1 ✓
id: 1 | id: 10
|
|__2 X __1 X (aten::relu)
id: 11 | id: 110
|
|__1 ✓
id: 111
=========================== Mismatch leaf subgraphs: ===========================
['01', '110']
============================= Mismatch node kinds: =============================
{'aten::relu': 2}
verify_export(options)
Verify the export from TorchScript IR graph to ONNX.
| https://pytorch.org/docs/stable/generated/torch.onnx.verification.GraphInfo.html | pytorch docs |
Export the TorchScript IR graph to ONNX, with the inputs,
parameters and export options recorded in this object. Then
verify the exported ONNX graph against the original TorchScript
IR graph under the provided verification options.
Parameters:
**options** (*VerificationOptions*) -- The verification
options.
Returns:
The AssertionError raised during the verification. Returns
None if no error is raised. onnx_graph: The exported ONNX
graph in TorchScript IR format. onnx_outs: The outputs from
running exported ONNX model under the onnx backend in
*options*. pt_outs: The outputs from running the TorchScript
IR graph.
Return type:
error
| https://pytorch.org/docs/stable/generated/torch.onnx.verification.GraphInfo.html | pytorch docs |
Threshold
class torch.nn.Threshold(threshold, value, inplace=False)
Thresholds each element of the input Tensor.
Threshold is defined as:
y = \begin{cases} x, &\text{ if } x > \text{threshold} \\
\text{value}, &\text{ otherwise } \end{cases}
Parameters:
* threshold (float) -- The value to threshold at
* **value** (*float*) -- The value to replace with
* **inplace** (*bool*) -- can optionally do the operation in-
place. Default: "False"
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
Examples:
>>> m = nn.Threshold(0.1, 20)
>>> input = torch.randn(2)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.Threshold.html | pytorch docs |
torch.addmv
torch.addmv(input, mat, vec, *, beta=1, alpha=1, out=None) -> Tensor
Performs a matrix-vector product of the matrix "mat" and the vector
"vec". The vector "input" is added to the final result.
If "mat" is a (n \times m) tensor, "vec" is a 1-D tensor of size
m, then "input" must be broadcastable with a 1-D tensor of size
n and "out" will be 1-D tensor of size n.
"alpha" and "beta" are scaling factors on matrix-vector product
between "mat" and "vec" and the added tensor "input" respectively.
\text{out} = \beta\ \text{input} + \alpha\ (\text{mat}
\mathbin{@} \text{vec})
If "beta" is 0, then "input" will be ignored, and nan and inf
in it will not be propagated.
For inputs of type FloatTensor or DoubleTensor, arguments
"beta" and "alpha" must be real numbers, otherwise they should be
integers.
Parameters:
* input (Tensor) -- vector to be added
* **mat** (*Tensor*) -- matrix to be matrix multiplied
| https://pytorch.org/docs/stable/generated/torch.addmv.html | pytorch docs |
* **vec** (*Tensor*) -- vector to be matrix multiplied
Keyword Arguments:
* beta (Number, optional) -- multiplier for "input"
(\beta)
* **alpha** (*Number**, **optional*) -- multiplier for mat @ vec
(\alpha)
* **out** (*Tensor**, **optional*) -- the output tensor.
Example:
>>> M = torch.randn(2)
>>> mat = torch.randn(2, 3)
>>> vec = torch.randn(3)
>>> torch.addmv(M, mat, vec)
tensor([-0.3768, -5.5565])
| https://pytorch.org/docs/stable/generated/torch.addmv.html | pytorch docs |
torch.lu_unpack
torch.lu_unpack(LU_data, LU_pivots, unpack_data=True, unpack_pivots=True, *, out=None)
Unpacks the LU decomposition returned by "lu_factor()" into the P,
L, U matrices.
See also:
"lu()" returns the matrices from the LU decomposition. Its
gradient formula is more efficient than that of doing
"lu_factor()" followed by "lu_unpack()".
Parameters:
* LU_data (Tensor) -- the packed LU factorization data
* **LU_pivots** (*Tensor*) -- the packed LU factorization pivots
* **unpack_data** (*bool*) -- flag indicating if the data should
be unpacked. If "False", then the returned "L" and "U" are
empty tensors. Default: "True"
* **unpack_pivots** (*bool*) -- flag indicating if the pivots
should be unpacked into a permutation matrix "P". If "False",
then the returned "P" is an empty tensor. Default: "True"
Keyword Arguments:
out (tuple, optional) -- output tuple of three | https://pytorch.org/docs/stable/generated/torch.lu_unpack.html | pytorch docs |
tensors. Ignored if None.
Returns:
A namedtuple "(P, L, U)"
Examples:
>>> A = torch.randn(2, 3, 3)
>>> LU, pivots = torch.linalg.lu_factor(A)
>>> P, L, U = torch.lu_unpack(LU, pivots)
>>> # We can recover A from the factorization
>>> A_ = P @ L @ U
>>> torch.allclose(A, A_)
True
>>> # LU factorization of a rectangular matrix:
>>> A = torch.randn(2, 3, 2)
>>> LU, pivots = torch.linalg.lu_factor(A)
>>> P, L, U = torch.lu_unpack(LU, pivots)
>>> # P, L, U are the same as returned by linalg.lu
>>> P_, L_, U_ = torch.linalg.lu(A)
>>> torch.allclose(P, P_) and torch.allclose(L, L_) and torch.allclose(U, U_)
True
| https://pytorch.org/docs/stable/generated/torch.lu_unpack.html | pytorch docs |
LayerNorm
class torch.nn.LayerNorm(normalized_shape, eps=1e-05, elementwise_affine=True, device=None, dtype=None)
Applies Layer Normalization over a mini-batch of inputs as
described in the paper Layer Normalization
y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}}
* \gamma + \beta
The mean and standard-deviation are calculated over the last D
dimensions, where D is the dimension of "normalized_shape". For
example, if "normalized_shape" is "(3, 5)" (a 2-dimensional shape),
the mean and standard-deviation are computed over the last 2
dimensions of the input (i.e. "input.mean((-2, -1))"). \gamma and
\beta are learnable affine transform parameters of
"normalized_shape" if "elementwise_affine" is "True". The standard-
deviation is calculated via the biased estimator, equivalent to
torch.var(input, unbiased=False).
Note:
Unlike Batch Normalization and Instance Normalization, which
| https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html | pytorch docs |
applies scalar scale and bias for each entire channel/plane with
the "affine" option, Layer Normalization applies per-element
scale and bias with "elementwise_affine".
This layer uses statistics computed from input data in both
training and evaluation modes.
Parameters:
* normalized_shape (int or list or torch.Size) --
input shape from an expected input of size
[* \times \text{normalized\_shape}[0] \times
\text{normalized\_shape}[1] \times \ldots \times
\text{normalized\_shape}[-1]]
If a single integer is used, it is treated as a singleton
list, and this module will normalize over the last dimension
which is expected to be of that specific size.
* **eps** (*float*) -- a value added to the denominator for
numerical stability. Default: 1e-5
* **elementwise_affine** (*bool*) -- a boolean value that when
set to "True", this module has learnable per-element affine
| https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html | pytorch docs |
parameters initialized to ones (for weights) and zeros (for
biases). Default: "True".
Variables:
* weight -- the learnable weights of the module of shape
\text{normalized_shape} when "elementwise_affine" is set to
"True". The values are initialized to 1.
* **bias** -- the learnable bias of the module of shape
\text{normalized\_shape} when "elementwise_affine" is set to
"True". The values are initialized to 0.
Shape:
* Input: (N, *)
* Output: (N, *) (same shape as input)
Examples:
>>> # NLP Example
>>> batch, sentence_length, embedding_dim = 20, 5, 10
>>> embedding = torch.randn(batch, sentence_length, embedding_dim)
>>> layer_norm = nn.LayerNorm(embedding_dim)
>>> # Activate module
>>> layer_norm(embedding)
>>>
>>> # Image Example
>>> N, C, H, W = 20, 5, 10, 10
>>> input = torch.randn(N, C, H, W)
| https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html | pytorch docs |
>>> # Normalize over the last three dimensions (i.e. the channel and spatial dimensions)
>>> # as shown in the image below
>>> layer_norm = nn.LayerNorm([C, H, W])
>>> output = layer_norm(input)
[image] | https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html | pytorch docs |
torch.autograd.forward_ad.make_dual
torch.autograd.forward_ad.make_dual(tensor, tangent, *, level=None)
Associates a tensor value with a forward gradient, the tangent, to
create a "dual tensor", which is used to compute forward AD
gradients. The result is a new tensor aliased to "tensor" with
"tangent" embedded as an attribute as-is if it has the same storage
layout or copied otherwise. The tangent attribute can be recovered
with "unpack_dual()".
This function is backward differentiable.
Given a function f whose jacobian is J, it allows one to
compute the Jacobian-vector product (jvp) between J and a given
vector v as follows.
Example:
>>> with dual_level():
... inp = make_dual(x, v)
... out = f(inp)
... y, jvp = unpack_dual(out)
Please see the forward-mode AD tutorial for detailed steps on how
to use this API. | https://pytorch.org/docs/stable/generated/torch.autograd.forward_ad.make_dual.html | pytorch docs |
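A self-contained sketch expanding the snippet above (the function f and the tangent v are illustrative assumptions):
import torch
import torch.autograd.forward_ad as fwAD

def f(x):
    return x.sin().sum()

x = torch.randn(3)
v = torch.randn(3)  # direction for the Jacobian-vector product

with fwAD.dual_level():
    inp = fwAD.make_dual(x, v)
    out = f(inp)
    y, jvp = fwAD.unpack_dual(out)

# jvp is the directional derivative: sum(cos(x) * v)
torch.testing.assert_close(jvp, (x.cos() * v).sum())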
AdaptiveAvgPool1d
class torch.nn.AdaptiveAvgPool1d(output_size)
Applies a 1D adaptive average pooling over an input signal composed
of several input planes.
The output size is L_{out}, for any input size. The number of
output features is equal to the number of input planes.
Parameters:
output_size (Union[int, Tuple[int]]) --
the target output size L_{out}.
Shape:
* Input: (N, C, L_{in}) or (C, L_{in}).
* Output: (N, C, L_{out}) or (C, L_{out}), where
L_{out}=\text{output\_size}.
-[ Examples ]-
>>> # target output size of 5
>>> m = nn.AdaptiveAvgPool1d(5)
>>> input = torch.randn(1, 64, 8)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool1d.html | pytorch docs |
torch.Tensor.atan_
Tensor.atan_() -> Tensor
In-place version of "atan()" | https://pytorch.org/docs/stable/generated/torch.Tensor.atan_.html | pytorch docs |
torch.fft.ihfft2
torch.fft.ihfft2(input, s=None, dim=(- 2, - 1), norm=None, *, out=None) -> Tensor
Computes the 2-dimensional inverse discrete Fourier transform of
real "input". Equivalent to "ihfftn()" but transforms only the two
last dimensions by default.
Note:
Supports torch.half on CUDA with GPU Architecture SM53 or
greater. However it only supports powers of 2 signal length in
every transformed dimensions.
Parameters:
* input (Tensor) -- the input tensor
* **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the
transformed dimensions. If given, each dimension "dim[i]" will
either be zero-padded or trimmed to the length "s[i]" before
computing the Hermitian IFFT. If a length "-1" is specified,
no padding is done in that dimension. Default: "s =
[input.size(d) for d in dim]"
* **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be
| https://pytorch.org/docs/stable/generated/torch.fft.ihfft2.html | pytorch docs |
transformed. Default: last two dimensions.
* **norm** (*str**, **optional*) --
Normalization mode. For the backward transform ("ihfft2()"),
these correspond to:
* ""forward"" - no normalization
* ""backward"" - normalize by "1/n"
* ""ortho"" - normalize by "1/sqrt(n)" (making the Hermitian
IFFT orthonormal)
Where "n = prod(s)" is the logical IFFT size. Calling the
forward transform ("hfft2()") with the same normalization mode
will apply an overall normalization of "1/n" between the two
transforms. This is required to make "ihfft2()" the exact
inverse.
Default is ""backward"" (normalize by "1/n").
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
-[ Example ]-
>>> T = torch.rand(10, 10)
>>> t = torch.fft.ihfft2(T)
>>> t.size()
torch.Size([10, 6])
Compared against the full output from "ifft2()", the Hermitian | https://pytorch.org/docs/stable/generated/torch.fft.ihfft2.html | pytorch docs |
time-space signal takes up only half the space.
>>> fftn = torch.fft.ifft2(T)
>>> torch.allclose(fftn[..., :6], t)
True
The discrete Fourier transform is separable, so "ihfft2()" here is
equivalent to a combination of "ifft()" and "ihfft()":
>>> two_ffts = torch.fft.ifft(torch.fft.ihfft(T, dim=1), dim=0)
>>> torch.allclose(t, two_ffts)
True
| https://pytorch.org/docs/stable/generated/torch.fft.ihfft2.html | pytorch docs |
BNReLU3d
class torch.ao.nn.intrinsic.BNReLU3d(batch_norm, relu)
This is a sequential container which calls the BatchNorm 3d and
ReLU modules. During quantization this will be replaced with the
corresponding fused module. | https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.BNReLU3d.html | pytorch docs |
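A minimal construction sketch, assuming the usual pattern of wrapping existing eager modules before fusion and quantization:
>>> bn = torch.nn.BatchNorm3d(4)
>>> relu = torch.nn.ReLU()
>>> m = torch.ao.nn.intrinsic.BNReLU3d(bn, relu)
>>> y = m(torch.randn(2, 4, 8, 8, 8))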
torch.nn.functional.rrelu_
torch.nn.functional.rrelu_(input, lower=1. / 8, upper=1. / 3, training=False) -> Tensor
In-place version of "rrelu()". | https://pytorch.org/docs/stable/generated/torch.nn.functional.rrelu_.html | pytorch docs |
torch.arcsinh
torch.arcsinh(input, *, out=None) -> Tensor
Alias for "torch.asinh()". | https://pytorch.org/docs/stable/generated/torch.arcsinh.html | pytorch docs |
LazyConv1d
class torch.nn.LazyConv1d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)
A "torch.nn.Conv1d" module with lazy initialization of the
"in_channels" argument of the "Conv1d" that is inferred from the
"input.size(1)". The attributes that will be lazily initialized are
weight and bias.
Check the "torch.nn.modules.lazy.LazyModuleMixin" for further
documentation on lazy modules and their limitations.
Parameters:
* out_channels (int) -- Number of channels produced by the
convolution
* **kernel_size** (*int** or **tuple*) -- Size of the convolving
kernel
* **stride** (*int** or **tuple**, **optional*) -- Stride of the
convolution. Default: 1
* **padding** (*int** or **tuple**, **optional*) -- Zero-padding
added to both sides of the input. Default: 0
* **padding_mode** (*str**, **optional*) -- "'zeros'",
| https://pytorch.org/docs/stable/generated/torch.nn.LazyConv1d.html | pytorch docs |
"'reflect'", "'replicate'" or "'circular'". Default: "'zeros'"
* **dilation** (*int** or **tuple**, **optional*) -- Spacing
between kernel elements. Default: 1
* **groups** (*int**, **optional*) -- Number of blocked
connections from input channels to output channels. Default: 1
* **bias** (*bool**, **optional*) -- If "True", adds a learnable
bias to the output. Default: "True"
See also:
"torch.nn.Conv1d" and "torch.nn.modules.lazy.LazyModuleMixin"
cls_to_become
alias of "Conv1d"
| https://pytorch.org/docs/stable/generated/torch.nn.LazyConv1d.html | pytorch docs |
torch.Tensor.permute
Tensor.permute(*dims) -> Tensor
See "torch.permute()" | https://pytorch.org/docs/stable/generated/torch.Tensor.permute.html | pytorch docs |
torch.sparse.softmax
torch.sparse.softmax(input, dim, *, dtype=None) -> Tensor
Applies a softmax function.
Softmax is defined as:
\text{Softmax}(x_{i}) = \frac{exp(x_i)}{\sum_j exp(x_j)}
where i, j run over sparse tensor indices and unspecified entries
are ignored. This is equivalent to defining unspecified entries as
negative infinity so that exp(x_k) = 0 when the entry with index k
is not specified.
It is applied to all slices along dim, and will re-scale them so
that the elements lie in the range [0, 1] and sum to 1.
Parameters:
* input (Tensor) -- input
* **dim** (*int*) -- A dimension along which softmax will be
computed.
* **dtype** ("torch.dtype", optional) -- the desired data type
of the returned tensor. If specified, the input tensor is cast
to "dtype" before the operation is performed. This is useful
for preventing data type overflows. Default: None
| https://pytorch.org/docs/stable/generated/torch.sparse.softmax.html | pytorch docs |
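A minimal sketch on a sparse COO tensor; only the specified entries in each row participate in the softmax:
>>> i = torch.tensor([[0, 0, 1], [0, 2, 1]])
>>> v = torch.tensor([1.0, 2.0, 3.0])
>>> s = torch.sparse_coo_tensor(i, v, (2, 3))
>>> torch.sparse.softmax(s, dim=1).to_dense()
tensor([[0.2689, 0.0000, 0.7311],
        [0.0000, 1.0000, 0.0000]])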
L1Loss
class torch.nn.L1Loss(size_average=None, reduce=None, reduction='mean')
Creates a criterion that measures the mean absolute error (MAE)
between each element in the input x and target y.
The unreduced (i.e. with "reduction" set to "'none'") loss can be
described as:
\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left| x_n
- y_n \right|,
where N is the batch size. If "reduction" is not "'none'" (default
"'mean'"), then:
\ell(x, y) = \begin{cases} \operatorname{mean}(L), &
\text{if reduction} = \text{`mean';}\\
\operatorname{sum}(L), & \text{if reduction} = \text{`sum'.}
\end{cases}
x and y are tensors of arbitrary shapes with a total of n elements
each.
The sum operation still operates over all the elements, and divides
by n.
The division by n can be avoided if one sets "reduction = 'sum'".
Supports real-valued and complex-valued inputs.
Parameters: | https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html | pytorch docs |
* size_average (bool, optional) -- Deprecated (see
"reduction"). By default, the losses are averaged over each
loss element in the batch. Note that for some losses, there
are multiple elements per sample. If the field "size_average"
is set to "False", the losses are instead summed for each
minibatch. Ignored when "reduce" is "False". Default: "True"
* **reduce** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged or summed
over observations for each minibatch depending on
"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"True"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the sum of
| https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html | pytorch docs |
the output will be divided by the number of elements in the
output, "'sum'": the output will be summed. Note:
"size_average" and "reduce" are in the process of being
deprecated, and in the meantime, specifying either of those
two args will override "reduction". Default: "'mean'"
Shape:
* Input: (*), where * means any number of dimensions.
* Target: (*), same shape as the input.
* Output: scalar. If "reduction" is "'none'", then (*), same
shape as the input.
Examples:
>>> loss = nn.L1Loss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
| https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html | pytorch docs |
torch.cuda.mem_get_info
torch.cuda.mem_get_info(device=None)
Returns the global free and total GPU memory for a given device,
as reported by cudaMemGetInfo.
Parameters:
device (torch.device or int, optional) -- selected
device. Returns statistic for the current device, given by
"current_device()", if "device" is "None" (default).
Return type:
Tuple[int, int]
Note:
See Memory management for more details about GPU memory
management.
| https://pytorch.org/docs/stable/generated/torch.cuda.mem_get_info.html | pytorch docs |
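A minimal sketch (requires a visible CUDA device); the printed fraction will vary:
>>> free, total = torch.cuda.mem_get_info()
>>> print(f"{free / total:.1%} of GPU memory is free")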
torch.Tensor.as_strided
Tensor.as_strided(size, stride, storage_offset=None) -> Tensor
See "torch.as_strided()" | https://pytorch.org/docs/stable/generated/torch.Tensor.as_strided.html | pytorch docs |
torch.isneginf
torch.isneginf(input, *, out=None) -> Tensor
Tests if each element of "input" is negative infinity or not.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.tensor([-float('inf'), float('inf'), 1.2])
>>> torch.isneginf(a)
tensor([ True, False, False])
| https://pytorch.org/docs/stable/generated/torch.isneginf.html | pytorch docs |
torch.divide
torch.divide(input, other, *, rounding_mode=None, out=None) -> Tensor
Alias for "torch.div()". | https://pytorch.org/docs/stable/generated/torch.divide.html | pytorch docs |
MaxPool1d
class torch.nn.MaxPool1d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
Applies a 1D max pooling over an input signal composed of several
input planes.
In the simplest case, the output value of the layer with input size
(N, C, L) and output (N, C, L_{out}) can be precisely described as:
out(N_i, C_j, k) = \max_{m=0, \ldots, \text{kernel\_size} - 1}
input(N_i, C_j, stride \times k + m)
If "padding" is non-zero, then the input is implicitly padded with
negative infinity on both sides for "padding" number of points.
"dilation" is the stride between the elements within the sliding
window. This link has a nice visualization of the pooling
parameters.
Note:
When ceil_mode=True, sliding windows are allowed to go off-bounds
if they start within the left padding or the input. Sliding
windows that would start in the right padded region are ignored.
Parameters: | https://pytorch.org/docs/stable/generated/torch.nn.MaxPool1d.html | pytorch docs |
* kernel_size (Union[int, Tuple[int]]) --
The size of the sliding window, must be > 0.
* **stride** (*Union**[**int**, **Tuple**[**int**]**]*) -- The
stride of the sliding window, must be > 0. Default value is
"kernel_size".
* **padding** (*Union**[**int**, **Tuple**[**int**]**]*) --
Implicit negative infinity padding to be added on both sides,
must be >= 0 and <= kernel_size / 2.
* **dilation** (*Union**[**int**, **Tuple**[**int**]**]*) -- The
stride between elements within a sliding window, must be > 0.
* **return_indices** (*bool*) -- If "True", will return the
argmax along with the max values. Useful for
"torch.nn.MaxUnpool1d" later
* **ceil_mode** (*bool*) -- If "True", will use *ceil* instead
of *floor* to compute the output shape. This ensures that
every element in the input tensor is covered by a sliding
window.
Shape: | https://pytorch.org/docs/stable/generated/torch.nn.MaxPool1d.html | pytorch docs |
* Input: (N, C, L_{in}) or (C, L_{in}).
* Output: (N, C, L_{out}) or (C, L_{out}), where
L_{out} = \left\lfloor \frac{L_{in} + 2 \times
\text{padding} - \text{dilation} \times
(\text{kernel\_size} - 1) - 1}{\text{stride}} +
1\right\rfloor
Examples:
>>> # pool of size=3, stride=2
>>> m = nn.MaxPool1d(3, stride=2)
>>> input = torch.randn(20, 16, 50)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.MaxPool1d.html | pytorch docs |
torch.atan2
torch.atan2(input, other, *, out=None) -> Tensor
Element-wise arctangent of \text{input}_i / \text{other}_i with
consideration of the quadrant. Returns a new tensor with the signed
angles in radians between vector (\text{other}_i, \text{input}_i)
and vector (1, 0). (Note that \text{other}_i, the second parameter,
is the x-coordinate, while \text{input}_i, the first parameter, is
the y-coordinate.)
The shapes of "input" and "other" must be broadcastable.
Parameters:
* input (Tensor) -- the first input tensor
* **other** (*Tensor*) -- the second input tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.9041, 0.0196, -0.3108, -2.4423])
>>> torch.atan2(a, torch.randn(4))
tensor([ 0.9833, 0.0811, -1.9743, -1.4151])
| https://pytorch.org/docs/stable/generated/torch.atan2.html | pytorch docs |
torch.nn.functional.multilabel_margin_loss
torch.nn.functional.multilabel_margin_loss(input, target, size_average=None, reduce=None, reduction='mean') -> Tensor
See "MultiLabelMarginLoss" for details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.multilabel_margin_loss.html | pytorch docs |
torch.Tensor.is_inference
Tensor.is_inference() -> bool
See "torch.is_inference()" | https://pytorch.org/docs/stable/generated/torch.Tensor.is_inference.html | pytorch docs |
torch.Tensor.sum
Tensor.sum(dim=None, keepdim=False, dtype=None) -> Tensor
See "torch.sum()" | https://pytorch.org/docs/stable/generated/torch.Tensor.sum.html | pytorch docs |
default_fused_act_fake_quant
torch.quantization.fake_quantize.default_fused_act_fake_quant
alias of functools.partial(, observer=,
quant_min=0, quant_max=255, dtype=torch.quint8){} | https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_fused_act_fake_quant.html | pytorch docs |
torch.autograd.Function.vmap
static Function.vmap(info, in_dims, *args)
Defines a rule for the behavior of this autograd.Function
underneath "torch.vmap()". For a "torch.autograd.Function()" to
support "torch.vmap()", you must either override this staticmethod,
or set "generate_vmap_rule" to "True" (you may not do both).
If you choose to override this staticmethod, it must accept:
* an "info" object as the first argument. "info.batch_size"
specifies the size of the dimension being vmapped over, while
"info.randomness" is the randomness option passed to
"torch.vmap()".
* an "in_dims" tuple as the second argument. For each arg in
"args", "in_dims" has a corresponding "Optional[int]". It is
"None" if the arg is not a Tensor or if the arg is not being
vmapped over, otherwise, it is an integer specifying what
dimension of the Tensor is being vmapped over.
* "*args", which is the same as the args to "forward()".
| https://pytorch.org/docs/stable/generated/torch.autograd.Function.vmap.html | pytorch docs |
The return of the vmap staticmethod is a tuple of "(output,
out_dims)". Similar to "in_dims", "out_dims" should be of the same
structure as "output" and contain one "out_dim" per output that
specifies if the output has the vmapped dimension and what index it
is in.
Please see Extending torch.func with autograd.Function for more
details. | https://pytorch.org/docs/stable/generated/torch.autograd.Function.vmap.html | pytorch docs |
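A minimal sketch of an elementwise Function with a hand-written vmap rule (assumes the PyTorch 2.x style where "forward()" does not take "ctx" and "setup_context()" is defined separately):
import torch

class MyExp(torch.autograd.Function):
    @staticmethod
    def forward(x):
        return x.exp()

    @staticmethod
    def setup_context(ctx, inputs, output):
        ctx.save_for_backward(output)

    @staticmethod
    def backward(ctx, grad_out):
        (result,) = ctx.saved_tensors
        return grad_out * result

    @staticmethod
    def vmap(info, in_dims, x):
        # exp is elementwise, so the batched tensor can be passed through
        # unchanged; the output keeps the batch dimension of the input.
        return x.exp(), in_dims[0]

x = torch.randn(3, 4)
out = torch.vmap(MyExp.apply)(x)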
TripletMarginLoss
class torch.nn.TripletMarginLoss(margin=1.0, p=2.0, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean')
Creates a criterion that measures the triplet loss given input
tensors x1, x2, x3 and a margin with a value greater than 0. This
is used for measuring a relative similarity between samples. A
triplet is composed of a, p and n (i.e., anchor, positive
examples and negative examples, respectively). The shapes of all
input tensors should be (N, D).
The distance swap is described in detail in the paper Learning
shallow convolutional feature descriptors with triplet losses by V.
Balntas, E. Riba et al.
The loss function for each sample in the mini-batch is:
L(a, p, n) = \max \{d(a_i, p_i) - d(a_i, n_i) + {\rm margin},
0\}
where
d(x_i, y_i) = \left\lVert {\bf x}_i - {\bf y}_i \right\rVert_p
See also "TripletMarginWithDistanceLoss", which computes the | https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginLoss.html | pytorch docs |
triplet margin loss for input tensors using a custom distance
function.
Parameters:
* margin (float, optional) -- Default: 1.
* **p** (*int**, **optional*) -- The norm degree for pairwise
distance. Default: 2.
* **swap** (*bool**, **optional*) -- The distance swap is
described in detail in the paper *Learning shallow
convolutional feature descriptors with triplet losses* by V.
Balntas, E. Riba et al. Default: "False".
* **size_average** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged over each
loss element in the batch. Note that for some losses, there
are multiple elements per sample. If the field "size_average"
is set to "False", the losses are instead summed for each
minibatch. Ignored when "reduce" is "False". Default: "True"
* **reduce** (*bool**, **optional*) -- Deprecated (see
| https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginLoss.html | pytorch docs |
"reduction"). By default, the losses are averaged or summed
over observations for each minibatch depending on
"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"True"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the sum of
the output will be divided by the number of elements in the
output, "'sum'": the output will be summed. Note:
"size_average" and "reduce" are in the process of being
deprecated, and in the meantime, specifying either of those
two args will override "reduction". Default: "'mean'"
Shape:
* Input: (N, D) or (D) where D is the vector dimension.
* Output: A Tensor of shape (N) if "reduction" is "'none'" and
input shape is (N, D); a scalar otherwise.
Examples: | https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginLoss.html | pytorch docs |
>>> triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)
>>> anchor = torch.randn(100, 128, requires_grad=True)
>>> positive = torch.randn(100, 128, requires_grad=True)
>>> negative = torch.randn(100, 128, requires_grad=True)
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
| https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginLoss.html | pytorch docs |
torch.Tensor.index_add
Tensor.index_add(dim, index, source, *, alpha=1) -> Tensor
Out-of-place version of "torch.Tensor.index_add_()". | https://pytorch.org/docs/stable/generated/torch.Tensor.index_add.html | pytorch docs |
torch.broadcast_shapes
torch.broadcast_shapes(*shapes) -> Size
Similar to "broadcast_tensors()" but for shapes.
This is equivalent to "torch.broadcast_tensors(*map(torch.empty,
shapes))[0].shape" but avoids the need to create intermediate
tensors. This is useful for broadcasting tensors of common batch
shape but different rightmost shape, e.g. to broadcast mean vectors
with covariance matrices.
Example:
>>> torch.broadcast_shapes((2,), (3, 1), (1, 1, 1))
torch.Size([1, 3, 2])
Parameters:
shapes (torch.Size*) -- Shapes of tensors.
Returns:
A shape compatible with all input shapes.
Return type:
shape (torch.Size)
Raises:
RuntimeError -- If shapes are incompatible. | https://pytorch.org/docs/stable/generated/torch.broadcast_shapes.html | pytorch docs |
torch.nn.functional.conv_transpose1d
torch.nn.functional.conv_transpose1d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor
Applies a 1D transposed convolution operator over an input signal
composed of several input planes, sometimes also called
"deconvolution".
This operator supports TensorFloat32.
See "ConvTranspose1d" for details and output shape.
Note:
In some circumstances when given tensors on a CUDA device and
using CuDNN, this operator may select a nondeterministic
algorithm to increase performance. If this is undesirable, you
can try to make the operation deterministic (potentially at a
performance cost) by setting "torch.backends.cudnn.deterministic
= True". See Reproducibility for more information.
Parameters:
* input -- input tensor of shape (\text{minibatch} ,
\text{in_channels} , iW) | https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose1d.html | pytorch docs |
* **weight** -- filters of shape (\text{in\_channels} ,
\frac{\text{out\_channels}}{\text{groups}} , kW)
* **bias** -- optional bias of shape (\text{out\_channels}).
Default: None
* **stride** -- the stride of the convolving kernel. Can be a
single number or a tuple "(sW,)". Default: 1
* **padding** -- "dilation * (kernel_size - 1) - padding" zero-
padding will be added to both sides of each dimension in the
input. Can be a single number or a tuple "(padW,)". Default: 0
* **output_padding** -- additional size added to one side of
each dimension in the output shape. Can be a single number or
a tuple "(out_padW)". Default: 0
* **groups** -- split input into groups, \text{in\_channels}
should be divisible by the number of groups. Default: 1
* **dilation** -- the spacing between kernel elements. Can be a
single number or a tuple "(dW,)". Default: 1
| https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose1d.html | pytorch docs |
Examples:
>>> inputs = torch.randn(20, 16, 50)
>>> weights = torch.randn(16, 33, 5)
>>> F.conv_transpose1d(inputs, weights)
| https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose1d.html | pytorch docs |
Hardshrink
class torch.nn.Hardshrink(lambd=0.5)
Applies the Hard Shrinkage (Hardshrink) function element-wise.
Hardshrink is defined as:
\text{HardShrink}(x) = \begin{cases} x, & \text{ if } x >
\lambda \\ x, & \text{ if } x < -\lambda \\ 0, & \text{
otherwise } \end{cases}
Parameters:
lambd (float) -- the \lambda value for the Hardshrink
formulation. Default: 0.5
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
[image]
Examples:
>>> m = nn.Hardshrink()
>>> input = torch.randn(2)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.Hardshrink.html | pytorch docs |
torch.dot
torch.dot(input, other, *, out=None) -> Tensor
Computes the dot product of two 1D tensors.
Note:
Unlike NumPy's dot, torch.dot intentionally only supports
computing the dot product of two 1D tensors with the same number
of elements.
Parameters:
* input (Tensor) -- first tensor in the dot product, must
be 1D.
* **other** (*Tensor*) -- second tensor in the dot product, must
be 1D.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.dot(torch.tensor([2, 3]), torch.tensor([2, 1]))
tensor(7)
| https://pytorch.org/docs/stable/generated/torch.dot.html | pytorch docs |
torch.cuda.current_device
torch.cuda.current_device()
Returns the index of a currently selected device.
Return type:
int | https://pytorch.org/docs/stable/generated/torch.cuda.current_device.html | pytorch docs |
AdaptiveMaxPool1d
class torch.nn.AdaptiveMaxPool1d(output_size, return_indices=False)
Applies a 1D adaptive max pooling over an input signal composed of
several input planes.
The output size is L_{out}, for any input size. The number of
output features is equal to the number of input planes.
Parameters:
* output_size (Union[int, Tuple[int]]) --
the target output size L_{out}.
* **return_indices** (*bool*) -- if "True", will return the
indices along with the outputs. Useful to pass to
nn.MaxUnpool1d. Default: "False"
Shape:
* Input: (N, C, L_{in}) or (C, L_{in}).
* Output: (N, C, L_{out}) or (C, L_{out}), where
L_{out}=\text{output\_size}.
-[ Examples ]-
>>> # target output size of 5
>>> m = nn.AdaptiveMaxPool1d(5)
>>> input = torch.randn(1, 64, 8)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveMaxPool1d.html | pytorch docs |
torch.Tensor.isnan
Tensor.isnan() -> Tensor
See "torch.isnan()" | https://pytorch.org/docs/stable/generated/torch.Tensor.isnan.html | pytorch docs |
default_per_channel_qconfig
torch.quantization.qconfig.default_per_channel_qconfig
alias of QConfig(activation=functools.partial(, quant_min=0,
quant_max=127){}, weight=functools.partial(,
dtype=torch.qint8, qscheme=torch.per_channel_symmetric){}) | https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_per_channel_qconfig.html | pytorch docs |
ScriptFunction
class torch.jit.ScriptFunction
Functionally equivalent to a "ScriptModule", but represents a
single function and does not have any attributes or Parameters.
get_debug_state(self: torch._C.ScriptFunction) -> torch._C.GraphExecutorState
save(self: torch._C.ScriptFunction, filename: str, _extra_files: Dict[str, str] = {}) -> None
save_to_buffer(self: torch._C.ScriptFunction, _extra_files: Dict[str, str] = {}) -> bytes | https://pytorch.org/docs/stable/generated/torch.jit.ScriptFunction.html | pytorch docs |
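A minimal sketch: scripting a free function yields a "ScriptFunction", which can then be saved like a module:
>>> @torch.jit.script
... def add_one(x: torch.Tensor) -> torch.Tensor:
...     return x + 1
>>> isinstance(add_one, torch.jit.ScriptFunction)
True
>>> add_one.save("add_one.pt")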
torch.nn.utils.parametrize.cached
torch.nn.utils.parametrize.cached()
Context manager that enables the caching system within
parametrizations registered with "register_parametrization()".
The value of the parametrized objects is computed and cached the
first time they are required when this context manager is active.
The cached values are discarded when leaving the context manager.
This is useful when using a parametrized parameter more than once
in the forward pass. An example of this is when parametrizing the
recurrent kernel of an RNN or when sharing weights.
The simplest way to activate the cache is by wrapping the forward
pass of the neural network
import torch.nn.utils.parametrize as P
...
with P.cached():
output = model(inputs)
in training and evaluation. One may also wrap the parts of the
modules that use the parametrized tensors several times. For
example, the loop of an RNN with a parametrized recurrent kernel:
with P.cached():
for x in xs:
out_rnn = self.rnn_cell(x, out_rnn)
| https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.cached.html | pytorch docs |
LeakyReLU
class torch.ao.nn.quantized.LeakyReLU(scale, zero_point, negative_slope=0.01, inplace=False, device=None, dtype=None)
This is the quantized equivalent of "LeakyReLU".
Parameters:
* scale (float) -- quantization scale of the output tensor
* **zero_point** (*int*) -- quantization zero point of the
output tensor
* **negative_slope** (*float*) -- Controls the angle of the
negative slope. Default: 1e-2
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.LeakyReLU.html | pytorch docs |
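A minimal sketch (the scale and zero_point values are illustrative assumptions); the module consumes and produces quantized tensors:
>>> xq = torch.quantize_per_tensor(torch.randn(2, 3), scale=0.1, zero_point=128, dtype=torch.quint8)
>>> m = torch.ao.nn.quantized.LeakyReLU(scale=0.1, zero_point=128)
>>> yq = m(xq)
>>> y = yq.dequantize()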
torch.reshape
torch.reshape(input, shape) -> Tensor
Returns a tensor with the same data and number of elements as
"input", but with the specified shape. When possible, the returned
tensor will be a view of "input". Otherwise, it will be a copy.
Contiguous inputs and inputs with compatible strides can be
reshaped without copying, but you should not depend on the copying
vs. viewing behavior.
See "torch.Tensor.view()" on when it is possible to return a view.
A single dimension may be -1, in which case it's inferred from the
remaining dimensions and the number of elements in "input".
Parameters:
* input (Tensor) -- the tensor to be reshaped
* **shape** (*tuple of python:int*) -- the new shape
Example:
>>> a = torch.arange(4.)
>>> torch.reshape(a, (2, 2))
tensor([[ 0., 1.],
[ 2., 3.]])
>>> b = torch.tensor([[0, 1], [2, 3]])
>>> torch.reshape(b, (-1,))
tensor([ 0, 1, 2, 3])
| https://pytorch.org/docs/stable/generated/torch.reshape.html | pytorch docs |
torch.get_num_interop_threads
torch.get_num_interop_threads() -> int
Returns the number of threads used for inter-op parallelism on CPU
(e.g. in JIT interpreter) | https://pytorch.org/docs/stable/generated/torch.get_num_interop_threads.html | pytorch docs |
torch.nn.functional.mse_loss
torch.nn.functional.mse_loss(input, target, size_average=None, reduce=None, reduction='mean') -> Tensor
Measures the element-wise mean squared error.
See "MSELoss" for details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.mse_loss.html | pytorch docs |
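A minimal sketch mirroring the "L1Loss" example earlier in this collection:
>>> import torch.nn.functional as F
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> loss = F.mse_loss(input, target)
>>> loss.backward()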
torch.Tensor.less_equal
Tensor.less_equal(other) -> Tensor
See "torch.less_equal()". | https://pytorch.org/docs/stable/generated/torch.Tensor.less_equal.html | pytorch docs |