text | source | category
---|---|---|
torch.acos
torch.acos(input, *, out=None) -> Tensor
Computes the inverse cosine of each element in "input".
\text{out}_{i} = \cos^{-1}(\text{input}_{i})
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.3348, -0.5889, 0.2005, -0.1584])
>>> torch.acos(a)
tensor([ 1.2294, 2.2004, 1.3690, 1.7298])
| https://pytorch.org/docs/stable/generated/torch.acos.html | pytorch docs |
torch.Tensor.diag_embed
Tensor.diag_embed(offset=0, dim1=- 2, dim2=- 1) -> Tensor
See "torch.diag_embed()" | https://pytorch.org/docs/stable/generated/torch.Tensor.diag_embed.html | pytorch docs |
torch.resolve_conj
torch.resolve_conj(input) -> Tensor
Returns a new tensor with materialized conjugation if "input"'s
conjugate bit is set to True, else returns "input". The output
tensor will always have its conjugate bit set to False.
Parameters:
input (Tensor) -- the input tensor.
Example:
>>> x = torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j])
>>> y = x.conj()
>>> y.is_conj()
True
>>> z = y.resolve_conj()
>>> z
tensor([-1 - 1j, -2 - 2j, 3 + 3j])
>>> z.is_conj()
False
| https://pytorch.org/docs/stable/generated/torch.resolve_conj.html | pytorch docs |
torch.Tensor.log10_
Tensor.log10_() -> Tensor
In-place version of "log10()" | https://pytorch.org/docs/stable/generated/torch.Tensor.log10_.html | pytorch docs |
Dropout1d
class torch.nn.Dropout1d(p=0.5, inplace=False)
Randomly zero out entire channels (a channel is a 1D feature map,
e.g., the j-th channel of the i-th sample in the batched input is a
1D tensor \text{input}[i, j]). Each channel will be zeroed out
independently on every forward call with probability "p" using
samples from a Bernoulli distribution.
Usually the input comes from "nn.Conv1d" modules.
As described in the paper Efficient Object Localization Using
Convolutional Networks , if adjacent pixels within feature maps are
strongly correlated (as is normally the case in early convolution
layers) then i.i.d. dropout will not regularize the activations and
will otherwise just result in an effective learning rate decrease.
In this case, "nn.Dropout1d()" will help promote independence
between feature maps and should be used instead.
Parameters:
* p (float, optional) -- probability of an element to
      be zero-ed.
* **inplace** (*bool**, **optional*) -- If set to "True", will
do this operation in-place
Shape:
* Input: (N, C, L) or (C, L).
* Output: (N, C, L) or (C, L) (same shape as input).
Examples:
>>> m = nn.Dropout1d(p=0.2)
>>> input = torch.randn(20, 16, 32)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.Dropout1d.html | pytorch docs |
torch.Tensor.asinh_
Tensor.asinh_() -> Tensor
In-place version of "asinh()" | https://pytorch.org/docs/stable/generated/torch.Tensor.asinh_.html | pytorch docs |
torch.Tensor.ormqr
Tensor.ormqr(input2, input3, left=True, transpose=False) -> Tensor
See "torch.ormqr()" | https://pytorch.org/docs/stable/generated/torch.Tensor.ormqr.html | pytorch docs |
MultiplicativeLR
class torch.optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda, last_epoch=- 1, verbose=False)
Multiply the learning rate of each parameter group by the factor
given in the specified function. When last_epoch=-1, sets initial
lr as lr.
Parameters:
* optimizer (Optimizer) -- Wrapped optimizer.
* **lr_lambda** (*function** or **list*) -- A function which
computes a multiplicative factor given an integer parameter
epoch, or a list of such functions, one for each group in
optimizer.param_groups.
* **last_epoch** (*int*) -- The index of last epoch. Default:
-1.
* **verbose** (*bool*) -- If "True", prints a message to stdout
for each update. Default: "False".
-[ Example ]-
lmbda = lambda epoch: 0.95
scheduler = MultiplicativeLR(optimizer, lr_lambda=lmbda)
for epoch in range(100):
train(...)
validate(...)
scheduler.step()
get_last_lr()
Return last computed learning rate by current scheduler.
load_state_dict(state_dict)
Loads the schedulers state.
Parameters:
**state_dict** (*dict*) -- scheduler state. Should be an
object returned from a call to "state_dict()".
print_lr(is_verbose, group, lr, epoch=None)
Display the current learning rate.
state_dict()
Returns the state of the scheduler as a "dict".
It contains an entry for every variable in self.__dict__ which
is not the optimizer. The learning rate lambda functions will
only be saved if they are callable objects and not if they are
functions or lambdas.
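  A minimal illustrative sketch (not from the original entry) of how the
  multiplicative factor compounds across calls to "step()"; the toy model and
  optimizer below are placeholders chosen only for the example:
    >>> import torch
    >>> model = torch.nn.Linear(2, 1)
    >>> optimizer = torch.optim.SGD(model.parameters(), lr=1.0)
    >>> scheduler = torch.optim.lr_scheduler.MultiplicativeLR(
    ...     optimizer, lr_lambda=lambda epoch: 0.5)
    >>> for _ in range(3):
    ...     optimizer.step()
    ...     scheduler.step()
    >>> scheduler.get_last_lr()  # 1.0 * 0.5 * 0.5 * 0.5
    [0.125]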
| https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.MultiplicativeLR.html | pytorch docs |
torch.igamma
torch.igamma(input, other, *, out=None) -> Tensor
Alias for "torch.special.gammainc()". | https://pytorch.org/docs/stable/generated/torch.igamma.html | pytorch docs |
torch.Tensor.div
Tensor.div(value, *, rounding_mode=None) -> Tensor
See "torch.div()" | https://pytorch.org/docs/stable/generated/torch.Tensor.div.html | pytorch docs |
torch.cuda.reset_peak_memory_stats
torch.cuda.reset_peak_memory_stats(device=None)
Resets the "peak" stats tracked by the CUDA memory allocator.
See "memory_stats()" for details. Peak stats correspond to the
"peak" key in each individual stat dict.
Parameters:
device (torch.device or int, optional) -- selected
device. Returns statistic for the current device, given by
"current_device()", if "device" is "None" (default).
Note:
See Memory management for more details about GPU memory
management.
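  A hedged usage sketch (assumes a CUDA device is available; not part of the
  original entry): the peak counter reported by "max_memory_allocated()" only
  grows until it is explicitly re-based with this call:
    >>> import torch
    >>> x = torch.randn(1024, 1024, device="cuda")
    >>> peak_before = torch.cuda.max_memory_allocated()  # includes x
    >>> del x
    >>> torch.cuda.reset_peak_memory_stats()
    >>> peak_after = torch.cuda.max_memory_allocated()   # re-based after the reset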
| https://pytorch.org/docs/stable/generated/torch.cuda.reset_peak_memory_stats.html | pytorch docs |
torch.nn.functional.embedding
torch.nn.functional.embedding(input, weight, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False)
A simple lookup table that looks up embeddings in a fixed
dictionary and size.
This module is often used to retrieve word embeddings using
indices. The input to the module is a list of indices, and the
embedding matrix, and the output is the corresponding word
embeddings.
See "torch.nn.Embedding" for more details.
Parameters:
* input (LongTensor) -- Tensor containing indices into the
embedding matrix
* **weight** (*Tensor*) -- The embedding matrix with number of
rows equal to the maximum possible index + 1, and number of
columns equal to the embedding size
* **padding_idx** (*int**, **optional*) -- If specified, the
entries at "padding_idx" do not contribute to the gradient;
therefore, the embedding vector at "padding_idx" is not
updated during training, i.e. it remains as a fixed "pad".
* **max_norm** (*float**, **optional*) -- If given, each
embedding vector with norm larger than "max_norm" is
renormalized to have norm "max_norm". Note: this will modify
"weight" in-place.
* **norm_type** (*float**, **optional*) -- The p of the p-norm
to compute for the "max_norm" option. Default "2".
* **scale_grad_by_freq** (*bool**, **optional*) -- If given,
this will scale gradients by the inverse of frequency of the
words in the mini-batch. Default "False".
* **sparse** (*bool**, **optional*) -- If "True", gradient
w.r.t. "weight" will be a sparse tensor. See Notes under
"torch.nn.Embedding" for more details regarding sparse
gradients.
Return type:
Tensor
Shape:
* Input: LongTensor of arbitrary shape containing the indices to
      extract
* Weight: Embedding matrix of floating point type with shape
*(V, embedding_dim)*, where V = maximum index + 1 and
embedding_dim = the embedding size
* Output: *(*, embedding_dim)*, where *** is the input shape
Examples:
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.tensor([[1, 2, 4, 5], [4, 3, 2, 9]])
>>> # an embedding matrix containing 10 tensors of size 3
>>> embedding_matrix = torch.rand(10, 3)
>>> F.embedding(input, embedding_matrix)
tensor([[[ 0.8490, 0.9625, 0.6753],
[ 0.9666, 0.7761, 0.6108],
[ 0.6246, 0.9751, 0.3618],
[ 0.4161, 0.2419, 0.7383]],
[[ 0.6246, 0.9751, 0.3618],
[ 0.0237, 0.7794, 0.0528],
[ 0.9666, 0.7761, 0.6108],
[ 0.3385, 0.8612, 0.1867]]])
>>> # example with padding_idx
>>> weights = torch.rand(10, 3)
>>> weights[0, :].zero_()
>>> embedding_matrix = weights
>>> input = torch.tensor([[0, 2, 0, 5]])
>>> F.embedding(input, embedding_matrix, padding_idx=0)
tensor([[[ 0.0000, 0.0000, 0.0000],
[ 0.5609, 0.5384, 0.8720],
[ 0.0000, 0.0000, 0.0000],
[ 0.6262, 0.2438, 0.7471]]])
| https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding.html | pytorch docs |
torch.fft.hfft
torch.fft.hfft(input, n=None, dim=- 1, norm=None, *, out=None) -> Tensor
Computes the one dimensional discrete Fourier transform of a
Hermitian symmetric "input" signal.
Note:
"hfft()"/"ihfft()" are analogous to "rfft()"/"irfft()". The real
FFT expects a real signal in the time-domain and gives a
Hermitian symmetry in the frequency-domain. The Hermitian FFT is
the opposite; Hermitian symmetric in the time-domain and real-
valued in the frequency-domain. For this reason, special care
needs to be taken with the length argument "n", in the same way
as with "irfft()".
Note:
Because the signal is Hermitian in the time-domain, the result
will be real in the frequency domain. Note that some input
frequencies must be real-valued to satisfy the Hermitian
property. In these cases the imaginary component will be ignored.
For example, any imaginary component in "input[0]" would result
in one or more complex frequency terms which cannot be
represented in a real output and so will always be ignored.
Note:
The correct interpretation of the Hermitian input depends on the
length of the original data, as given by "n". This is because
each input shape could correspond to either an odd or even length
signal. By default, the signal is assumed to be even length and
odd signals will not round-trip properly. So, it is recommended
to always pass the signal length "n".
Note:
Supports torch.half and torch.chalf on CUDA with GPU Architecture
SM53 or greater. However it only supports powers of 2 signal
length in every transformed dimension. With default arguments,
size of the transformed dimension should be (2^n + 1) as argument
*n* defaults to even output size = 2 * (transformed_dim_size - 1)
Parameters:
* input (Tensor) -- the input tensor representing a half-
      Hermitian signal
* **n** (*int**, **optional*) -- Output signal length. This
determines the length of the real output. If given, the input
will either be zero-padded or trimmed to this length before
computing the Hermitian FFT. Defaults to even output:
"n=2*(input.size(dim) - 1)".
* **dim** (*int**, **optional*) -- The dimension along which to
take the one dimensional Hermitian FFT.
* **norm** (*str**, **optional*) --
Normalization mode. For the forward transform ("hfft()"),
these correspond to:
* ""forward"" - normalize by "1/n"
* ""backward"" - no normalization
* ""ortho"" - normalize by "1/sqrt(n)" (making the Hermitian
FFT orthonormal)
Calling the backward transform ("ihfft()") with the same
normalization mode will apply an overall normalization of
"1/n" between the two transforms. This is required to make
"ihfft()" the exact inverse.
Default is ""backward"" (no normalization).
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
-[ Example ]-
Taking a real-valued frequency signal and bringing it into the time
domain gives Hermitian symmetric output:
    >>> t = torch.linspace(0, 1, 5)
    >>> t
    tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])
    >>> T = torch.fft.ifft(t)
    >>> T
    tensor([ 0.5000-0.0000j, -0.1250-0.1720j, -0.1250-0.0406j, -0.1250+0.0406j,
            -0.1250+0.1720j])
Note that "T[1] == T[-1].conj()" and "T[2] == T[-2].conj()" is
redundant. We can thus compute the forward transform without
considering negative frequencies:
    >>> torch.fft.hfft(T[:3], n=5)
    tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])
Like with "irfft()", the output length must be given in order to
recover an even length output:
    >>> torch.fft.hfft(T[:3])
    tensor([0.1250, 0.2809, 0.6250, 0.9691])
| https://pytorch.org/docs/stable/generated/torch.fft.hfft.html | pytorch docs |
UpsamplingNearest2d
class torch.nn.UpsamplingNearest2d(size=None, scale_factor=None)
Applies a 2D nearest neighbor upsampling to an input signal
composed of several input channels.
To specify the scale, it takes either the "size" or the
"scale_factor" as it's constructor argument.
When "size" is given, it is the output size of the image (h, w).
Parameters:
* size (int or Tuple[int, int],
optional) -- output spatial sizes
* **scale_factor** (*float** or **Tuple**[**float**,
**float**]**, **optional*) -- multiplier for spatial size.
Warning:
This class is deprecated in favor of "interpolate()".
Shape:
* Input: (N, C, H_{in}, W_{in})
* Output: (N, C, H_{out}, W_{out}) where
H_{out} = \left\lfloor H_{in} \times \text{scale\_factor}
\right\rfloor
W_{out} = \left\lfloor W_{in} \times \text{scale\_factor}
\right\rfloor
  Examples:
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[1., 2.],
[3., 4.]]]])
>>> m = nn.UpsamplingNearest2d(scale_factor=2)
>>> m(input)
tensor([[[[1., 1., 2., 2.],
[1., 1., 2., 2.],
[3., 3., 4., 4.],
[3., 3., 4., 4.]]]])
| https://pytorch.org/docs/stable/generated/torch.nn.UpsamplingNearest2d.html | pytorch docs |
torch.Tensor.abs_
Tensor.abs_() -> Tensor
In-place version of "abs()" | https://pytorch.org/docs/stable/generated/torch.Tensor.abs_.html | pytorch docs |
torch.Tensor.asinh
Tensor.asinh() -> Tensor
See "torch.asinh()" | https://pytorch.org/docs/stable/generated/torch.Tensor.asinh.html | pytorch docs |
torch.subtract
torch.subtract(input, other, *, alpha=1, out=None) -> Tensor
Alias for "torch.sub()". | https://pytorch.org/docs/stable/generated/torch.subtract.html | pytorch docs |
quantize_dynamic
class torch.quantization.quantize_dynamic(model, qconfig_spec=None, dtype=torch.qint8, mapping=None, inplace=False)
Converts a float model to dynamic (i.e. weights-only) quantized
model.
Replaces specified modules with dynamic weight-only quantized
versions and output the quantized model.
For simplest usage provide dtype argument that can be float16 or
qint8. Weight-only quantization by default is performed for layers
with large weights size - i.e. Linear and RNN variants.
Fine grained control is possible with qconfig and mapping that
act similarly to quantize(). If qconfig is provided, the
dtype argument is ignored.
Parameters:
* model -- input model
* **qconfig_spec** --
Either:
* A dictionary that maps from name or type of submodule to
quantization configuration, qconfig applies to all
submodules of a given module unless qconfig for the
submodules are specified (when the submodule already has
qconfig attribute). Entries in the dictionary need to be
QConfig instances.
* A set of types and/or submodule names to apply dynamic
quantization to, in which case the *dtype* argument is used
to specify the bit-width
* **inplace** -- carry out model transformations in-place, the
original module is mutated
* **mapping** -- maps type of a submodule to a type of
corresponding dynamically quantized version with which the
submodule needs to be replaced
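  A minimal sketch of the simplest documented usage (the toy model below is
  made up for illustration): only the "nn.Linear" submodules are replaced by
  dynamically quantized counterparts, while activations stay in float:
    >>> import torch
    >>> from torch import nn
    >>> float_model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))
    >>> quantized_model = torch.quantization.quantize_dynamic(
    ...     float_model, qconfig_spec={nn.Linear}, dtype=torch.qint8)
    >>> out = quantized_model(torch.randn(2, 16))  # same call signature as the float model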
| https://pytorch.org/docs/stable/generated/torch.quantization.quantize_dynamic.html | pytorch docs |
torch.Tensor.istft
Tensor.istft(n_fft, hop_length=None, win_length=None, window=None, center=True, normalized=False, onesided=None, length=None, return_complex=False)
See "torch.istft()" | https://pytorch.org/docs/stable/generated/torch.Tensor.istft.html | pytorch docs |
torch.concatenate
torch.concatenate(tensors, axis=0, out=None) -> Tensor
Alias of "torch.cat()". | https://pytorch.org/docs/stable/generated/torch.concatenate.html | pytorch docs |
ConvTranspose1d
class torch.nn.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)
Applies a 1D transposed convolution operator over an input image
composed of several input planes.
This module can be seen as the gradient of Conv1d with respect to
its input. It is also known as a fractionally-strided convolution
or a deconvolution (although it is not an actual deconvolution
operation as it does not compute a true inverse of convolution).
For more information, see the visualizations here and the
Deconvolutional Networks paper.
This module supports TensorFloat32.
On certain ROCm devices, when using float16 inputs this module will
use different precision for backward.
"stride" controls the stride for the cross-correlation.
"padding" controls the amount of implicit zero padding on both
sides for "dilation * (kernel_size - 1) - padding" number of
points. See note below for details.
"output_padding" controls the additional size added to one side
of the output shape. See note below for details.
"dilation" controls the spacing between the kernel points; also
known as the à trous algorithm. It is harder to describe, but the
link here has a nice visualization of what "dilation" does.
"groups" controls the connections between inputs and outputs.
"in_channels" and "out_channels" must both be divisible by
"groups". For example,
* At groups=1, all inputs are convolved to all outputs.
* At groups=2, the operation becomes equivalent to having two
conv layers side by side, each seeing half the input
channels and producing half the output channels, and both
subsequently concatenated.
* At groups= "in_channels", each input channel is convolved
with its own set of filters (of size
\frac{\text{out_channels}}{\text{in_channels}}).
Note:
The "padding" argument effectively adds "dilation * (kernel_size
- 1) - padding" amount of zero padding to both sizes of the
input. This is set so that when a "Conv1d" and a
"ConvTranspose1d" are initialized with same parameters, they are
inverses of each other in regard to the input and output shapes.
However, when "stride > 1", "Conv1d" maps multiple input shapes
to the same output shape. "output_padding" is provided to resolve
this ambiguity by effectively increasing the calculated output
shape on one side. Note that "output_padding" is only used to
find output shape, but does not actually add zero-padding to
output.
Note:
In some circumstances when using the CUDA backend with CuDNN,
this operator may select a nondeterministic algorithm to increase
performance. If this is undesirable, you can try to make the
operation deterministic (potentially at a performance cost) by
setting "torch.backends.cudnn.deterministic = True". Please see
the notes on Reproducibility for background.
Parameters:
* in_channels (int) -- Number of channels in the input
image
* **out_channels** (*int*) -- Number of channels produced by the
convolution
* **kernel_size** (*int** or **tuple*) -- Size of the convolving
kernel
* **stride** (*int** or **tuple**, **optional*) -- Stride of the
convolution. Default: 1
* **padding** (*int** or **tuple**, **optional*) -- "dilation *
(kernel_size - 1) - padding" zero-padding will be added to
both sides of the input. Default: 0
* **output_padding** (*int** or **tuple**, **optional*) --
Additional size added to one side of the output shape.
Default: 0
* **groups** (*int**, **optional*) -- Number of blocked
connections from input channels to output channels. Default: 1
* **bias** (*bool**, **optional*) -- If "True", adds a learnable
bias to the output. Default: "True"
* **dilation** (*int** or **tuple**, **optional*) -- Spacing
between kernel elements. Default: 1
Shape:
* Input: (N, C_{in}, L_{in}) or (C_{in}, L_{in})
* Output: (N, C_{out}, L_{out}) or (C_{out}, L_{out}), where
L_{out} = (L_{in} - 1) \times \text{stride} - 2 \times
\text{padding} + \text{dilation} \times
(\text{kernel\_size} - 1) + \text{output\_padding} + 1
Variables:
* weight (Tensor) -- the learnable weights of the module
of shape (\text{in_channels},
\frac{\text{out_channels}}{\text{groups}},
\text{kernel_size}). The values of these weights are sampled
from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k =
      \frac{groups}{C_\text{out} * \text{kernel_size}}
    * bias (Tensor) -- the learnable bias of the module of
shape (out_channels). If "bias" is "True", then the values of
these weights are sampled from \mathcal{U}(-\sqrt{k},
\sqrt{k}) where k = \frac{groups}{C_\text{out} *
\text{kernel_size}}
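  An illustrative shape check (not part of the original entry) for the L_{out}
  formula above, using arbitrarily chosen channel counts:
    >>> import torch
    >>> from torch import nn
    >>> m = nn.ConvTranspose1d(16, 33, kernel_size=3, stride=2)
    >>> input = torch.randn(20, 16, 50)
    >>> m(input).shape  # (50 - 1) * 2 - 2 * 0 + 1 * (3 - 1) + 0 + 1 = 101
    torch.Size([20, 33, 101])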
| https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose1d.html | pytorch docs |
torch.hypot
torch.hypot(input, other, *, out=None) -> Tensor
Given the legs of a right triangle, return its hypotenuse.
\text{out}_{i} = \sqrt{\text{input}_{i}^{2} +
\text{other}_{i}^{2}}
The shapes of "input" and "other" must be broadcastable.
Parameters:
* input (Tensor) -- the first input tensor
* **other** (*Tensor*) -- the second input tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
    >>> torch.hypot(torch.tensor([4.0]), torch.tensor([3.0, 4.0, 5.0]))
tensor([5.0000, 5.6569, 6.4031])
| https://pytorch.org/docs/stable/generated/torch.hypot.html | pytorch docs |
torch.Tensor.asin
Tensor.asin() -> Tensor
See "torch.asin()" | https://pytorch.org/docs/stable/generated/torch.Tensor.asin.html | pytorch docs |
torch.Tensor.floor_divide_
Tensor.floor_divide_(value) -> Tensor
In-place version of "floor_divide()" | https://pytorch.org/docs/stable/generated/torch.Tensor.floor_divide_.html | pytorch docs |
torch.Tensor.to_sparse_bsc
Tensor.to_sparse_bsc(blocksize, dense_dim) -> Tensor
Convert a tensor to a block sparse column (BSC) storage format of
given blocksize. If the "self" is strided, then the number of
dense dimensions could be specified, and a hybrid BSC tensor will
be created, with dense_dim dense dimensions and self.dim() - 2 -
dense_dim batch dimension.
Parameters:
* blocksize (list, tuple, "torch.Size", optional) -- Block
size of the resulting BSC tensor. A block size must be a tuple
of length two such that its items evenly divide the two sparse
dimensions.
* **dense_dim** (*int**, **optional*) -- Number of dense
dimensions of the resulting BSC tensor. This argument should
be used only if "self" is a strided tensor, and must be a
value between 0 and dimension of "self" tensor minus two.
Example:
>>> dense = torch.randn(10, 10)
>>> sparse = dense.to_sparse_csr()
>>> sparse_bsc = sparse.to_sparse_bsc((5, 5))
>>> sparse_bsc.row_indices()
tensor([0, 1, 0, 1])
>>> dense = torch.zeros(4, 3, 1)
>>> dense[0:2, 0] = dense[0:2, 2] = dense[2:4, 1] = 1
>>> dense.to_sparse_bsc((2, 1), 1)
tensor(ccol_indices=tensor([0, 1, 2, 3]),
row_indices=tensor([0, 1, 0]),
values=tensor([[[[1.]],
[[1.]]],
[[[1.]],
[[1.]]],
[[[1.]],
[[1.]]]]), size=(4, 3, 1), nnz=3,
layout=torch.sparse_bsc)
| https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_bsc.html | pytorch docs |
torch.bartlett_window
torch.bartlett_window(window_length, periodic=True, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Bartlett window function.
w[n] = 1 - \left| \frac{2n}{N-1} - 1 \right| = \begin{cases}
\frac{2n}{N - 1} & \text{if } 0 \leq n \leq \frac{N - 1}{2} \\
2 - \frac{2n}{N - 1} & \text{if } \frac{N - 1}{2} < n < N \\
\end{cases},
where N is the full window size.
The input "window_length" is a positive integer controlling the
returned window size. "periodic" flag determines whether the
returned window trims off the last duplicate value from the
symmetric window and is ready to be used as a periodic window with
functions like "torch.stft()". Therefore, if "periodic" is true,
the N in above formula is in fact \text{window_length} + 1. Also,
we always have "torch.bartlett_window(L, periodic=True)" equal to
"torch.bartlett_window(L + 1, periodic=False)[:-1])".
  Note:
If "window_length" =1, the returned window contains a single
value 1.
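  As a quick check of the periodic/symmetric relationship stated above (a
  sketch added for illustration, with an arbitrary length of 8):
    >>> torch.allclose(torch.bartlett_window(8, periodic=True),
    ...                torch.bartlett_window(9, periodic=False)[:-1])
    True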
Parameters:
* window_length (int) -- the size of returned window
* **periodic** (*bool**, **optional*) -- If True, returns a
window to be used as periodic function. If False, return a
symmetric window.
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()"). Only floating point
types are supported.
* **layout** ("torch.layout", optional) -- the desired layout of
returned window tensor. Only "torch.strided" (dense layout) is
supported.
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Returns:
A 1-D tensor of size (\text{window_length},) containing the
window
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.bartlett_window.html | pytorch docs |
torch.cuda.set_stream
torch.cuda.set_stream(stream)
  Sets the current stream. This is a wrapper API to set the stream.
Usage of this function is discouraged in favor of the "stream"
context manager.
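  A hedged sketch (assumes a CUDA device is available) contrasting the global
  setter with the recommended scoped context manager:
    >>> s = torch.cuda.Stream()
    >>> torch.cuda.set_stream(s)        # global, stays in effect afterwards
    >>> with torch.cuda.stream(s):      # preferred: limited to this block
    ...     y = torch.ones(4, device="cuda") * 2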
Parameters:
stream (Stream) -- selected stream. This function is a no-
op if this argument is "None". | https://pytorch.org/docs/stable/generated/torch.cuda.set_stream.html | pytorch docs |
torch.equal
torch.equal(input, other) -> bool
"True" if two tensors have the same size and elements, "False"
otherwise.
Example:
>>> torch.equal(torch.tensor([1, 2]), torch.tensor([1, 2]))
True
| https://pytorch.org/docs/stable/generated/torch.equal.html | pytorch docs |
torch.i0
torch.i0(input, *, out=None) -> Tensor
Alias for "torch.special.i0()". | https://pytorch.org/docs/stable/generated/torch.i0.html | pytorch docs |
BasePruningMethod
class torch.nn.utils.prune.BasePruningMethod
Abstract base class for creation of new pruning techniques.
Provides a skeleton for customization requiring the overriding of
methods such as "compute_mask()" and "apply()".
classmethod apply(module, name, args, importance_scores=None, *kwargs)
Adds the forward pre-hook that enables pruning on the fly and
the reparametrization of a tensor in terms of the original
tensor and the pruning mask.
Parameters:
* **module** (*nn.Module*) -- module containing the tensor to
prune
* **name** (*str*) -- parameter name within "module" on which
pruning will act.
* **args** -- arguments passed on to a subclass of
"BasePruningMethod"
* **importance_scores** (*torch.Tensor*) -- tensor of
importance scores (of same shape as module parameter) used
to compute mask for pruning. The values in this tensor
indicate the importance of the corresponding elements in
the parameter being pruned. If unspecified or None, the
parameter will be used in its place.
* **kwargs** -- keyword arguments passed on to a subclass of
a "BasePruningMethod"
apply_mask(module)
Simply handles the multiplication between the parameter being
pruned and the generated mask. Fetches the mask and the original
tensor from the module and returns the pruned version of the
tensor.
Parameters:
**module** (*nn.Module*) -- module containing the tensor to
prune
Returns:
pruned version of the input tensor
Return type:
pruned_tensor (torch.Tensor)
abstract compute_mask(t, default_mask)
Computes and returns a mask for the input tensor "t". Starting
from a base "default_mask" (which should be a mask of ones if
the tensor has not been pruned yet), generate a random mask to
apply on top of the "default_mask" according to the specific
pruning method recipe.
Parameters:
       * **t** (*torch.Tensor*) -- tensor representing the importance
         scores of the parameter to prune.
       * **default_mask** (*torch.Tensor*) -- Base mask from previous
         pruning iterations, that need to be respected after the new
         mask is applied. Same dims as "t".
Returns:
mask to apply to "t", of same dims as "t"
Return type:
mask (torch.Tensor)
prune(t, default_mask=None, importance_scores=None)
Computes and returns a pruned version of input tensor "t"
according to the pruning rule specified in "compute_mask()".
Parameters:
* **t** (*torch.Tensor*) -- tensor to prune (of same
dimensions as "default_mask").
* **importance_scores** (*torch.Tensor*) -- tensor of
importance scores (of same shape as "t") used to compute
mask for pruning "t". The values in this tensor indicate
the importance of the corresponding elements in the "t"
that is being pruned. If unspecified or None, the tensor
"t" will be used in its place.
* **default_mask** (*torch.Tensor**, **optional*) -- mask
from previous pruning iteration, if any. To be considered
when determining what portion of the tensor that pruning
should act on. If None, default to a mask of ones.
Returns:
pruned version of tensor "t".
remove(module)
Removes the pruning reparameterization from a module. The pruned
parameter named "name" remains permanently pruned, and the
parameter named "name+'_orig'" is removed from the parameter
list. Similarly, the buffer named "name+'_mask'" is removed from
the buffers.
Note:
Pruning itself is NOT undone or reversed!
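  A minimal sketch of a custom technique built on this base class (the class
  name and masking rule below are made up for illustration): only
  "compute_mask()" needs to be overridden, and the inherited "apply()"
  classmethod wires up the forward pre-hook and reparametrization:
    >>> import torch
    >>> from torch import nn
    >>> from torch.nn.utils import prune
    >>> class EveryOtherPruning(prune.BasePruningMethod):
    ...     """Hypothetical method: zero out every other entry of the tensor."""
    ...     PRUNING_TYPE = 'unstructured'
    ...     def compute_mask(self, t, default_mask):
    ...         mask = default_mask.clone()
    ...         mask.view(-1)[::2] = 0
    ...         return mask
    >>> m = nn.Linear(4, 3)
    >>> _ = EveryOtherPruning.apply(m, name='weight')
    >>> bool((m.weight.flatten()[::2] == 0).all())  # pruned entries are zero
    True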
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.BasePruningMethod.html | pytorch docs |
torch.Tensor.triangular_solve
Tensor.triangular_solve(A, upper=True, transpose=False, unitriangular=False)
See "torch.triangular_solve()" | https://pytorch.org/docs/stable/generated/torch.Tensor.triangular_solve.html | pytorch docs |
torch.Tensor.addcdiv_
Tensor.addcdiv_(tensor1, tensor2, *, value=1) -> Tensor
In-place version of "addcdiv()" | https://pytorch.org/docs/stable/generated/torch.Tensor.addcdiv_.html | pytorch docs |
LinearReLU
class torch.ao.nn.intrinsic.LinearReLU(linear, relu)
This is a sequential container which calls the Linear and ReLU
modules. During quantization this will be replaced with the
corresponding fused module. | https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.LinearReLU.html | pytorch docs |
torch.logspace
torch.logspace(start, end, steps, base=10.0, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Creates a one-dimensional tensor of size "steps" whose values are
evenly spaced from {{\text{{base}}}}^{{\text{{start}}}} to
{{\text{{base}}}}^{{\text{{end}}}}, inclusive, on a logarithmic
scale with base "base". That is, the values are:
(\text{base}^{\text{start}}, \text{base}^{(\text{start} +
\frac{\text{end} - \text{start}}{ \text{steps} - 1})}, \ldots,
\text{base}^{(\text{start} + (\text{steps} - 2) *
\frac{\text{end} - \text{start}}{ \text{steps} - 1})},
\text{base}^{\text{end}})
From PyTorch 1.11 logspace requires the steps argument. Use
steps=100 to restore the previous behavior.
Parameters:
* start (float) -- the starting value for the set of
points
* **end** (*float*) -- the ending value for the set of points
    * **steps** (*int*) -- size of the constructed tensor
    * **base** (*float**, **optional*) -- base of the logarithm
function. Default: "10.0".
Keyword Arguments:
* out (Tensor, optional) -- the output tensor.
* **dtype** (*torch.dtype**, **optional*) -- the data type to
perform the computation in. Default: if None, uses the global
default dtype (see torch.get_default_dtype()) when both
"start" and "end" are real, and corresponding complex dtype
when either is complex.
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Example:
>>> torch.logspace(start=-10, end=10, steps=5)
tensor([ 1.0000e-10, 1.0000e-05, 1.0000e+00, 1.0000e+05, 1.0000e+10])
>>> torch.logspace(start=0.1, end=1.0, steps=5)
tensor([ 1.2589, 2.1135, 3.5481, 5.9566, 10.0000])
>>> torch.logspace(start=0.1, end=1.0, steps=1)
tensor([1.2589])
>>> torch.logspace(start=2, end=2, steps=1, base=2)
tensor([4.0])
| https://pytorch.org/docs/stable/generated/torch.logspace.html | pytorch docs |
max_pool2d
class torch.ao.nn.quantized.functional.max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)
Applies a 2D max pooling over a quantized input signal composed of
several quantized input planes.
Note:
The input quantization parameters are propagated to the output.
See "MaxPool2d" for details. | https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.max_pool2d.html | pytorch docs |
torch.signal.windows.hamming
torch.signal.windows.hamming(M, *, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)
Computes the Hamming window.
The Hamming window is defined as follows:
w_n = \alpha - \beta\ \cos \left( \frac{2 \pi n}{M - 1} \right)
The window is normalized to 1 (maximum value is 1). However, the 1
doesn't appear if "M" is even and "sym" is True.
Parameters:
M (int) -- the length of the window. In other words, the
number of points of the returned window.
Keyword Arguments:
* sym (bool, optional) -- If False, returns a
periodic window suitable for use in spectral analysis. If
True, returns a symmetric window suitable for use in filter
design. Default: True.
* **alpha** (*float**, **optional*) -- The coefficient \alpha in
the equation above.
* **beta** (*float**, **optional*) -- The coefficient \beta in
the equation above.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Return type:
Tensor
Examples:
>>> # Generates a symmetric Hamming window.
>>> torch.signal.windows.hamming(10)
tensor([0.0800, 0.1876, 0.4601, 0.7700, 0.9723, 0.9723, 0.7700, 0.4601, 0.1876, 0.0800])
>>> # Generates a periodic Hamming window.
>>> torch.signal.windows.hamming(10, sym=False)
tensor([0.0800, 0.1679, 0.3979, 0.6821, 0.9121, 1.0000, 0.9121, 0.6821, 0.3979, 0.1679])
| https://pytorch.org/docs/stable/generated/torch.signal.windows.hamming.html | pytorch docs |
torch.fft.fftn
torch.fft.fftn(input, s=None, dim=None, norm=None, *, out=None) -> Tensor
Computes the N dimensional discrete Fourier transform of "input".
Note:
The Fourier domain representation of any real signal satisfies
the Hermitian property: "X[i_1, ..., i_n] = conj(X[-i_1, ...,
-i_n])". This function always returns all positive and negative
frequency terms even though, for real inputs, half of these
values are redundant. "rfftn()" returns the more compact one-
sided representation where only the positive frequencies of the
last dimension are returned.
Note:
Supports torch.half and torch.chalf on CUDA with GPU Architecture
SM53 or greater. However it only supports powers of 2 signal
length in every transformed dimensions.
Parameters:
* input (Tensor) -- the input tensor
* **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the
transformed dimensions. If given, each dimension "dim[i]" will
either be zero-padded or trimmed to the length "s[i]" before
computing the FFT. If a length "-1" is specified, no padding
is done in that dimension. Default: "s = [input.size(d) for d
in dim]"
* **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be
transformed. Default: all dimensions, or the last "len(s)"
dimensions if "s" is given.
* **norm** (*str**, **optional*) --
Normalization mode. For the forward transform ("fftn()"),
these correspond to:
* ""forward"" - normalize by "1/n"
* ""backward"" - no normalization
* ""ortho"" - normalize by "1/sqrt(n)" (making the FFT
orthonormal)
Where "n = prod(s)" is the logical FFT size. Calling the
backward transform ("ifftn()") with the same normalization
mode will apply an overall normalization of "1/n" between the
two transforms. This is required to make "ifftn()" the exact
inverse.
Default is ""backward"" (no normalization).
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
-[ Example ]-
    >>> x = torch.rand(10, 10, dtype=torch.complex64)
    >>> fftn = torch.fft.fftn(x)
The discrete Fourier transform is separable, so "fftn()" here is
equivalent to two one-dimensional "fft()" calls:
    >>> two_ffts = torch.fft.fft(torch.fft.fft(x, dim=0), dim=1)
    >>> torch.testing.assert_close(fftn, two_ffts, check_stride=False)
| https://pytorch.org/docs/stable/generated/torch.fft.fftn.html | pytorch docs |
torch.atanh
torch.atanh(input, *, out=None) -> Tensor
Returns a new tensor with the inverse hyperbolic tangent of the
elements of "input".
Note:
The domain of the inverse hyperbolic tangent is *(-1, 1)* and
values outside this range will be mapped to "NaN", except for the
values *1* and *-1* for which the output is mapped to *+/-INF*
respectively.
\text{out}_{i} = \tanh^{-1}(\text{input}_{i})
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4).uniform_(-1, 1)
>>> a
tensor([ -0.9385, 0.2968, -0.8591, -0.1871 ])
>>> torch.atanh(a)
tensor([ -1.7253, 0.3060, -1.2899, -0.1893 ])
| https://pytorch.org/docs/stable/generated/torch.atanh.html | pytorch docs |
torch.Tensor.mul_
Tensor.mul_(value) -> Tensor
In-place version of "mul()". | https://pytorch.org/docs/stable/generated/torch.Tensor.mul_.html | pytorch docs |
torch.Tensor.logical_or
Tensor.logical_or() -> Tensor
See "torch.logical_or()" | https://pytorch.org/docs/stable/generated/torch.Tensor.logical_or.html | pytorch docs |
MinMaxObserver
class torch.quantization.observer.MinMaxObserver(dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None, factory_kwargs=None, eps=1.1920928955078125e-07)
Observer module for computing the quantization parameters based on
the running min and max values.
This observer uses the tensor min/max statistics to compute the
quantization parameters. The module records the running minimum and
maximum of incoming tensors, and uses this statistic to compute the
quantization parameters.
Parameters:
* dtype -- dtype argument to the quantize node needed to
implement the reference model spec.
* **qscheme** -- Quantization scheme to be used
* **reduce_range** -- Reduces the range of the quantized data
type by 1 bit
* **quant_min** -- Minimum quantization value. If unspecified,
it will follow the 8-bit setup.
* **quant_max** -- Maximum quantization value. If unspecified,
it will follow the 8-bit setup.
* **eps** (*Tensor*) -- Epsilon value for float32, Defaults to
*torch.finfo(torch.float32).eps*.
Given running min/max as x_\text{min} and x_\text{max}, scale s and
zero point z are computed as:
The running minimum/maximum x_\text{min/max} is computed as:
\begin{array}{ll} x_\text{min} &= \begin{cases} \min(X) &
\text{if~}x_\text{min} = \text{None} \\
\min\left(x_\text{min}, \min(X)\right) & \text{otherwise}
\end{cases}\\ x_\text{max} &= \begin{cases} \max(X) &
\text{if~}x_\text{max} = \text{None} \\
\max\left(x_\text{max}, \max(X)\right) & \text{otherwise}
\end{cases}\\ \end{array}
where X is the observed tensor.
The scale s and zero point z are then computed as:
      \begin{aligned}
          \text{if Symmetric:}&\\
          &s = 2 \max(|x_\text{min}|, x_\text{max}) /
              \left( Q_\text{max} - Q_\text{min} \right) \\
          &z = \begin{cases}
              0 & \text{if dtype is qint8} \\
              128 & \text{otherwise}
          \end{cases}\\
          \text{Otherwise:}&\\
          &s = \left( x_\text{max} - x_\text{min} \right) /
              \left( Q_\text{max} - Q_\text{min} \right) \\
          &z = Q_\text{min} - \text{round}(x_\text{min} / s)
      \end{aligned}
where Q_\text{min} and Q_\text{max} are the minimum and maximum of
the quantized data type.
Warning:
"dtype" can only take "torch.qint8" or "torch.quint8".
Note:
If the running minimum equals to the running maximum, the scale
and zero_point are set to 1.0 and 0.
calculate_qparams()
Calculates the quantization parameters.
forward(x_orig)
Records the running minimum and maximum of "x".
reset_min_max_vals()
Resets the min/max values.
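  A small usage sketch (added for illustration; the comment restates the
  formulas above under the default 8-bit quantization range):
    >>> import torch
    >>> from torch.quantization.observer import MinMaxObserver
    >>> obs = MinMaxObserver(dtype=torch.quint8, qscheme=torch.per_tensor_affine)
    >>> _ = obs(torch.tensor([-1.0, 0.0, 2.0]))   # forward() records running min/max
    >>> scale, zero_point = obs.calculate_qparams()
    >>> # s = (x_max - x_min) / (Q_max - Q_min), z = Q_min - round(x_min / s)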
| https://pytorch.org/docs/stable/generated/torch.quantization.observer.MinMaxObserver.html | pytorch docs |
torch.Tensor.is_floating_point
Tensor.is_floating_point() -> bool
Returns True if the data type of "self" is a floating point data
type. | https://pytorch.org/docs/stable/generated/torch.Tensor.is_floating_point.html | pytorch docs |
torch.cosh
torch.cosh(input, *, out=None) -> Tensor
Returns a new tensor with the hyperbolic cosine of the elements of
"input".
\text{out}_{i} = \cosh(\text{input}_{i})
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.1632, 1.1835, -0.6979, -0.7325])
>>> torch.cosh(a)
tensor([ 1.0133, 1.7860, 1.2536, 1.2805])
Note:
When "input" is on the CPU, the implementation of torch.cosh may
use the Sleef library, which rounds very large results to
infinity or negative infinity. See here for details.
| https://pytorch.org/docs/stable/generated/torch.cosh.html | pytorch docs |
torch.Tensor.log2_
Tensor.log2_() -> Tensor
In-place version of "log2()" | https://pytorch.org/docs/stable/generated/torch.Tensor.log2_.html | pytorch docs |
torch.msort
torch.msort(input, *, out=None) -> Tensor
Sorts the elements of the "input" tensor along its first dimension
in ascending order by value.
Note:
*torch.msort(t)* is equivalent to *torch.sort(t, dim=0)[0]*. See
also "torch.sort()".
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> t = torch.randn(3, 4)
>>> t
tensor([[-0.1321, 0.4370, -1.2631, -1.1289],
[-2.0527, -1.1250, 0.2275, 0.3077],
[-0.0881, -0.1259, -0.5495, 1.0284]])
>>> torch.msort(t)
tensor([[-2.0527, -1.1250, -1.2631, -1.1289],
[-0.1321, -0.1259, -0.5495, 0.3077],
[-0.0881, 0.4370, 0.2275, 1.0284]])
| https://pytorch.org/docs/stable/generated/torch.msort.html | pytorch docs |
GroupNorm
class torch.ao.nn.quantized.GroupNorm(num_groups, num_channels, weight, bias, scale, zero_point, eps=1e-05, affine=True, device=None, dtype=None)
This is the quantized version of "GroupNorm".
Additional args:
* scale - quantization scale of the output, type: double.
* **zero_point** - quantization zero point of the output, type:
long.
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.GroupNorm.html | pytorch docs |
torch.nn.functional.cosine_similarity
torch.nn.functional.cosine_similarity(x1, x2, dim=1, eps=1e-8) -> Tensor
Returns cosine similarity between "x1" and "x2", computed along
dim. "x1" and "x2" must be broadcastable to a common shape. "dim"
refers to the dimension in this common shape. Dimension "dim" of
the output is squeezed (see "torch.squeeze()"), resulting in the
output tensor having 1 fewer dimension.
\text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert
_2 \cdot \Vert x_2 \Vert _2, \epsilon)}
Supports type promotion.
Parameters:
* x1 (Tensor) -- First input.
* **x2** (*Tensor*) -- Second input.
* **dim** (*int**, **optional*) -- Dimension along which cosine
similarity is computed. Default: 1
* **eps** (*float**, **optional*) -- Small value to avoid
division by zero. Default: 1e-8
Example:
>>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> output = F.cosine_similarity(input1, input2)
>>> print(output)
| https://pytorch.org/docs/stable/generated/torch.nn.functional.cosine_similarity.html | pytorch docs |
torch._foreach_log2
torch._foreach_log2(self: List[Tensor]) -> List[Tensor]
Apply "torch.log2()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_log2.html | pytorch docs |
torch.Tensor.copysign_
Tensor.copysign_(other) -> Tensor
In-place version of "copysign()" | https://pytorch.org/docs/stable/generated/torch.Tensor.copysign_.html | pytorch docs |
torch._foreach_reciprocal
torch._foreach_reciprocal(self: List[Tensor]) -> List[Tensor]
Apply "torch.reciprocal()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_reciprocal.html | pytorch docs |
torch.Tensor.divide
Tensor.divide(value, *, rounding_mode=None) -> Tensor
See "torch.divide()" | https://pytorch.org/docs/stable/generated/torch.Tensor.divide.html | pytorch docs |
torch.signal.windows.general_cosine
torch.signal.windows.general_cosine(M, *, a, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)
Computes the general cosine window.
The general cosine window is defined as follows:
w_n = \sum^{M-1}_{i=0} (-1)^i a_i \cos{ \left( \frac{2 \pi i
n}{M - 1}\right)}
The window is normalized to 1 (maximum value is 1). However, the 1
doesn't appear if "M" is even and "sym" is True.
Parameters:
M (int) -- the length of the window. In other words, the
number of points of the returned window.
Keyword Arguments:
* a (Iterable) -- the coefficients associated to each of
the cosine functions.
* **sym** (*bool**, **optional*) -- If *False*, returns a
periodic window suitable for use in spectral analysis. If
*True*, returns a symmetric window suitable for use in filter
design. Default: *True*.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Return type:
Tensor
Examples:
>>> # Generates a symmetric general cosine window with 3 coefficients.
    >>> torch.signal.windows.general_cosine(10, a=[0.46, 0.23, 0.31], sym=True)
tensor([0.5400, 0.3376, 0.1288, 0.4200, 0.9136, 0.9136, 0.4200, 0.1288, 0.3376, 0.5400])
    >>> # Generates a periodic general cosine window with 2 coefficients.
>>> torch.signal.windows.general_cosine(10, a=[0.5, 1 - 0.5], sym=False)
tensor([0.0000, 0.0955, 0.3455, 0.6545, 0.9045, 1.0000, 0.9045, 0.6545, 0.3455, 0.0955])
| https://pytorch.org/docs/stable/generated/torch.signal.windows.general_cosine.html | pytorch docs |
ConvReLU3d
class torch.ao.nn.intrinsic.ConvReLU3d(conv, relu)
This is a sequential container which calls the Conv3d and ReLU
modules. During quantization this will be replaced with the
corresponding fused module. | https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvReLU3d.html | pytorch docs |
torch.block_diag
torch.block_diag(*tensors)
Create a block diagonal matrix from provided tensors.
Parameters:
*tensors -- One or more tensors with 0, 1, or 2 dimensions.
Returns:
A 2 dimensional tensor with all the input tensors arranged in
order such that their upper left and lower right corners are
diagonally adjacent. All other elements are set to 0.
Return type:
Tensor
Example:
>>> import torch
>>> A = torch.tensor([[0, 1], [1, 0]])
>>> B = torch.tensor([[3, 4, 5], [6, 7, 8]])
>>> C = torch.tensor(7)
>>> D = torch.tensor([1, 2, 3])
>>> E = torch.tensor([[4], [5], [6]])
>>> torch.block_diag(A, B, C, D, E)
tensor([[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 3, 4, 5, 0, 0, 0, 0, 0],
[0, 0, 6, 7, 8, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 7, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 2, 3, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 4],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 5],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 6]]) | https://pytorch.org/docs/stable/generated/torch.block_diag.html | pytorch docs |
torch.Tensor.unique
Tensor.unique(sorted=True, return_inverse=False, return_counts=False, dim=None)
Returns the unique elements of the input tensor.
See "torch.unique()" | https://pytorch.org/docs/stable/generated/torch.Tensor.unique.html | pytorch docs |
torch.nn.functional.max_pool3d
torch.nn.functional.max_pool3d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)
Applies a 3D max pooling over an input signal composed of several
input planes.
Note:
The order of "ceil_mode" and "return_indices" is different from
what seen in "MaxPool3d", and will change in a future release.
See "MaxPool3d" for details.
Parameters:
* input -- input tensor (\text{minibatch} ,
\text{in_channels} , iD, iH , iW), minibatch dim optional.
* **kernel_size** -- size of the pooling region. Can be a single
number or a tuple *(kT, kH, kW)*
* **stride** -- stride of the pooling operation. Can be a single
number or a tuple *(sT, sH, sW)*. Default: "kernel_size"
* **padding** -- Implicit negative infinity padding to be added
on both sides, must be >= 0 and <= kernel_size / 2.
    * **dilation** -- The stride between elements within a sliding
      window, must be > 0.
    * **ceil_mode** -- If "True", will use ceil instead of floor
      to compute the output shape. This ensures that every element
      in the input tensor is covered by a sliding window.
    * **return_indices** -- If "True", will return the argmax along
with the max values. Useful for
"torch.nn.functional.max_unpool3d" later
| https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool3d.html | pytorch docs |
enable_grad
class torch.enable_grad
Context-manager that enables gradient calculation.
Enables gradient calculation, if it has been disabled via "no_grad"
or "set_grad_enabled".
This context manager is thread local; it will not affect
computation in other threads.
Also functions as a decorator. (Make sure to instantiate with
  parentheses.)
Note:
enable_grad is one of several mechanisms that can enable or
disable gradients locally see Locally disabling gradient
computation for more information on how they compare.
Note:
This API does not apply to forward-mode AD.
Example::
>>> x = torch.tensor([1.], requires_grad=True)
>>> with torch.no_grad():
... with torch.enable_grad():
... y = x * 2
>>> y.requires_grad
True
>>> y.backward()
>>> x.grad
tensor([2.])
>>> @torch.enable_grad()
... def doubler(x):
... return x * 2
    >>> with torch.no_grad():
... z = doubler(x)
>>> z.requires_grad
True
| https://pytorch.org/docs/stable/generated/torch.enable_grad.html | pytorch docs |
InstanceNorm2d
class torch.nn.InstanceNorm2d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None)
Applies Instance Normalization over a 4D input (a mini-batch of 2D
inputs with additional channel dimension) as described in the paper
Instance Normalization: The Missing Ingredient for Fast
Stylization.
y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}}
* \gamma + \beta
The mean and standard-deviation are calculated per-dimension
separately for each object in a mini-batch. \gamma and \beta are
learnable parameter vectors of size C (where C is the input
size) if "affine" is "True". The standard-deviation is calculated
via the biased estimator, equivalent to torch.var(input,
unbiased=False).
By default, this layer uses instance statistics computed from input
data in both training and evaluation modes.
If "track_running_stats" is set to "True", during training this | https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm2d.html | pytorch docs |
layer keeps running estimates of its computed mean and variance,
which are then used for normalization during evaluation. The
running estimates are kept with a default "momentum" of 0.1.
Note:
This "momentum" argument is different from one used in optimizer
classes and the conventional notion of momentum. Mathematically,
the update rule for running statistics here is \hat{x}_\text{new}
= (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times
x_t, where \hat{x} is the estimated statistic and x_t is the new
observed value.
Note:
"InstanceNorm2d" and "LayerNorm" are very similar, but have some
subtle differences. "InstanceNorm2d" is applied on each channel
of channeled data like RGB images, but "LayerNorm" is usually
applied on entire sample and often in NLP tasks. Additionally,
"LayerNorm" applies elementwise affine transform, while
"InstanceNorm2d" usually don't apply affine transform.
  Parameters:
* num_features (int) -- C from an expected input of size
(N, C, H, W) or (C, H, W)
* **eps** (*float*) -- a value added to the denominator for
numerical stability. Default: 1e-5
* **momentum** (*float*) -- the value used for the running_mean
and running_var computation. Default: 0.1
* **affine** (*bool*) -- a boolean value that when set to
"True", this module has learnable affine parameters,
initialized the same way as done for batch normalization.
Default: "False".
* **track_running_stats** (*bool*) -- a boolean value that when
set to "True", this module tracks the running mean and
variance, and when set to "False", this module does not track
such statistics and always uses batch statistics in both
training and eval modes. Default: "False"
Shape:
* Input: (N, C, H, W) or (C, H, W)
* Output: (N, C, H, W) or (C, H, W) (same shape as input)
  Examples:
>>> # Without Learnable Parameters
>>> m = nn.InstanceNorm2d(100)
>>> # With Learnable Parameters
>>> m = nn.InstanceNorm2d(100, affine=True)
>>> input = torch.randn(20, 100, 35, 45)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm2d.html | pytorch docs |
torch._foreach_log_
torch._foreach_log_(self: List[Tensor]) -> None
Apply "torch.log()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_log_.html | pytorch docs |
torch.cholesky
torch.cholesky(input, upper=False, *, out=None) -> Tensor
Computes the Cholesky decomposition of a symmetric positive-
definite matrix A or for batches of symmetric positive-definite
matrices.
If "upper" is "True", the returned matrix "U" is upper-triangular,
and the decomposition has the form:
A = U^TU
If "upper" is "False", the returned matrix "L" is lower-triangular,
and the decomposition has the form:
A = LL^T
If "upper" is "True", and A is a batch of symmetric positive-
definite matrices, then the returned tensor will be composed of
upper-triangular Cholesky factors of each of the individual
matrices. Similarly, when "upper" is "False", the returned tensor
will be composed of lower-triangular Cholesky factors of each of
the individual matrices.
Warning:
"torch.cholesky()" is deprecated in favor of
"torch.linalg.cholesky()" and will be removed in a future PyTorch
  release. "L = torch.cholesky(A)" should be replaced with
L = torch.linalg.cholesky(A)
"U = torch.cholesky(A, upper=True)" should be replaced with
U = torch.linalg.cholesky(A).mH
This transform will produce equivalent results for all valid
(symmetric positive definite) inputs.
Parameters:
* input (Tensor) -- the input tensor A of size (*, n, n)
where *** is zero or more batch dimensions consisting of
symmetric positive-definite matrices.
* **upper** (*bool**, **optional*) -- flag that indicates
whether to return a upper or lower triangular matrix. Default:
"False"
Keyword Arguments:
out (Tensor, optional) -- the output matrix
Example:
>>> a = torch.randn(3, 3)
>>> a = a @ a.mT + 1e-3 # make symmetric positive-definite
>>> l = torch.cholesky(a)
>>> a
tensor([[ 2.4112, -0.7486, 1.4551],
[-0.7486, 1.3544, 0.1294],
[ 1.4551, 0.1294, 1.6724]])
>>> l
tensor([[ 1.5528, 0.0000, 0.0000],
[-0.4821, 1.0592, 0.0000],
[ 0.9371, 0.5487, 0.7023]])
>>> l @ l.mT
tensor([[ 2.4112, -0.7486, 1.4551],
[-0.7486, 1.3544, 0.1294],
[ 1.4551, 0.1294, 1.6724]])
>>> a = torch.randn(3, 2, 2) # Example for batched input
>>> a = a @ a.mT + 1e-03 # make symmetric positive-definite
>>> l = torch.cholesky(a)
>>> z = l @ l.mT
>>> torch.dist(z, a)
tensor(2.3842e-07) | https://pytorch.org/docs/stable/generated/torch.cholesky.html | pytorch docs |