torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction
A "bool" that controls whether reduced precision reductions (e.g.,
with fp16 accumulation type) are allowed with fp16 GEMMs.
torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction
A "bool" that controls whether reduced precision reductions are
allowed with bf16 GEMMs.
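As a quick illustration (a minimal sketch, not part of the original reference; it assumes a CUDA-capable machine), both flags are plain module attributes that can be toggled before running GEMMs:
import torch

# Sketch: disable reduced-precision reductions for fp16/bf16 GEMMs,
# trading some speed for full-precision accumulation.
torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = False
torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction = False

if torch.cuda.is_available():
    a = torch.randn(128, 128, device="cuda", dtype=torch.bfloat16)
    b = torch.randn(128, 128, device="cuda", dtype=torch.bfloat16)
    c = a @ b  # this GEMM now accumulates without reduced-precision reductions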
torch.backends.cuda.cufft_plan_cache
"cufft_plan_cache" caches the cuFFT plans.
torch.backends.cuda.cufft_plan_cache.size
A readonly "int" that shows the number of plans currently in the
cuFFT plan cache.
torch.backends.cuda.cufft_plan_cache.max_size
An "int" that controls the capacity of the cuFFT plan cache.
torch.backends.cuda.cufft_plan_cache.clear()
Clears the cuFFT plan cache.
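A minimal usage sketch (not from the original page; assumes a CUDA build and device) showing how the cache attributes above are typically used together:
import torch

if torch.cuda.is_available():
    cache = torch.backends.cuda.cufft_plan_cache
    cache.max_size = 32                             # bound the cache capacity
    torch.fft.fft(torch.randn(64, device="cuda"))   # creates and caches a cuFFT plan
    print(cache.size)                               # number of plans currently cached
    cache.clear()                                   # drop all cached plans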
torch.backends.cuda.preferred_linalg_library(backend=None)
Warning:
This flag is experimental and subject to change.
When PyTorch runs a CUDA linear algebra operation it often uses the
cuSOLVER or MAGMA libraries, and if both are available it decides
which to use with a heuristic. This flag (a "str") allows
overriding those heuristics.
If "cusolver" is set then cuSOLVER will be used wherever
possible.
If "magma" is set then MAGMA will be used wherever possible.
If "default" (the default) is set then heuristics will be used
to pick between cuSOLVER and MAGMA if both are available.
When no input is given, this function returns the currently
preferred library.
Note: When a library is preferred other libraries may still be used
if the preferred library doesn't implement the operation(s) called.
This flag may achieve better performance if PyTorch's heuristic
library selection is incorrect for your application's inputs.
Currently supported linalg operators:
"torch.linalg.inv()"
"torch.linalg.inv_ex()"
"torch.linalg.cholesky()"
"torch.linalg.cholesky_ex()"
"torch.cholesky_solve()"
"torch.cholesky_inverse()"
"torch.linalg.lu_factor()"
"torch.linalg.lu()"
"torch.linalg.lu_solve()"
"torch.linalg.qr()"
"torch.linalg.eigh()"
"torch.linalg.eighvals()"
"torch.linalg.svd()"
"torch.linalg.svdvals()"
Return type:
_LinalgBackend
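A short sketch (assumes a CUDA device; cuSOLVER availability depends on the build) of setting and querying the preference:
import torch

torch.backends.cuda.preferred_linalg_library("cusolver")    # prefer cuSOLVER
if torch.cuda.is_available():
    A = torch.randn(64, 64, device="cuda")
    A = A @ A.mT + 64 * torch.eye(64, device="cuda")        # make it positive definite
    L = torch.linalg.cholesky(A)                            # runs with the stated preference
print(torch.backends.cuda.preferred_linalg_library())       # query the current preference
torch.backends.cuda.preferred_linalg_library("default")     # restore the heuristics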
class torch.backends.cuda.SDPBackend(value)
Enum class for the scaled dot product attention backends.
Warning:
This flag is experimental and subject to change.
This class needs to stay inline with the enum defined in:
pytorch/aten/src/ATen/native/transformers/sdp_utils_cpp.h
torch.backends.cuda.flash_sdp_enabled()
Warning:
This flag is experimental and subject to change.
Returns whether flash sdp is enabled or not.
torch.backends.cuda.enable_mem_efficient_sdp(enabled)
Warning:
This flag is experimental and subject to change.
Enables or disables memory efficient sdp.
torch.backends.cuda.mem_efficient_sdp_enabled()
Warning:
This flag is experimental and subject to change.
Returns whether memory efficient sdp is enabled or not.
torch.backends.cuda.enable_flash_sdp(enabled)
Warning:
This flag is experimental and subject to change.
Enables or disables flash sdp.
torch.backends.cuda.math_sdp_enabled()
Warning:
This flag is experimental and subject to change.
Returns whether math sdp is enabled or not.
torch.backends.cuda.enable_math_sdp(enabled)
Warning:
This flag is experimental and subject to change.
Enables or disables math sdp.
torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=True, enable_mem_efficient=True)
Warning:
This flag is experimental and subject to change.
This context manager can be used to temporarily enable or disable
flash/memory efficient sdp and math sdp. Upon exiting the context
manager, the previous state of the flags will be restored.
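A minimal sketch (assumes a CUDA device whose kernels support these shapes and dtypes) restricting scaled dot product attention to the memory-efficient backend for one call:
import torch
import torch.nn.functional as F

if torch.cuda.is_available():
    q = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
    k, v = torch.randn_like(q), torch.randn_like(q)
    with torch.backends.cuda.sdp_kernel(enable_flash=False,
                                        enable_math=False,
                                        enable_mem_efficient=True):
        out = F.scaled_dot_product_attention(q, k, v)
    # the previous flag values are restored on exit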
torch.backends.cudnn
torch.backends.cudnn.version()
Returns the version of cuDNN.
torch.backends.cudnn.is_available()
Returns a bool indicating if cuDNN is currently available.
torch.backends.cudnn.enabled
A "bool" that controls whether cuDNN is enabled. | https://pytorch.org/docs/stable/backends.html | pytorch docs |
torch.backends.cudnn.allow_tf32
A "bool" that controls where TensorFloat-32 tensor cores may be
used in cuDNN convolutions on Ampere or newer GPUs. See
TensorFloat-32(TF32) on Ampere devices.
torch.backends.cudnn.deterministic
A "bool" that, if True, causes cuDNN to only use deterministic
convolution algorithms. See also
"torch.are_deterministic_algorithms_enabled()" and
"torch.use_deterministic_algorithms()".
torch.backends.cudnn.benchmark
A "bool" that, if True, causes cuDNN to benchmark multiple
convolution algorithms and select the fastest.
torch.backends.cudnn.benchmark_limit
A "int" that specifies the maximum number of cuDNN convolution
algorithms to try when torch.backends.cudnn.benchmark is True.
Set benchmark_limit to zero to try every available algorithm.
Note that this setting only affects convolutions dispatched via the
cuDNN v8 API.
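A small sketch (settings chosen for illustration, not prescriptive) contrasting the speed-oriented and reproducibility-oriented cuDNN settings described above:
import torch

# Speed: autotune convolution algorithms (useful when input shapes are fixed).
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.benchmark_limit = 10   # only affects the cuDNN v8 API

# Reproducibility instead: disable autotuning and force deterministic algorithms.
# torch.backends.cudnn.benchmark = False
# torch.backends.cudnn.deterministic = True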
torch.backends.mps
torch.backends.mps.is_available()
Returns a bool indicating if MPS is currently available.
Return type:
bool
torch.backends.mps.is_built()
Returns whether PyTorch is built with MPS support. Note that this
doesn't necessarily mean MPS is available; just that if this
PyTorch binary were run on a machine with working MPS drivers and
devices, we would be able to use it.
Return type:
bool
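A common pattern (a small sketch, not from the original page) that combines both checks to pick a device:
import torch

if torch.backends.mps.is_built() and torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")
x = torch.ones(3, device=device)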
torch.backends.mkl
torch.backends.mkl.is_available()
Returns whether PyTorch is built with MKL support.
class torch.backends.mkl.verbose(enable)
On-demand oneMKL verbose functionality. To make it easier to debug
performance issues, oneMKL can dump verbose messages containing
execution information like duration while executing the kernel. The
verbose functionality can be invoked via an environment variable
named MKL_VERBOSE. However, that approach dumps messages for every
step, which produces a large amount of output. Moreover, for
investigating performance issues, verbose messages for a single
iteration are generally enough. This on-demand verbose functionality
makes it possible to control the scope of verbose message dumping.
In the following example, verbose messages will be dumped out for
the second inference only.
import torch
model = torch.nn.Linear(20, 30)   # any oneMKL-backed op works here
data = torch.randn(128, 20)
model(data)                       # no verbose output for this call
with torch.backends.mkl.verbose(torch.backends.mkl.VERBOSE_ON):
    model(data)                   # verbose output only for this call
Parameters:
level -- Verbose level - "VERBOSE_OFF": Disable verbosing -
"VERBOSE_ON": Enable verbosing
torch.backends.mkldnn
torch.backends.mkldnn.is_available()
Returns whether PyTorch is built with MKL-DNN support.
class torch.backends.mkldnn.verbose(level)
On-demand oneDNN (formerly MKL-DNN) verbose functionality. To make
it easier to debug performance issues, oneDNN can dump verbose
messages containing information like kernel size, input data size,
and execution duration while executing the kernel. The verbose
functionality can be invoked via an environment variable named
DNNL_VERBOSE. However, that approach dumps messages for every
step, which produces a large amount of output. Moreover, for
investigating performance issues, verbose messages for a single
iteration are generally enough. This on-demand verbose functionality
makes it possible to control the scope of verbose message dumping.
In the following example, verbose messages will be dumped out for
the second inference only.
import torch
model = torch.nn.Conv2d(1, 8, kernel_size=3)   # convolutions dispatch to oneDNN on CPU
data = torch.randn(1, 1, 32, 32)
model(data)                                    # no verbose output for this call
with torch.backends.mkldnn.verbose(torch.backends.mkldnn.VERBOSE_ON):
    model(data)                                # verbose output only for this call
Parameters:
level -- Verbose level - "VERBOSE_OFF": Disable verbosing -
"VERBOSE_ON": Enable verbosing - "VERBOSE_ON_CREATION": Enable
verbosing, including oneDNN kernel creation
torch.backends.openmp
torch.backends.openmp.is_available()
Returns whether PyTorch is built with OpenMP support.
torch.backends.opt_einsum
torch.backends.opt_einsum.is_available()
Returns a bool indicating if opt_einsum is currently available.
Return type:
bool
torch.backends.opt_einsum.get_opt_einsum()
Returns the opt_einsum package if opt_einsum is currently
available, else None.
Return type:
Any
torch.backends.opt_einsum.enabled
A "bool" that controls whether opt_einsum is enabled ("True" by
default). If so, torch.einsum will use opt_einsum
(https://optimized-einsum.readthedocs.io/en/stable/path_finding.html)
if available to calculate an optimal path of contraction for faster
performance.
If opt_einsum is not available, torch.einsum will fall back to the
default contraction path of left to right.
torch.backends.opt_einsum.strategy
A "str" that specifies which strategies to try when
"torch.backends.opt_einsum.enabled" is "True". By default,
torch.einsum will try the "auto" strategy, but the "greedy" and
"optimal" strategies are also supported. Note that the "optimal"
strategy is factorial in the number of inputs as it tries all
possible paths. See more details in opt_einsum's docs
(https://optimized-einsum.readthedocs.io/en/stable/path_finding.html).
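A short sketch (the einsum expression is chosen arbitrarily for illustration) of enabling opt_einsum and picking a strategy:
import torch

torch.backends.opt_einsum.enabled = True
torch.backends.opt_einsum.strategy = "optimal"  # "auto" (default), "greedy", or "optimal"

a, b, c = torch.randn(8, 16), torch.randn(16, 32), torch.randn(32, 8)
out = torch.einsum("ij,jk,kl->il", a, b, c)     # contraction order chosen by opt_einsum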
torch.backends.xeon
torch.utils.dlpack
torch.utils.dlpack.from_dlpack(ext_tensor) -> Tensor
Converts a tensor from an external library into a "torch.Tensor".
The returned PyTorch tensor will share the memory with the input
tensor (which may have come from another library). Note that in-
place operations will therefore also affect the data of the input
tensor. This may lead to unexpected issues (e.g., other libraries
may have read-only flags or immutable data structures), so the user
should only do this if they know for sure that this is fine.
Parameters:
ext_tensor (object with "__dlpack__" attribute, or a DLPack
capsule) --
The tensor or DLPack capsule to convert.
If "ext_tensor" is a tensor (or ndarray) object, it must support
the "__dlpack__" protocol (i.e., have a "ext_tensor.__dlpack__"
method). Otherwise "ext_tensor" may be a DLPack capsule, which
is an opaque "PyCapsule" instance, typically produced by a
"to_dlpack" function or method.
Return type:
Tensor
Examples:
>>> import torch.utils.dlpack
>>> t = torch.arange(4)
# Convert a tensor directly (supported in PyTorch >= 1.10)
>>> t2 = torch.from_dlpack(t)
>>> t2[:2] = -1 # show that memory is shared
>>> t2
tensor([-1, -1, 2, 3])
>>> t
tensor([-1, -1, 2, 3])
# The old-style DLPack usage, with an intermediate capsule object
>>> capsule = torch.utils.dlpack.to_dlpack(t)
>>> capsule
<capsule object "dltensor" at ...>
>>> t3 = torch.from_dlpack(capsule)
>>> t3
tensor([-1, -1, 2, 3])
>>> t3[0] = -9 # now we're sharing memory between 3 tensors
>>> t3
tensor([-9, -1, 2, 3])
>>> t2
tensor([-9, -1, 2, 3])
>>> t
tensor([-9, -1, 2, 3])
torch.utils.dlpack.to_dlpack(tensor) -> PyCapsule
Returns an opaque object (a "DLPack capsule") representing the
tensor.
Note:
"to_dlpack" is a legacy DLPack interface. The capsule it returns
cannot be used for anything in Python other than use it as input
to "from_dlpack". The more idiomatic use of DLPack is to call
"from_dlpack" directly on the tensor object - this works when
that object has a "__dlpack__" method, which PyTorch and most
other libraries indeed have now.
Warning:
Only call "from_dlpack" once per capsule produced with
"to_dlpack". Behavior when a capsule is consumed multiple times
is undefined.
Parameters:
tensor -- a tensor to be exported
The DLPack capsule shares the tensor's memory.
PyTorch Governance | Build + CI
How to Add a New Maintainer
To be eligible as a maintainer, a person needs to:
Land at least six commits to the related part of the PyTorch
repository
At least one of these commits must be submitted in the last six
months
To add a qualified person to the maintainers' list, please create a PR
that adds the person to the persons of interest page and merge_rules
files. Current maintainers will cast their votes of support. Decision
criteria for approving the PR:
* At least two business days have passed before merging (to ensure the
majority of the contributors have seen it)
* The PR has the correct label (module: ci)
* There are no objections from the current maintainers
* There are at least three net thumbs up from current maintainers (or
all maintainers vote thumbs up when the module has fewer than 3
maintainers).
Probability distributions - torch.distributions
The "distributions" package contains parameterizable probability
distributions and sampling functions. This allows the construction of
stochastic computation graphs and stochastic gradient estimators for
optimization. This package generally follows the design of the
TensorFlow Distributions package.
It is not possible to directly backpropagate through random samples.
However, there are two main methods for creating surrogate functions
that can be backpropagated through. These are the score function
estimator/likelihood ratio estimator/REINFORCE and the pathwise
derivative estimator. REINFORCE is commonly seen as the basis for
policy gradient methods in reinforcement learning, and the pathwise
derivative estimator is commonly seen in the reparameterization trick
in variational autoencoders. Whilst the score function only requires
the value of samples f(x), the pathwise derivative requires the
derivative f'(x). The next sections discuss these two in a
reinforcement learning example. For more details see Gradient
Estimation Using Stochastic Computation Graphs .
Score function
When the probability density function is differentiable with respect
to its parameters, we only need "sample()" and "log_prob()" to
implement REINFORCE:
\Delta\theta = \alpha r \frac{\partial\log
p(a|\pi^\theta(s))}{\partial\theta}
where \theta are the parameters, \alpha is the learning rate, r is the
reward and p(a|\pi^\theta(s)) is the probability of taking action a in
state s given policy \pi^\theta.
In practice we would sample an action from the output of a network,
apply this action in an environment, and then use "log_prob" to
construct an equivalent loss function. Note that we use a negative
because optimizers use gradient descent, whilst the rule above assumes
gradient ascent. With a categorical policy, the code for implementing
REINFORCE would be as follows:
probs = policy_network(state)
# Note that this is equivalent to what used to be called multinomial
m = Categorical(probs)
action = m.sample()
next_state, reward = env.step(action)
loss = -m.log_prob(action) * reward
loss.backward()
Pathwise derivative
The other way to implement these stochastic/policy gradients would be
to use the reparameterization trick from the "rsample()" method, where
the parameterized random variable can be constructed via a
parameterized deterministic function of a parameter-free random
variable. The reparameterized sample therefore becomes differentiable.
The code for implementing the pathwise derivative would be as follows:
params = policy_network(state)
m = Normal(*params)
# Any distribution with .has_rsample == True could work based on the application
action = m.rsample()
next_state, reward = env.step(action) # Assuming that reward is differentiable
loss = -reward
loss.backward()
Distribution
class torch.distributions.distribution.Distribution(batch_shape=torch.Size([]), event_shape=torch.Size([]), validate_args=None)
Bases: "object"
Distribution is the abstract base class for probability
distributions.
property arg_constraints: Dict[str, Constraint]
Returns a dictionary from argument names to "Constraint" objects
that should be satisfied by each argument of this distribution.
Args that are not tensors need not appear in this dict.
property batch_shape: Size
Returns the shape over which parameters are batched.
cdf(value)
Returns the cumulative density/mass function evaluated at
*value*.
Parameters:
**value** (*Tensor*) --
Return type:
*Tensor*
entropy()
Returns entropy of distribution, batched over batch_shape.
Returns:
Tensor of shape batch_shape.
Return type:
*Tensor*
enumerate_support(expand=True)
Returns tensor containing all values supported by a discrete
distribution. The result will enumerate over dimension 0, so the
shape of the result will be *(cardinality,) + batch_shape +
event_shape* (where *event_shape = ()* for univariate
distributions).
Note that this enumerates over all batched tensors in lock-step
*[[0, 0], [1, 1], ...]*. With *expand=False*, enumeration
happens along dim 0, but with the remaining batch dimensions
being singleton dimensions, *[[0], [1], ...]*.
To iterate over the full Cartesian product use
*itertools.product(m.enumerate_support())*.
Parameters:
**expand** (*bool*) -- whether to expand the support over the
batch dims to match the distribution's *batch_shape*.
Returns:
Tensor iterating over dimension 0.
Return type:
*Tensor*
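For instance (a small sketch using Bernoulli, which has enumerable support):
import torch
from torch.distributions import Bernoulli

b = Bernoulli(torch.tensor([0.1, 0.9]))          # batch_shape = (2,)
print(b.enumerate_support())                     # shape (2, 2): values 0 and 1 per batch element
print(b.enumerate_support(expand=False).shape)   # torch.Size([2, 1]): singleton batch dims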
property event_shape: Size
Returns the shape of a single sample (without batching).
expand(batch_shape, _instance=None)
Returns a new distribution instance (or populates an existing
instance provided by a derived class) with batch dimensions
expanded to *batch_shape*. This method calls "expand" on the
distribution's parameters. As such, this does not allocate new
memory for the expanded distribution instance. Additionally,
this does not repeat any args checking or parameter broadcasting
in *__init__.py*, when an instance is first created.
Parameters:
* **batch_shape** (*torch.Size*) -- the desired expanded
size.
* **_instance** -- new instance provided by subclasses that
need to override *.expand*.
Returns:
New distribution instance with batch dimensions expanded to
*batch_shape*.
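For instance (a minimal sketch with a Normal distribution):
import torch
from torch.distributions import Normal

d = Normal(torch.tensor(0.0), torch.tensor(1.0))  # batch_shape = ()
d3 = d.expand(torch.Size([3]))                    # broadcasts parameters, no new memory
print(d3.batch_shape)                             # torch.Size([3])
print(d3.sample().shape)                          # torch.Size([3])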
icdf(value)
Returns the inverse cumulative density/mass function evaluated
at *value*.
Parameters:
**value** (*Tensor*) --
Return type:
*Tensor*
log_prob(value)
Returns the log of the probability density/mass function
evaluated at *value*.
Parameters:
**value** (*Tensor*) --
Return type:
*Tensor*
property mean: Tensor
Returns the mean of the distribution.
property mode: Tensor
Returns the mode of the distribution.
perplexity()
Returns perplexity of distribution, batched over batch_shape.
Returns:
Tensor of shape batch_shape.
Return type:
*Tensor*
rsample(sample_shape=torch.Size([]))
Generates a sample_shape shaped reparameterized sample or
sample_shape shaped batch of reparameterized samples if the
distribution parameters are batched.
Return type:
*Tensor*
sample(sample_shape=torch.Size([]))
Generates a sample_shape shaped sample or sample_shape shaped
batch of samples if the distribution parameters are batched.
Return type:
Tensor
sample_n(n)
Generates n samples or n batches of samples if the distribution
parameters are batched.
Return type:
*Tensor*
static set_default_validate_args(value)
Sets whether validation is enabled or disabled.
The default behavior mimics Python's "assert" statement:
validation is on by default, but is disabled if Python is run in
optimized mode (via "python -O"). Validation may be expensive,
so you may want to disable it once a model is working.
Parameters:
**value** (*bool*) -- Whether to enable validation.
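For instance (a minimal sketch; disabling validation skips argument checks such as the scale > 0 constraint):
import torch
from torch.distributions import Distribution, Normal

Distribution.set_default_validate_args(False)      # skip argument checks globally
m = Normal(torch.tensor(0.0), torch.tensor(1.0))   # constructed without validation overhead
Distribution.set_default_validate_args(True)       # re-enable while debugging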
property stddev: Tensor
Returns the standard deviation of the distribution.
property support: Optional[Any]
Returns a "Constraint" object representing this distribution's
support.
property variance: Tensor
Returns the variance of the distribution.
ExponentialFamily
class torch.distributions.exp_family.ExponentialFamily(batch_shape=torch.Size([]), event_shape=torch.Size([]), validate_args=None)
Bases: "Distribution"
ExponentialFamily is the abstract base class for probability
distributions belonging to an exponential family, whose probability
mass/density function has the form defined below
p_{F}(x; \theta) = \exp(\langle t(x), \theta\rangle - F(\theta)
+ k(x))
where \theta denotes the natural parameters, t(x) denotes the
sufficient statistic, F(\theta) is the log normalizer function for
a given family and k(x) is the carrier measure.
Note:
This class is an intermediary between the *Distribution* class
and distributions which belong to an exponential family mainly to
check the correctness of the *.entropy()* and analytic KL
divergence methods. We use this class to compute the entropy and
KL divergence using the AD framework and Bregman divergences
(courtesy of: Frank Nielsen and Richard Nock, Entropies and
Cross-entropies of Exponential Families).
entropy()
Method to compute the entropy using Bregman divergence of the
log normalizer.
Bernoulli
class torch.distributions.bernoulli.Bernoulli(probs=None, logits=None, validate_args=None)
Bases: "ExponentialFamily"
Creates a Bernoulli distribution parameterized by "probs" or
"logits" (but not both).
Samples are binary (0 or 1). They take the value 1 with
probability p and 0 with probability 1 - p.
Example:
>>> m = Bernoulli(torch.tensor([0.3]))
>>> m.sample() # 30% chance 1; 70% chance 0
tensor([ 0.])
Parameters:
* probs (Number, Tensor) -- the probability of
sampling 1
* **logits** (*Number**, **Tensor*) -- the log-odds of sampling
*1*
arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}
entropy()
enumerate_support(expand=True)
expand(batch_shape, _instance=None)
has_enumerate_support = True
log_prob(value)
property logits
property mean
property mode
property param_shape
property probs
sample(sample_shape=torch.Size([]))
support = Boolean()
property variance
Beta
class torch.distributions.beta.Beta(concentration1, concentration0, validate_args=None)
Bases: "ExponentialFamily"
Beta distribution parameterized by "concentration1" and
"concentration0".
Example:
>>> m = Beta(torch.tensor([0.5]), torch.tensor([0.5]))
>>> m.sample() # Beta distributed with concentration concentration1 and concentration0
tensor([ 0.1046])
Parameters:
* concentration1 (float or Tensor) -- 1st
concentration parameter of the distribution (often referred to
as alpha)
* **concentration0** (*float** or **Tensor*) -- 2nd
concentration parameter of the distribution (often referred to
as beta)
arg_constraints = {'concentration0': GreaterThan(lower_bound=0.0), 'concentration1': GreaterThan(lower_bound=0.0)}
property concentration0
property concentration1
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
log_prob(value)
property mean
property mode
rsample(sample_shape=())
support = Interval(lower_bound=0.0, upper_bound=1.0)
property variance
Binomial
class torch.distributions.binomial.Binomial(total_count=1, probs=None, logits=None, validate_args=None)
Bases: "Distribution"
Creates a Binomial distribution parameterized by "total_count" and
either "probs" or "logits" (but not both). "total_count" must be
broadcastable with "probs"/"logits".
Example:
>>> m = Binomial(100, torch.tensor([0 , .2, .8, 1]))
>>> x = m.sample()
tensor([ 0., 22., 71., 100.])
>>> m = Binomial(torch.tensor([[5.], [10.]]), torch.tensor([0.5, 0.8]))
>>> x = m.sample()
tensor([[ 4., 5.],
[ 7., 6.]])
Parameters:
* total_count (int or Tensor) -- number of Bernoulli
trials
* **probs** (*Tensor*) -- Event probabilities
* **logits** (*Tensor*) -- Event log-odds
arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0), 'total_count': IntegerGreaterThan(lower_bound=0)}
entropy()
enumerate_support(expand=True)
expand(batch_shape, _instance=None)
has_enumerate_support = True
log_prob(value)
property logits
property mean
property mode
property param_shape
property probs
sample(sample_shape=torch.Size([]))
property support
property variance
Categorical
class torch.distributions.categorical.Categorical(probs=None, logits=None, validate_args=None)
Bases: "Distribution" | https://pytorch.org/docs/stable/distributions.html | pytorch docs |
Bases: "Distribution"
Creates a categorical distribution parameterized by either "probs"
or "logits" (but not both).
Note:
It is equivalent to the distribution that "torch.multinomial()"
samples from.
Samples are integers from {0, \ldots, K-1} where K is
"probs.size(-1)".
If probs is 1-dimensional with length-K, each element is the
relative probability of sampling the class at that index.
If probs is N-dimensional, the first N-1 dimensions are treated
as a batch of relative probability vectors.
Note:
The *probs* argument must be non-negative, finite and have a non-
zero sum, and it will be normalized to sum to 1 along the last
dimension. "probs" will return this normalized value. The
*logits* argument will be interpreted as unnormalized log
probabilities and can therefore be any real number. It will
likewise be normalized so that the resulting probabilities sum to
1 along the last dimension. "logits" will return this normalized
value.
See also: "torch.multinomial()"
Example:
>>> m = Categorical(torch.tensor([ 0.25, 0.25, 0.25, 0.25 ]))
>>> m.sample() # equal probability of 0, 1, 2, 3
tensor(3)
Parameters:
* probs (Tensor) -- event probabilities
* **logits** (*Tensor*) -- event log probabilities
(unnormalized)
arg_constraints = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}
entropy()
enumerate_support(expand=True)
expand(batch_shape, _instance=None)
has_enumerate_support = True
log_prob(value)
property logits
property mean
property mode
property param_shape
property probs
sample(sample_shape=torch.Size([]))
property support
property variance
Cauchy
class torch.distributions.cauchy.Cauchy(loc, scale, validate_args=None)
Bases: "Distribution"
Samples from a Cauchy (Lorentz) distribution. The distribution of
the ratio of independent normally distributed random variables with
means 0 follows a Cauchy distribution.
Example:
>>> m = Cauchy(torch.tensor([0.0]), torch.tensor([1.0]))
>>> m.sample() # sample from a Cauchy distribution with loc=0 and scale=1
tensor([ 2.3214])
Parameters:
* loc (float or Tensor) -- mode or median of the
distribution.
* **scale** (*float** or **Tensor*) -- half width at half
maximum.
arg_constraints = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}
cdf(value)
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
icdf(value)
log_prob(value)
property mean
property mode
rsample(sample_shape=torch.Size([]))
support = Real()
property variance
Chi2
class torch.distributions.chi2.Chi2(df, validate_args=None)
Bases: "Gamma"
Creates a Chi-squared distribution parameterized by shape parameter
"df". This is exactly equivalent to "Gamma(alpha=0.5*df, beta=0.5)"
Example:
>>> m = Chi2(torch.tensor([1.0]))
>>> m.sample() # Chi2 distributed with shape df=1
tensor([ 0.1046])
Parameters:
df (float or Tensor) -- shape parameter of the
distribution
arg_constraints = {'df': GreaterThan(lower_bound=0.0)}
property df
expand(batch_shape, _instance=None)
ContinuousBernoulli
class torch.distributions.continuous_bernoulli.ContinuousBernoulli(probs=None, logits=None, lims=(0.499, 0.501), validate_args=None)
Bases: "ExponentialFamily"
Creates a continuous Bernoulli distribution parameterized by
"probs" or "logits" (but not both).
The distribution is supported in [0, 1] and parameterized by
'probs' (in (0,1)) or 'logits' (real-valued). Note that, unlike the
Bernoulli, 'probs' does not correspond to a probability and
'logits' does not correspond to log-odds, but the same names are
used due to the similarity with the Bernoulli. See [1] for more
details.
Example:
>>> m = ContinuousBernoulli(torch.tensor([0.3]))
>>> m.sample()
tensor([ 0.2538])
Parameters:
* probs (Number, Tensor) -- (0,1) valued parameters
* **logits** (*Number**, **Tensor*) -- real valued parameters
whose sigmoid matches 'probs'
[1] The continuous Bernoulli: fixing a pervasive error in
variational autoencoders, Loaiza-Ganem G and Cunningham JP, NeurIPS
2019. https://arxiv.org/abs/1907.06845
arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}
cdf(value)
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
icdf(value)
log_prob(value)
property logits
property mean
property param_shape
property probs
rsample(sample_shape=torch.Size([]))
sample(sample_shape=torch.Size([]))
property stddev
support = Interval(lower_bound=0.0, upper_bound=1.0)
property variance
Dirichlet
class torch.distributions.dirichlet.Dirichlet(concentration, validate_args=None)
Bases: "ExponentialFamily"
Creates a Dirichlet distribution parameterized by concentration
"concentration".
Example:
>>> m = Dirichlet(torch.tensor([0.5, 0.5]))
>>> m.sample() # Dirichlet distributed with concentration [0.5, 0.5]
tensor([ 0.1046, 0.8954])
Parameters:
concentration (Tensor) -- concentration parameter of the
distribution (often referred to as alpha)
arg_constraints = {'concentration': IndependentConstraint(GreaterThan(lower_bound=0.0), 1)}
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
log_prob(value)
property mean
property mode
rsample(sample_shape=())
support = Simplex()
property variance
Exponential
class torch.distributions.exponential.Exponential(rate, validate_args=None)
Bases: "ExponentialFamily" | https://pytorch.org/docs/stable/distributions.html | pytorch docs |
Bases: "ExponentialFamily"
Creates a Exponential distribution parameterized by "rate".
Example:
>>> m = Exponential(torch.tensor([1.0]))
>>> m.sample() # Exponential distributed with rate=1
tensor([ 0.1046])
Parameters:
rate (float or Tensor) -- rate = 1 / scale of the
distribution
arg_constraints = {'rate': GreaterThan(lower_bound=0.0)}
cdf(value)
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
icdf(value)
log_prob(value)
property mean
property mode
rsample(sample_shape=torch.Size([]))
property stddev
support = GreaterThanEq(lower_bound=0.0)
property variance
FisherSnedecor
class torch.distributions.fishersnedecor.FisherSnedecor(df1, df2, validate_args=None)
Bases: "Distribution"
Creates a Fisher-Snedecor distribution parameterized by "df1" and
"df2".
Example:
>>> m = FisherSnedecor(torch.tensor([1.0]), torch.tensor([2.0]))
>>> m.sample() # Fisher-Snedecor-distributed with df1=1 and df2=2
tensor([ 0.2453])
Parameters:
* df1 (float or Tensor) -- degrees of freedom
parameter 1
* **df2** (*float** or **Tensor*) -- degrees of freedom
parameter 2
arg_constraints = {'df1': GreaterThan(lower_bound=0.0), 'df2': GreaterThan(lower_bound=0.0)}
expand(batch_shape, _instance=None)
has_rsample = True
log_prob(value)
property mean
property mode
rsample(sample_shape=torch.Size([]))
support = GreaterThan(lower_bound=0.0)
property variance
Gamma
class torch.distributions.gamma.Gamma(concentration, rate, validate_args=None)
Bases: "ExponentialFamily"
Creates a Gamma distribution parameterized by shape "concentration"
and "rate".
Example:
>>> m = Gamma(torch.tensor([1.0]), torch.tensor([1.0]))
>>> m.sample() # Gamma distributed with concentration=1 and rate=1
tensor([ 0.1046])
Parameters:
* concentration (float or Tensor) -- shape parameter
of the distribution (often referred to as alpha)
* **rate** (*float** or **Tensor*) -- rate = 1 / scale of the
distribution (often referred to as beta)
arg_constraints = {'concentration': GreaterThan(lower_bound=0.0), 'rate': GreaterThan(lower_bound=0.0)}
cdf(value)
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
log_prob(value)
property mean
property mode
rsample(sample_shape=torch.Size([]))
support = GreaterThanEq(lower_bound=0.0)
property variance
Geometric
class torch.distributions.geometric.Geometric(probs=None, logits=None, validate_args=None)
Bases: "Distribution"
Creates a Geometric distribution parameterized by "probs", where
"probs" is the probability of success of Bernoulli trials. It
represents the probability that in k + 1 Bernoulli trials, the
first k trials failed, before seeing a success.
Samples are non-negative integers [0, \infty).
Example:
>>> m = Geometric(torch.tensor([0.3]))
>>> m.sample() # underlying Bernoulli has 30% chance 1; 70% chance 0
tensor([ 2.])
Parameters:
* probs (Number, Tensor) -- the probability of
sampling 1. Must be in range (0, 1]
* **logits** (*Number**, **Tensor*) -- the log-odds of sampling
*1*.
arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}
entropy()
expand(batch_shape, _instance=None)
log_prob(value)
property logits
property mean
property mode
property probs
sample(sample_shape=torch.Size([]))
support = IntegerGreaterThan(lower_bound=0)
property variance
Gumbel
class torch.distributions.gumbel.Gumbel(loc, scale, validate_args=None)
Bases: "TransformedDistribution"
Samples from a Gumbel Distribution.
Examples:
>>> m = Gumbel(torch.tensor([1.0]), torch.tensor([2.0]))
>>> m.sample() # sample from Gumbel distribution with loc=1, scale=2
tensor([ 1.0124])
Parameters:
* loc (float or Tensor) -- Location parameter of the
distribution
* **scale** (*float** or **Tensor*) -- Scale parameter of the
distribution
arg_constraints: Dict[str, constraints.Constraint] = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}
entropy()
expand(batch_shape, _instance=None)
log_prob(value)
property mean
property mode
property stddev
support = Real()
property variance
HalfCauchy
class torch.distributions.half_cauchy.HalfCauchy(scale, validate_args=None)
Bases: "TransformedDistribution"
Creates a half-Cauchy distribution parameterized by scale where:
X ~ Cauchy(0, scale)
Y = |X| ~ HalfCauchy(scale)
Example:
>>> m = HalfCauchy(torch.tensor([1.0]))
>>> m.sample() # half-cauchy distributed with scale=1
tensor([ 2.3214])
Parameters:
scale (float or Tensor) -- scale of the full Cauchy
distribution
arg_constraints: Dict[str, constraints.Constraint] = {'scale': GreaterThan(lower_bound=0.0)}
cdf(value)
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
icdf(prob)
log_prob(value)
property mean
property mode
property scale
support = GreaterThanEq(lower_bound=0.0)
property variance
HalfNormal
class torch.distributions.half_normal.HalfNormal(scale, validate_args=None)
Bases: "TransformedDistribution"
Creates a half-normal distribution parameterized by scale where:
X ~ Normal(0, scale)
Y = |X| ~ HalfNormal(scale)
Example:
>>> m = HalfNormal(torch.tensor([1.0]))
>>> m.sample() # half-normal distributed with scale=1
tensor([ 0.1046])
Parameters:
scale (float or Tensor) -- scale of the full Normal
distribution
arg_constraints: Dict[str, constraints.Constraint] = {'scale': GreaterThan(lower_bound=0.0)}
cdf(value)
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
icdf(prob)
log_prob(value)
property mean
property mode
property scale
support = GreaterThanEq(lower_bound=0.0)
property variance
Independent
class torch.distributions.independent.Independent(base_distribution, reinterpreted_batch_ndims, validate_args=None)
Bases: "Distribution"
Reinterprets some of the batch dims of a distribution as event
dims.
This is mainly useful for changing the shape of the result of
"log_prob()". For example to create a diagonal Normal distribution
with the same shape as a Multivariate Normal distribution (so they
are interchangeable), you can:
>>> from torch.distributions.multivariate_normal import MultivariateNormal
>>> from torch.distributions.normal import Normal
>>> loc = torch.zeros(3)
>>> scale = torch.ones(3)
>>> mvn = MultivariateNormal(loc, scale_tril=torch.diag(scale))
>>> [mvn.batch_shape, mvn.event_shape]
[torch.Size([]), torch.Size([3])]
>>> normal = Normal(loc, scale)
>>> [normal.batch_shape, normal.event_shape]
[torch.Size([3]), torch.Size([])]
>>> diagn = Independent(normal, 1)
>>> [diagn.batch_shape, diagn.event_shape]
[torch.Size([]), torch.Size([3])]
Parameters:
* base_distribution
(torch.distributions.distribution.Distribution) -- a base
distribution
* **reinterpreted_batch_ndims** (*int*) -- the number of batch
dims to reinterpret as event dims
arg_constraints: Dict[str, Constraint] = {}
entropy()
enumerate_support(expand=True)
expand(batch_shape, _instance=None)
property has_enumerate_support
property has_rsample
log_prob(value)
property mean
property mode
rsample(sample_shape=torch.Size([]))
sample(sample_shape=torch.Size([]))
property support
property variance
Kumaraswamy
class torch.distributions.kumaraswamy.Kumaraswamy(concentration1, concentration0, validate_args=None)
Bases: "TransformedDistribution"
Samples from a Kumaraswamy distribution.
Example:
>>> m = Kumaraswamy(torch.tensor([1.0]), torch.tensor([1.0]))
>>> m.sample() # sample from a Kumaraswamy distribution with concentration alpha=1 and beta=1
tensor([ 0.1729])
Parameters:
* concentration1 (float or Tensor) -- 1st
concentration parameter of the distribution (often referred to
as alpha)
* **concentration0** (*float** or **Tensor*) -- 2nd
concentration parameter of the distribution (often referred to
as beta)
arg_constraints: Dict[str, constraints.Constraint] = {'concentration0': GreaterThan(lower_bound=0.0), 'concentration1': GreaterThan(lower_bound=0.0)}
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
property mean
property mode
support = Interval(lower_bound=0.0, upper_bound=1.0)
property variance
LKJCholesky
class torch.distributions.lkj_cholesky.LKJCholesky(dim, concentration=1.0, validate_args=None)
Bases: "Distribution"
LKJ distribution for lower Cholesky factor of correlation matrices.
The distribution is controlled by "concentration" parameter \eta to
make the probability of the correlation matrix M generated from a
Cholesky factor proportional to \det(M)^{\eta - 1}. Because of
that, when "concentration == 1", we have a uniform distribution
over Cholesky factors of correlation matrices:
L ~ LKJCholesky(dim, concentration)
X = L @ L' ~ LKJCorr(dim, concentration)
Note that this distribution samples the Cholesky factor of
correlation matrices and not the correlation matrices themselves
and thereby differs slightly from the derivations in [1] for the
LKJCorr distribution. For sampling, this uses the Onion method
from [1] Section 3.
Example:
>>> l = LKJCholesky(3, 0.5)
>>> l.sample() # l @ l.T is a sample of a correlation 3x3 matrix
tensor([[ 1.0000, 0.0000, 0.0000],
[ 0.3516, 0.9361, 0.0000],
[-0.1899, 0.4748, 0.8593]])
Parameters:
* dim (int) -- dimension of the matrices
* **concentration** (*float** or **Tensor*) --
concentration/shape parameter of the distribution (often
referred to as eta)
References
[1] Generating random correlation matrices based on vines and
extended onion method (2009), Daniel Lewandowski, Dorota
Kurowicka, Harry Joe. Journal of Multivariate Analysis. 100.
10.1016/j.jmva.2009.04.008
arg_constraints = {'concentration': GreaterThan(lower_bound=0.0)}
expand(batch_shape, _instance=None)
log_prob(value)
sample(sample_shape=torch.Size([]))
support = CorrCholesky()
Laplace
class torch.distributions.laplace.Laplace(loc, scale, validate_args=None)
Bases: "Distribution"
Creates a Laplace distribution parameterized by "loc" and "scale".
Example:
>>> m = Laplace(torch.tensor([0.0]), torch.tensor([1.0]))
>>> m.sample() # Laplace distributed with loc=0, scale=1
tensor([ 0.1046])
Parameters:
* loc (float or Tensor) -- mean of the distribution
* **scale** (*float** or **Tensor*) -- scale of the distribution
arg_constraints = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}
cdf(value)
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
icdf(value)
log_prob(value)
property mean
property mode
rsample(sample_shape=torch.Size([]))
property stddev
support = Real()
property variance
LogNormal
class torch.distributions.log_normal.LogNormal(loc, scale, validate_args=None)
Bases: "TransformedDistribution"
Creates a log-normal distribution parameterized by "loc" and
"scale" where:
X ~ Normal(loc, scale)
Y = exp(X) ~ LogNormal(loc, scale)
Example:
>>> m = LogNormal(torch.tensor([0.0]), torch.tensor([1.0]))
>>> m.sample() # log-normal distributed with mean=0 and stddev=1
tensor([ 0.1046])
Parameters:
* loc (float or Tensor) -- mean of log of distribution
* **scale** (*float** or **Tensor*) -- standard deviation of log
of the distribution
arg_constraints: Dict[str, constraints.Constraint] = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
property loc
property mean
property mode
property scale
support = GreaterThan(lower_bound=0.0)
property variance
LowRankMultivariateNormal
class torch.distributions.lowrank_multivariate_normal.LowRankMultivariateNormal(loc, cov_factor, cov_diag, validate_args=None)
Bases: "Distribution"
Creates a multivariate normal distribution with covariance matrix
having a low-rank form parameterized by "cov_factor" and
"cov_diag":
covariance_matrix = cov_factor @ cov_factor.T + cov_diag
-[ Example ]-
m = LowRankMultivariateNormal(torch.zeros(2), torch.tensor([[1.], [0.]]), torch.ones(2))
m.sample() # normally distributed with mean=[0,0], cov_factor=[[1],[0]], cov_diag=[1,1]
tensor([-0.2102, -0.5429])
Parameters:
* loc (Tensor) -- mean of the distribution with shape
batch_shape + event_shape
* **cov_factor** (*Tensor*) -- factor part of low-rank form of
covariance matrix with shape batch_shape + event_shape +
(rank,)
* **cov_diag** (*Tensor*) -- diagonal part of low-rank form of
covariance matrix with shape *batch_shape + event_shape*
Note:
The computation for determinant and inverse of covariance matrix
is avoided when *cov_factor.shape[1] << cov_factor.shape[0]*
thanks to Woodbury matrix identity and matrix determinant lemma.
Thanks to these formulas, we just need to compute the determinant
and inverse of the small size "capacitance" matrix:
capacitance = I + cov_factor.T @ inv(cov_diag) @ cov_factor
arg_constraints = {'cov_diag': IndependentConstraint(GreaterThan(lower_bound=0.0), 1), 'cov_factor': IndependentConstraint(Real(), 2), 'loc': IndependentConstraint(Real(), 1)}
property covariance_matrix
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
log_prob(value)
property mean
property mode
property precision_matrix
rsample(sample_shape=torch.Size([]))
property scale_tril
support = IndependentConstraint(Real(), 1)
property variance
MixtureSameFamily
class torch.distributions.mixture_same_family.MixtureSameFamily(mixture_distribution, component_distribution, validate_args=None)
Bases: "Distribution"
The MixtureSameFamily distribution implements a (batch of)
mixture distribution where all components are from different
parameterizations of the same distribution type. It is
parameterized by a Categorical "selecting distribution" (over k
components) and a component distribution, i.e., a Distribution
with a rightmost batch shape (equal to [k]) which indexes each
(batch of) component.
Examples:
>>> # Construct Gaussian Mixture Model in 1D consisting of 5 equally
>>> # weighted normal distributions
>>> mix = D.Categorical(torch.ones(5,))
>>> comp = D.Normal(torch.randn(5,), torch.rand(5,))
>>> gmm = MixtureSameFamily(mix, comp)
>>> # Construct Gaussian Mixture Model in 2D consisting of 5 equally
>>> # weighted bivariate normal distributions
>>> mix = D.Categorical(torch.ones(5,))
>>> comp = D.Independent(D.Normal(
... torch.randn(5,2), torch.rand(5,2)), 1)
>>> gmm = MixtureSameFamily(mix, comp)
>>> # Construct a batch of 3 Gaussian Mixture Models in 2D each
>>> # consisting of 5 random weighted bivariate normal distributions
>>> mix = D.Categorical(torch.rand(3,5))
>>> comp = D.Independent(D.Normal(
... torch.randn(3,5,2), torch.rand(3,5,2)), 1)
>>> gmm = MixtureSameFamily(mix, comp)
Parameters:
* mixture_distribution --
torch.distributions.Categorical-like instance. Manages the
probability of selecting component. The number of categories
must match the rightmost batch dimension of the
component_distribution. Must have either scalar
batch_shape or batch_shape matching
component_distribution.batch_shape[:-1]
* **component_distribution** --
*torch.distributions.Distribution*-like instance. Right-most
batch dimension indexes component.
arg_constraints: Dict[str, Constraint] = {}
cdf(x)
property component_distribution
expand(batch_shape, _instance=None)
has_rsample = False
log_prob(x)
property mean
property mixture_distribution
sample(sample_shape=torch.Size([]))
property support
property variance
Multinomial
class torch.distributions.multinomial.Multinomial(total_count=1, probs=None, logits=None, validate_args=None)
Bases: "Distribution"
Creates a Multinomial distribution parameterized by "total_count"
and either "probs" or "logits" (but not both). The innermost | https://pytorch.org/docs/stable/distributions.html | pytorch docs |
dimension of "probs" indexes over categories. All other dimensions
index over batches.
Note that "total_count" need not be specified if only "log_prob()"
is called (see example below)
Note:
The *probs* argument must be non-negative, finite and have a non-
zero sum, and it will be normalized to sum to 1 along the last
dimension. "probs" will return this normalized value. The
*logits* argument will be interpreted as unnormalized log
probabilities and can therefore be any real number. It will
likewise be normalized so that the resulting probabilities sum to
1 along the last dimension. "logits" will return this normalized
value.
"sample()" requires a single shared total_count for all
parameters and samples.
"log_prob()" allows different total_count for each parameter
and sample.
Example:
>>> m = Multinomial(100, torch.tensor([ 1., 1., 1., 1.]))
>>> x = m.sample() # equal probability of 0, 1, 2, 3
tensor([ 21., 24., 30., 25.])
>>> Multinomial(probs=torch.tensor([1., 1., 1., 1.])).log_prob(x)
tensor([-4.1338])
Parameters:
* total_count (int) -- number of trials
* **probs** (*Tensor*) -- event probabilities
* **logits** (*Tensor*) -- event log probabilities
(unnormalized)
arg_constraints = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}
entropy()
expand(batch_shape, _instance=None)
log_prob(value)
property logits
property mean
property param_shape
property probs
sample(sample_shape=torch.Size([]))
property support
total_count: int
property variance
MultivariateNormal
class torch.distributions.multivariate_normal.MultivariateNormal(loc, covariance_matrix=None, precision_matrix=None, scale_tril=None, validate_args=None)
Bases: "Distribution"
Creates a multivariate normal (also called Gaussian) distribution
parameterized by a mean vector and a covariance matrix.
The multivariate normal distribution can be parameterized either in
terms of a positive definite covariance matrix \mathbf{\Sigma} or a
positive definite precision matrix \mathbf{\Sigma}^{-1} or a lower-
triangular matrix \mathbf{L} with positive-valued diagonal entries,
such that \mathbf{\Sigma} = \mathbf{L}\mathbf{L}^\top. This
triangular matrix can be obtained via e.g. Cholesky decomposition
of the covariance.
-[ Example ]-
m = MultivariateNormal(torch.zeros(2), torch.eye(2))
m.sample() # normally distributed with mean=[0,0] and covariance_matrix=I
tensor([-0.2102, -0.5429])
Parameters:
* loc (Tensor) -- mean of the distribution
* **covariance_matrix** (*Tensor*) -- positive-definite
covariance matrix
* **precision_matrix** (*Tensor*) -- positive-definite precision
matrix
* **scale_tril** (*Tensor*) -- lower-triangular factor of
covariance, with positive-valued diagonal
Note:
Only one of "covariance_matrix" or "precision_matrix" or
"scale_tril" can be specified.Using "scale_tril" will be more
efficient: all computations internally are based on "scale_tril".
If "covariance_matrix" or "precision_matrix" is passed instead,
it is only used to compute the corresponding lower triangular
matrices using a Cholesky decomposition.
arg_constraints = {'covariance_matrix': PositiveDefinite(), 'loc': IndependentConstraint(Real(), 1), 'precision_matrix': PositiveDefinite(), 'scale_tril': LowerCholesky()}
property covariance_matrix
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
log_prob(value)
property mean
property mode
property precision_matrix
rsample(sample_shape=torch.Size([]))
property scale_tril
support = IndependentConstraint(Real(), 1)
property variance
NegativeBinomial
class torch.distributions.negative_binomial.NegativeBinomial(total_count, probs=None, logits=None, validate_args=None)
Bases: "Distribution"
Creates a Negative Binomial distribution, i.e. distribution of the
number of successful independent and identical Bernoulli trials
before "total_count" failures are achieved. The probability of
success of each Bernoulli trial is "probs".
Parameters:
* total_count (float or Tensor) -- non-negative number
of negative Bernoulli trials to stop, although the
distribution is still valid for real valued count
* **probs** (*Tensor*) -- Event probabilities of success in the
half open interval [0, 1)
* **logits** (*Tensor*) -- Event log-odds for probabilities of
success
arg_constraints = {'logits': Real(), 'probs': HalfOpenInterval(lower_bound=0.0, upper_bound=1.0), 'total_count': GreaterThanEq(lower_bound=0)}
expand(batch_shape, _instance=None)
log_prob(value)
property logits
property mean
property mode
property param_shape
property probs
sample(sample_shape=torch.Size([]))
support = IntegerGreaterThan(lower_bound=0)
property variance
Normal
class torch.distributions.normal.Normal(loc, scale, validate_args=None)
Bases: "ExponentialFamily"
Creates a normal (also called Gaussian) distribution parameterized
by "loc" and "scale".
Example:
>>> m = Normal(torch.tensor([0.0]), torch.tensor([1.0]))
>>> m.sample() # normally distributed with loc=0 and scale=1
tensor([ 0.1046])
Parameters:
* loc (float or Tensor) -- mean of the distribution
(often referred to as mu)
* **scale** (*float** or **Tensor*) -- standard deviation of the
distribution (often referred to as sigma)
arg_constraints = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}
cdf(value)
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
icdf(value)
log_prob(value)
property mean
property mode
rsample(sample_shape=torch.Size([]))
sample(sample_shape=torch.Size([]))
property stddev
support = Real()
property variance
OneHotCategorical
class torch.distributions.one_hot_categorical.OneHotCategorical(probs=None, logits=None, validate_args=None)
Bases: "Distribution"
Creates a one-hot categorical distribution parameterized by "probs"
or "logits".
Samples are one-hot coded vectors of size "probs.size(-1)".
Note:
The *probs* argument must be non-negative, finite and have a non-
zero sum, and it will be normalized to sum to 1 along the last
dimension. "probs" will return this normalized value. The
*logits* argument will be interpreted as unnormalized log
probabilities and can therefore be any real number. It will
| https://pytorch.org/docs/stable/distributions.html | pytorch docs |
likewise be normalized so that the resulting probabilities sum to
1 along the last dimension. "logits" will return this normalized
value.
See also: "torch.distributions.Categorical()" for specifications of
"probs" and "logits".
Example:
>>> m = OneHotCategorical(torch.tensor([ 0.25, 0.25, 0.25, 0.25 ]))
>>> m.sample() # equal probability of 0, 1, 2, 3
tensor([ 0., 0., 0., 1.])
Parameters:
* probs (Tensor) -- event probabilities
* **logits** (*Tensor*) -- event log probabilities
(unnormalized)
arg_constraints = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}
entropy()
enumerate_support(expand=True)
expand(batch_shape, _instance=None)
has_enumerate_support = True
log_prob(value)
property logits
property mean
property mode
property param_shape
property probs
sample(sample_shape=torch.Size([]))
support = OneHot()
property variance
Pareto
class torch.distributions.pareto.Pareto(scale, alpha, validate_args=None)
Bases: "TransformedDistribution"
Samples from a Pareto Type 1 distribution.
Example:
>>> m = Pareto(torch.tensor([1.0]), torch.tensor([1.0]))
>>> m.sample() # sample from a Pareto distribution with scale=1 and alpha=1
tensor([ 1.5623])
Parameters:
* scale (float or Tensor) -- Scale parameter of the
distribution
* **alpha** (*float** or **Tensor*) -- Shape parameter of the
distribution
arg_constraints: Dict[str, constraints.Constraint] = {'alpha': GreaterThan(lower_bound=0.0), 'scale': GreaterThan(lower_bound=0.0)}
entropy()
expand(batch_shape, _instance=None)
property mean
property mode
property support
property variance
Poisson
class torch.distributions.poisson.Poisson(rate, validate_args=None)
Bases: "ExponentialFamily" | https://pytorch.org/docs/stable/distributions.html | pytorch docs |
Bases: "ExponentialFamily"
Creates a Poisson distribution parameterized by "rate", the rate
parameter.
Samples are nonnegative integers, with a pmf given by
\mathrm{rate}^k \frac{e^{-\mathrm{rate}}}{k!}
Example:
>>> m = Poisson(torch.tensor([4]))
>>> m.sample()
tensor([ 3.])
Parameters:
rate (Number, Tensor) -- the rate parameter
arg_constraints = {'rate': GreaterThanEq(lower_bound=0.0)}
expand(batch_shape, _instance=None)
log_prob(value)
property mean
property mode
sample(sample_shape=torch.Size([]))
support = IntegerGreaterThan(lower_bound=0)
property variance
RelaxedBernoulli
class torch.distributions.relaxed_bernoulli.RelaxedBernoulli(temperature, probs=None, logits=None, validate_args=None)
Bases: "TransformedDistribution"
Creates a RelaxedBernoulli distribution, parametrized by
"temperature", and either "probs" or "logits" (but not both). This | https://pytorch.org/docs/stable/distributions.html | pytorch docs |
is a relaxed version of the Bernoulli distribution, so the values
are in (0, 1), and has reparametrizable samples.
Example:
>>> m = RelaxedBernoulli(torch.tensor([2.2]),
... torch.tensor([0.1, 0.2, 0.3, 0.99]))
>>> m.sample()
tensor([ 0.2951, 0.3442, 0.8918, 0.9021])
Parameters:
* temperature (Tensor) -- relaxation temperature
* **probs** (*Number**, **Tensor*) -- the probability of
sampling *1*
* **logits** (*Number**, **Tensor*) -- the log-odds of sampling
*1*
arg_constraints: Dict[str, constraints.Constraint] = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}
expand(batch_shape, _instance=None)
has_rsample = True
property logits
property probs
support = Interval(lower_bound=0.0, upper_bound=1.0)
property temperature
LogitRelaxedBernoulli
class torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli(temperature, probs=None, logits=None, validate_args=None)
Bases: "Distribution"
Creates a LogitRelaxedBernoulli distribution parameterized by
"probs" or "logits" (but not both), which is the logit of a
RelaxedBernoulli distribution.
Samples are logits of values in (0, 1). See [1] for more details.
Parameters:
* temperature (Tensor) -- relaxation temperature
* **probs** (*Number**, **Tensor*) -- the probability of
sampling *1*
* **logits** (*Number**, **Tensor*) -- the log-odds of sampling
*1*
[1] The Concrete Distribution: A Continuous Relaxation of Discrete
Random Variables (Maddison et al, 2017)
[2] Categorical Reparametrization with Gumbel-Softmax (Jang et al,
2017)
arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}
expand(batch_shape, _instance=None)
log_prob(value)
property logits
property param_shape
property probs
rsample(sample_shape=torch.Size([]))
support = Real()
RelaxedOneHotCategorical
class torch.distributions.relaxed_categorical.RelaxedOneHotCategorical(temperature, probs=None, logits=None, validate_args=None)
Bases: "TransformedDistribution"
Creates a RelaxedOneHotCategorical distribution parametrized by
"temperature", and either "probs" or "logits". This is a relaxed
version of the "OneHotCategorical" distribution, so its samples are
on simplex, and are reparametrizable.
Example:
>>> m = RelaxedOneHotCategorical(torch.tensor([2.2]),
... torch.tensor([0.1, 0.2, 0.3, 0.4]))
>>> m.sample()
tensor([ 0.1294, 0.2324, 0.3859, 0.2523])
Parameters:
* temperature (Tensor) -- relaxation temperature
* **probs** (*Tensor*) -- event probabilities
logits (Tensor) -- unnormalized log probability for each
event
arg_constraints: Dict[str, constraints.Constraint] = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}
expand(batch_shape, _instance=None)
has_rsample = True
property logits
property probs
support = Simplex()
property temperature
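Because "has_rsample = True", samples drawn with "rsample()" are reparameterized and carry gradients back to the parameters; a small sketch (parameter values are illustrative):

import torch
from torch.distributions import RelaxedOneHotCategorical

logits = torch.zeros(4, requires_grad=True)
m = RelaxedOneHotCategorical(torch.tensor(0.5), logits=logits)

sample = m.rsample()                       # differentiable w.r.t. logits
loss = (sample * torch.arange(4.0)).sum()
loss.backward()
print(logits.grad)                         # gradient flows through the sample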
StudentT
class torch.distributions.studentT.StudentT(df, loc=0.0, scale=1.0, validate_args=None)
Bases: "Distribution"
Creates a Student's t-distribution parameterized by degree of
freedom "df", mean "loc" and scale "scale".
Example:
>>> m = StudentT(torch.tensor([2.0]))
>>> m.sample() # Student's t-distributed with degrees of freedom=2
tensor([ 0.1046])
Parameters:
* df (float or Tensor) -- degrees of freedom
* **loc** (*float** or **Tensor*) -- mean of the distribution
* **scale** (*float** or **Tensor*) -- scale of the distribution
arg_constraints = {'df': GreaterThan(lower_bound=0.0), 'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
log_prob(value)
property mean
property mode
rsample(sample_shape=torch.Size([]))
support = Real()
property variance
TransformedDistribution
class torch.distributions.transformed_distribution.TransformedDistribution(base_distribution, transforms, validate_args=None)
Bases: "Distribution"
Extension of the Distribution class, which applies a sequence of
Transforms to a base distribution. Let f be the composition of
transforms applied:
X ~ BaseDistribution
Y = f(X) ~ TransformedDistribution(BaseDistribution, f)
log p(Y) = log p(X) + log |det (dX/dY)|
Note that the ".event_shape" of a "TransformedDistribution" is the
maximum shape of its base distribution and its transforms, since
transforms can introduce correlations among events.
An example for the usage of "TransformedDistribution" would be:
# Building a Logistic Distribution
# X ~ Uniform(0, 1)
# f = a + b * logit(X)
# Y ~ f(X) ~ Logistic(a, b)
base_distribution = Uniform(0, 1)
transforms = [SigmoidTransform().inv, AffineTransform(loc=a, scale=b)]
logistic = TransformedDistribution(base_distribution, transforms)
For more examples, please look at the implementations of "Gumbel",
"HalfCauchy", "HalfNormal", "LogNormal", "Pareto", "Weibull",
"RelaxedBernoulli" and "RelaxedOneHotCategorical"
arg_constraints: Dict[str, Constraint] = {}
cdf(value)
Computes the cumulative distribution function by inverting the
transform(s) and computing the score of the base distribution.
expand(batch_shape, _instance=None)
property has_rsample
icdf(value)
Computes the inverse cumulative distribution function using
transform(s) and computing the score of the base distribution.
log_prob(value)
Scores the sample by inverting the transform(s) and computing
the score using the score of the base distribution and the log
abs det jacobian.
rsample(sample_shape=torch.Size([]))
Generates a sample_shape shaped reparameterized sample or
sample_shape shaped batch of reparameterized samples if the
distribution parameters are batched. Samples first from base
distribution and applies *transform()* for every transform in
the list.
sample(sample_shape=torch.Size([]))
Generates a sample_shape shaped sample or sample_shape shaped
batch of samples if the distribution parameters are batched.
Samples first from base distribution and applies *transform()*
for every transform in the list.
property support
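A runnable version of the logistic construction above, with concrete values substituted for a and b (chosen only for illustration):

import torch
from torch.distributions import Uniform, TransformedDistribution
from torch.distributions.transforms import SigmoidTransform, AffineTransform

a, b = 0.0, 1.0  # location and scale of the resulting logistic
base_distribution = Uniform(0, 1)
transforms = [SigmoidTransform().inv, AffineTransform(loc=a, scale=b)]
logistic = TransformedDistribution(base_distribution, transforms)

x = logistic.sample((5,))
print(logistic.log_prob(x))  # log-density under the standard logistic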
Uniform
class torch.distributions.uniform.Uniform(low, high, validate_args=None)
Bases: "Distribution"
Generates uniformly distributed random samples from the half-open
interval "[low, high)".
Example:
>>> m = Uniform(torch.tensor([0.0]), torch.tensor([5.0]))
>>> m.sample() # uniformly distributed in the range [0.0, 5.0)
tensor([ 2.3418])
Parameters:
* low (float or Tensor) -- lower range (inclusive).
* **high** (*float** or **Tensor*) -- upper range (exclusive).
arg_constraints = {'high': Dependent(), 'low': Dependent()}
cdf(value)
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
icdf(value)
log_prob(value)
property mean
property mode
rsample(sample_shape=torch.Size([]))
property stddev
property support
property variance
VonMises
class torch.distributions.von_mises.VonMises(loc, concentration, validate_args=None)
Bases: "Distribution"
A circular von Mises distribution.
This implementation uses polar coordinates. The "loc" and "value"
args can be any real number (to facilitate unconstrained
optimization), but are interpreted as angles modulo 2 pi.
Example:
>>> m = VonMises(torch.tensor([1.0]), torch.tensor([1.0]))
>>> m.sample() # von Mises distributed with loc=1 and concentration=1
tensor([1.9777])
Parameters:
* loc (torch.Tensor) -- an angle in radians.
* **concentration** (*torch.Tensor*) -- concentration parameter
arg_constraints = {'concentration': GreaterThan(lower_bound=0.0), 'loc': Real()}
expand(batch_shape)
has_rsample = False
log_prob(value)
property mean
The provided mean is the circular one.
property mode
sample(sample_shape=torch.Size([]))
The sampling algorithm for the von Mises distribution is based
on the following paper: Best, D. J., and Nicholas I. Fisher.
"Efficient simulation of the von Mises distribution." Applied
Statistics (1979): 152-157.
support = Real()
property variance
The provided variance is the circular one.
Weibull
class torch.distributions.weibull.Weibull(scale, concentration, validate_args=None)
Bases: "TransformedDistribution"
Samples from a two-parameter Weibull distribution.
Example:
>>> m = Weibull(torch.tensor([1.0]), torch.tensor([1.0]))
>>> m.sample()  # sample from a Weibull distribution with scale=1, concentration=1
tensor([ 0.4784])
Parameters:
* scale (float or Tensor) -- Scale parameter of
distribution (lambda).
* **concentration** (*float** or **Tensor*) -- Concentration
parameter of distribution (k/shape).
arg_constraints: Dict[str, constraints.Constraint] = {'concentration': GreaterThan(lower_bound=0.0), 'scale': GreaterThan(lower_bound=0.0)}
entropy()
expand(batch_shape, _instance=None)
property mean
property mode
support = GreaterThan(lower_bound=0.0)
property variance
Wishart
class torch.distributions.wishart.Wishart(df, covariance_matrix=None, precision_matrix=None, scale_tril=None, validate_args=None)
Bases: "ExponentialFamily"
Creates a Wishart distribution parameterized by a symmetric
positive definite matrix \Sigma, or its Cholesky decomposition
\mathbf{\Sigma} = \mathbf{L}\mathbf{L}^\top
Example:
>>> m = Wishart(torch.tensor([2.0]), covariance_matrix=torch.eye(2))
>>> m.sample()  # Wishart distributed with mean=df * I and
>>> # variance(x_ij)=df for i != j and variance(x_ij)=2 * df for i == j
Parameters:
* covariance_matrix (Tensor) -- positive-definite
covariance matrix
* **precision_matrix** (*Tensor*) -- positive-definite precision
matrix
* **scale_tril** (*Tensor*) -- lower-triangular factor of
covariance, with positive-valued diagonal
* **df** (*float** or **Tensor*) -- real-valued parameter larger
than the (dimension of the square matrix) - 1
Note:
Only one of "covariance_matrix" or "precision_matrix" or
"scale_tril" can be specified. Using "scale_tril" will be more
efficient: all computations internally are based on "scale_tril".
If "covariance_matrix" or "precision_matrix" is passed instead,
it is only used to compute the corresponding lower triangular
matrices using a Cholesky decomposition.
'torch.distributions.LKJCholesky' is a restricted Wishart
distribution.[1]
References
[1] Wang, Z., Wu, Y. and Chu, H., 2018. On equivalence of the LKJ
distribution and the restricted Wishart distribution. [2] Sawyer,
S., 2007. Wishart Distributions and Inverse-Wishart Sampling. [3]
Anderson, T. W., 2003. An Introduction to Multivariate Statistical
Analysis (3rd ed.). [4] Odell, P. L. & Feiveson, A. H., 1966. A
Numerical Procedure to Generate a Sample Covariance Matrix. JASA,
61(313):199-203. [5] Ku, Y.-C. & Bloomfield, P., 2010. Generating
Random Wishart Matrices with Fractional Degrees of Freedom in OX.
arg_constraints = {'covariance_matrix': PositiveDefinite(), 'df': GreaterThan(lower_bound=0), 'precision_matrix': PositiveDefinite(), 'scale_tril': LowerCholesky()}
property covariance_matrix
entropy()
expand(batch_shape, _instance=None)
has_rsample = True
log_prob(value)
property mean
property mode
property precision_matrix
rsample(sample_shape=torch.Size([]), max_try_correction=None)
Warning:
In some cases, sampling algorithm based on Bartlett
decomposition may return singular matrix samples. Several
tries to correct singular samples are performed by default,
but it may end up returning singular matrix samples. Singular
samples may return *-inf* values in *.log_prob()*. In those
cases, the user should validate the samples and either fix the
value of df or adjust the max_try_correction argument of
.rsample accordingly.
property scale_tril
support = PositiveDefinite()
property variance
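A short construction sketch using "scale_tril", the parameterization recommended in the note above (the numbers are illustrative):

import torch
from torch.distributions import Wishart

# df must be larger than (matrix dimension - 1); the dimension here is 2.
m = Wishart(df=torch.tensor(3.0), scale_tril=torch.eye(2))
sample = m.rsample()
print(sample.shape)       # torch.Size([2, 2]) -- a positive definite matrix
print(m.log_prob(sample))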
KL Divergence
torch.distributions.kl.kl_divergence(p, q)
Compute Kullback-Leibler divergence KL(p | q) between two
distributions.
KL(p \| q) = \int p(x) \log\frac {p(x)} {q(x)} \,dx
Parameters:
* p (Distribution) -- A "Distribution" object.
* **q** (*Distribution*) -- A "Distribution" object.
Returns:
A batch of KL divergences of shape batch_shape.
Return type:
Tensor
Raises:
NotImplementedError -- If the distribution types have not
been registered via "register_kl()".
KL divergence is currently implemented for the following
distribution pairs:
* "Bernoulli" and "Bernoulli"
* "Bernoulli" and "Poisson"
* "Beta" and "Beta"
* "Beta" and "ContinuousBernoulli"
* "Beta" and "Exponential"
| https://pytorch.org/docs/stable/distributions.html | pytorch docs |
"Beta" and "Exponential"
"Beta" and "Gamma"
"Beta" and "Normal"
"Beta" and "Pareto"
"Beta" and "Uniform"
"Binomial" and "Binomial"
"Categorical" and "Categorical"
"Cauchy" and "Cauchy"
"ContinuousBernoulli" and "ContinuousBernoulli"
"ContinuousBernoulli" and "Exponential"
"ContinuousBernoulli" and "Normal"
"ContinuousBernoulli" and "Pareto"
"ContinuousBernoulli" and "Uniform"
"Dirichlet" and "Dirichlet"
"Exponential" and "Beta"
"Exponential" and "ContinuousBernoulli"
"Exponential" and "Exponential"
"Exponential" and "Gamma"
"Exponential" and "Gumbel"
"Exponential" and "Normal"
"Exponential" and "Pareto"
"Exponential" and "Uniform"
"ExponentialFamily" and "ExponentialFamily"
"Gamma" and "Beta"
"Gamma" and "ContinuousBernoulli"
"Gamma" and "Exponential"
"Gamma" and "Gamma"
| https://pytorch.org/docs/stable/distributions.html | pytorch docs |
"Gamma" and "Gamma"
"Gamma" and "Gumbel"
"Gamma" and "Normal"
"Gamma" and "Pareto"
"Gamma" and "Uniform"
"Geometric" and "Geometric"
"Gumbel" and "Beta"
"Gumbel" and "ContinuousBernoulli"
"Gumbel" and "Exponential"
"Gumbel" and "Gamma"
"Gumbel" and "Gumbel"
"Gumbel" and "Normal"
"Gumbel" and "Pareto"
"Gumbel" and "Uniform"
"HalfNormal" and "HalfNormal"
"Independent" and "Independent"
"Laplace" and "Beta"
"Laplace" and "ContinuousBernoulli"
"Laplace" and "Exponential"
"Laplace" and "Gamma"
"Laplace" and "Laplace"
"Laplace" and "Normal"
"Laplace" and "Pareto"
"Laplace" and "Uniform"
"LowRankMultivariateNormal" and "LowRankMultivariateNormal"
"LowRankMultivariateNormal" and "MultivariateNormal"
"MultivariateNormal" and "LowRankMultivariateNormal"
| https://pytorch.org/docs/stable/distributions.html | pytorch docs |
"MultivariateNormal" and "MultivariateNormal"
"Normal" and "Beta"
"Normal" and "ContinuousBernoulli"
"Normal" and "Exponential"
"Normal" and "Gamma"
"Normal" and "Gumbel"
"Normal" and "Laplace"
"Normal" and "Normal"
"Normal" and "Pareto"
"Normal" and "Uniform"
"OneHotCategorical" and "OneHotCategorical"
"Pareto" and "Beta"
"Pareto" and "ContinuousBernoulli"
"Pareto" and "Exponential"
"Pareto" and "Gamma"
"Pareto" and "Normal"
"Pareto" and "Pareto"
"Pareto" and "Uniform"
"Poisson" and "Bernoulli"
"Poisson" and "Binomial"
"Poisson" and "Poisson"
"TransformedDistribution" and "TransformedDistribution"
"Uniform" and "Beta"
"Uniform" and "ContinuousBernoulli"
"Uniform" and "Exponential"
"Uniform" and "Gamma"
"Uniform" and "Gumbel"
"Uniform" and "Normal"
| https://pytorch.org/docs/stable/distributions.html | pytorch docs |
"Uniform" and "Normal"
"Uniform" and "Pareto"
"Uniform" and "Uniform"
torch.distributions.kl.register_kl(type_p, type_q)
Decorator to register a pairwise function with "kl_divergence()".
Usage:
@register_kl(Normal, Normal)
def kl_normal_normal(p, q):
# insert implementation here
Lookup returns the most specific (type,type) match ordered by
subclass. If the match is ambiguous, a RuntimeWarning is raised.
For example to resolve the ambiguous situation:
@register_kl(BaseP, DerivedQ)
def kl_version1(p, q): ...
@register_kl(DerivedP, BaseQ)
def kl_version2(p, q): ...
you should register a third most-specific implementation, e.g.:
register_kl(DerivedP, DerivedQ)(kl_version1) # Break the tie.
Parameters:
* type_p (type) -- A subclass of "Distribution".
* **type_q** (*type*) -- A subclass of "Distribution".
Transforms
class torch.distributions.transforms.AbsTransform(cache_size=0)
Transform via the mapping y = |x|.
class torch.distributions.transforms.AffineTransform(loc, scale, event_dim=0, cache_size=0)
Transform via the pointwise affine mapping y = \text{loc} +
\text{scale} \times x.
Parameters:
* loc (Tensor or float) -- Location parameter.
* **scale** (*Tensor** or **float*) -- Scale parameter.
* **event_dim** (*int*) -- Optional size of *event_shape*. This
should be zero for univariate random variables, 1 for
distributions over vectors, 2 for distributions over matrices,
etc.
class torch.distributions.transforms.CatTransform(tseq, dim=0, lengths=None, cache_size=0)
Transform functor that applies a sequence of transforms tseq
component-wise to each submatrix at dim, of length
lengths[dim], in a way compatible with "torch.cat()".
Example:
x0 = torch.cat([torch.arange(1., 11.), torch.arange(1., 11.)], dim=0)
x = torch.cat([x0, x0], dim=0)
t0 = CatTransform([ExpTransform(), identity_transform], dim=0, lengths=[10, 10])
t = CatTransform([t0, t0], dim=0, lengths=[20, 20])
y = t(x)
class torch.distributions.transforms.ComposeTransform(parts, cache_size=0)
Composes multiple transforms in a chain. The transforms being
composed are responsible for caching.
Parameters:
* parts (list of "Transform") -- A list of transforms to
compose.
* **cache_size** (*int*) -- Size of cache. If zero, no caching
is done. If one, the latest single value is cached. Only 0 and
1 are supported.
class torch.distributions.transforms.CorrCholeskyTransform(cache_size=0)
Transforms an unconstrained real vector x with length D*(D-1)/2 into
the Cholesky factor of a D-dimension correlation matrix. This
Cholesky factor is a lower triangular matrix with positive
diagonals and unit Euclidean norm for each row. The transform is
processed as follows:
1. First we convert x into a lower triangular matrix in row
order.
2. For each row X_i of the lower triangular part, we apply a
*signed* version of class "StickBreakingTransform" to
transform X_i into a unit Euclidean length vector using the
following steps: - Scales into the interval (-1, 1) domain:
r_i = \tanh(X_i). - Transforms into an unsigned domain: z_i =
r_i^2. - Applies s_i = StickBreakingTransform(z_i). -
Transforms back into signed domain: y_i = sign(r_i) *
\sqrt{s_i}.
class torch.distributions.transforms.CumulativeDistributionTransform(distribution, cache_size=0)
Transform via the cumulative distribution function of a probability
distribution.
Parameters:
distribution (Distribution) -- Distribution whose
cumulative distribution function to use for the transformation.
Example:
# Construct a Gaussian copula from a multivariate normal.
base_dist = MultivariateNormal(
loc=torch.zeros(2),
scale_tril=LKJCholesky(2).sample(),
)
transform = CumulativeDistributionTransform(Normal(0, 1))
copula = TransformedDistribution(base_dist, [transform])
class torch.distributions.transforms.ExpTransform(cache_size=0)
Transform via the mapping y = \exp(x).
class torch.distributions.transforms.IndependentTransform(base_transform, reinterpreted_batch_ndims, cache_size=0)
Wrapper around another transform to treat
"reinterpreted_batch_ndims"-many extra of the right most dimensions
as dependent. This has no effect on the forward or backward
transforms, but does sum out "reinterpreted_batch_ndims"-many of
the rightmost dimensions in "log_abs_det_jacobian()".
Parameters:
* base_transform ("Transform") -- A base transform.
* **reinterpreted_batch_ndims** (*int*) -- The number of extra
rightmost dimensions to treat as dependent.
class torch.distributions.transforms.LowerCholeskyTransform(cache_size=0)
Transform from unconstrained matrices to lower-triangular matrices
with nonnegative diagonal entries.
This is useful for parameterizing positive definite matrices in
terms of their Cholesky factorization.
class torch.distributions.transforms.PositiveDefiniteTransform(cache_size=0)
Transform from unconstrained matrices to positive-definite
matrices.
class torch.distributions.transforms.PowerTransform(exponent, cache_size=0)
Transform via the mapping y = x^{\text{exponent}}.
class torch.distributions.transforms.ReshapeTransform(in_shape, out_shape, cache_size=0)
Unit Jacobian transform to reshape the rightmost part of a tensor.
Note that "in_shape" and "out_shape" must have the same number of
elements, just as for "torch.Tensor.reshape()".
Parameters:
* in_shape (torch.Size) -- The input event shape.
out_shape (torch.Size) -- The output event shape.
class torch.distributions.transforms.SigmoidTransform(cache_size=0)
Transform via the mapping y = \frac{1}{1 + \exp(-x)} and x =
\text{logit}(y).
class torch.distributions.transforms.SoftplusTransform(cache_size=0)
Transform via the mapping \text{Softplus}(x) = \log(1 + \exp(x)).
The implementation reverts to the linear function when x > 20.
class torch.distributions.transforms.TanhTransform(cache_size=0)
Transform via the mapping y = \tanh(x).
It is equivalent to "ComposeTransform([AffineTransform(0., 2.),
SigmoidTransform(), AffineTransform(-1., 2.)])". However, this
composition might not be numerically stable, so it is recommended
to use "TanhTransform" instead.
Note that one should use cache_size=1 when it comes to NaN/Inf
values.
class torch.distributions.transforms.SoftmaxTransform(cache_size=0)
Transform from unconstrained space to the simplex via y = \exp(x)
then normalizing.
This is not bijective and cannot be used for HMC. However this acts
mostly coordinate-wise (except for the final normalization), and
thus is appropriate for coordinate-wise optimization algorithms.
class torch.distributions.transforms.StackTransform(tseq, dim=0, cache_size=0)
Transform functor that applies a sequence of transforms tseq
component-wise to each submatrix at dim in a way compatible with
"torch.stack()".
Example:
x = torch.stack([torch.arange(1., 11.), torch.arange(1., 11.)], dim=1)
t = StackTransform([ExpTransform(), identity_transform], dim=1)
y = t(x)
class torch.distributions.transforms.StickBreakingTransform(cache_size=0)
Transform from unconstrained space to the simplex of one additional
dimension via a stick-breaking process.
This transform arises as an iterated sigmoid transform in a stick-
breaking construction of the Dirichlet distribution: the first
logit is transformed via sigmoid to the first probability and the
probability of everything else, and then the process recurses.
This is bijective and appropriate for use in HMC; however it mixes
coordinates together and is less appropriate for optimization.
class torch.distributions.transforms.Transform(cache_size=0)
Abstract class for invertible transformations with computable log
det jacobians. They are primarily used in
"torch.distributions.TransformedDistribution".
Caching is useful for transforms whose inverses are either
expensive or numerically unstable. Note that care must be taken
with memoized values since the autograd graph may be reversed. For
example while the following works with or without caching:
y = t(x)
t.log_abs_det_jacobian(x, y).backward() # x will receive gradients.
However the following will error when caching due to dependency
reversal:
y = t(x)
z = t.inv(y)
grad(z.sum(), [y]) # error because z is x
Derived classes should implement one or both of "_call()" or
"_inverse()". Derived classes that set bijective=True should also
implement "log_abs_det_jacobian()".
Parameters:
cache_size (int) -- Size of cache. If zero, no caching is
done. If one, the latest single value is cached. Only 0 and 1
are supported.
Variables:
* domain ("Constraint") -- The constraint representing valid
inputs to this transform.
* **codomain** ("Constraint") -- The constraint representing
valid outputs to this transform which are inputs to the
inverse transform.
* **bijective** (*bool*) -- Whether this transform is bijective.
A transform "t" is bijective iff "t.inv(t(x)) == x" and
"t(t.inv(y)) == y" for every "x" in the domain and "y" in the
codomain. Transforms that are not bijective should at least
maintain the weaker pseudoinverse properties "t(t.inv(t(x))) ==
t(x)" and "t.inv(t(t.inv(y))) == t.inv(y)".
* **sign** (*int** or **Tensor*) -- For bijective univariate
transforms, this should be +1 or -1 depending on whether
transform is monotone increasing or decreasing.
property inv
Returns the inverse "Transform" of this transform. This should
satisfy "t.inv.inv is t".
property sign
Returns the sign of the determinant of the Jacobian, if
applicable. In general this only makes sense for bijective
transforms.
log_abs_det_jacobian(x, y)
Computes the log det jacobian *log |dy/dx|* given input and
output.
forward_shape(shape)
Infers the shape of the forward computation, given the input
shape. Defaults to preserving shape.
inverse_shape(shape)
Infers the shapes of the inverse computation, given the output
shape. Defaults to preserving shape.
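As a sketch of the derived-class contract described above (the class below is illustrative, not part of the library), a bijective transform only needs "_call()", "_inverse()" and "log_abs_det_jacobian()":

import math
import torch
from torch.distributions import constraints
from torch.distributions.transforms import Transform

class ScaledExpTransform(Transform):
    # Hypothetical transform y = k * exp(x) for a fixed positive k.
    domain = constraints.real
    codomain = constraints.positive
    bijective = True
    sign = +1

    def __init__(self, k=2.0, cache_size=0):
        super().__init__(cache_size=cache_size)
        self.k = k

    def _call(self, x):
        return self.k * torch.exp(x)

    def _inverse(self, y):
        return torch.log(y / self.k)

    def log_abs_det_jacobian(self, x, y):
        # dy/dx = k * exp(x), so log|dy/dx| = x + log(k)
        return x + math.log(self.k)

t = ScaledExpTransform()
x = torch.randn(3)
assert torch.allclose(t.inv(t(x)), x, atol=1e-6)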
Constraints
The following constraints are implemented:
"constraints.boolean"
| https://pytorch.org/docs/stable/distributions.html | pytorch docs |
"constraints.boolean"
"constraints.cat"
"constraints.corr_cholesky"
"constraints.dependent"
"constraints.greater_than(lower_bound)"
"constraints.greater_than_eq(lower_bound)"
"constraints.independent(constraint, reinterpreted_batch_ndims)"
"constraints.integer_interval(lower_bound, upper_bound)"
"constraints.interval(lower_bound, upper_bound)"
"constraints.less_than(upper_bound)"
"constraints.lower_cholesky"
"constraints.lower_triangular"
"constraints.multinomial"
"constraints.nonnegative_integer"
"constraints.one_hot"
"constraints.positive_integer"
"constraints.positive"
"constraints.positive_semidefinite"
"constraints.positive_definite"
"constraints.real_vector"
"constraints.real"
"constraints.simplex"
"constraints.symmetric"
"constraints.stack"
"constraints.square"
"constraints.symmetric"
"constraints.unit_interval"
class torch.distributions.constraints.Constraint
Abstract base class for constraints.
A constraint object represents a region over which a variable is
valid, e.g. within which a variable can be optimized.
Variables:
* is_discrete (bool) -- Whether constrained space is
discrete. Defaults to False.
* **event_dim** (*int*) -- Number of rightmost dimensions that
together define an event. The "check()" method will remove
this many dimensions when computing validity.
check(value)
Returns a byte tensor of "sample_shape + batch_shape" indicating
whether each event in value satisfies this constraint.
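For example, checking a batch of candidate values against a built-in constraint (a quick sketch):

import torch
from torch.distributions import constraints

values = torch.tensor([[0.2, 0.8], [0.5, 0.9]])
# "simplex" has event_dim=1: each row is one event and must sum to 1.
print(constraints.simplex.check(values))  # tensor([ True, False])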
torch.distributions.constraints.cat
alias of "_Cat"
torch.distributions.constraints.dependent_property
alias of "_DependentProperty"
torch.distributions.constraints.greater_than
alias of "_GreaterThan"
torch.distributions.constraints.greater_than_eq
alias of "_GreaterThanEq"
torch.distributions.constraints.independent
alias of "_IndependentConstraint"
torch.distributions.constraints.integer_interval
alias of "_IntegerInterval"
torch.distributions.constraints.interval
alias of "_Interval"
torch.distributions.constraints.half_open_interval
alias of "_HalfOpenInterval"
torch.distributions.constraints.less_than
alias of "_LessThan"
torch.distributions.constraints.multinomial
alias of "_Multinomial"
torch.distributions.constraints.stack
alias of "_Stack"
Constraint Registry
PyTorch provides two global "ConstraintRegistry" objects that link
"Constraint" objects to "Transform" objects. These objects both input
constraints and return transforms, but they have different guarantees
on bijectivity.
"biject_to(constraint)" looks up a bijective "Transform" from
"constraints.real" to the given "constraint". The returned
transform is guaranteed to have ".bijective = True" and should
implement ".log_abs_det_jacobian()".
"transform_to(constraint)" looks up a not-necessarily bijective
"Transform" from "constraints.real" to the given "constraint". The
returned transform is not guaranteed to implement
".log_abs_det_jacobian()".
The "transform_to()" registry is useful for performing unconstrained
optimization on constrained parameters of probability distributions,
which are indicated by each distribution's ".arg_constraints" dict.
These transforms often overparameterize a space in order to avoid
rotation; they are thus more suitable for coordinate-wise optimization
algorithms like Adam:
loc = torch.zeros(100, requires_grad=True)
unconstrained = torch.zeros(100, requires_grad=True)
scale = transform_to(Normal.arg_constraints['scale'])(unconstrained)
loss = -Normal(loc, scale).log_prob(data).sum()
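A more complete sketch of that pattern, with an illustrative dataset and an explicit optimizer step (all names and values here are placeholders):

import torch
from torch.distributions import Normal, transform_to

data = torch.randn(1000) * 2.5 + 1.0      # illustrative observations
loc = torch.zeros(1, requires_grad=True)
unconstrained = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.Adam([loc, unconstrained], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    # Map the unconstrained parameter into the valid range for "scale".
    scale = transform_to(Normal.arg_constraints['scale'])(unconstrained)
    loss = -Normal(loc, scale).log_prob(data).sum()
    loss.backward()
    optimizer.step()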
The "biject_to()" registry is useful for Hamiltonian Monte Carlo,
where samples from a probability distribution with constrained
".support" are propagated in an unconstrained space, and algorithms
are typically rotation invariant:
dist = Exponential(rate)
unconstrained = torch.zeros(100, requires_grad=True)
sample = biject_to(dist.support)(unconstrained)
potential_energy = -dist.log_prob(sample).sum()
Note:
An example where "transform_to" and "biject_to" differ is
"constraints.simplex": "transform_to(constraints.simplex)" returns a
"SoftmaxTransform" that simply exponentiates and normalizes its
inputs; this is a cheap and mostly coordinate-wise operation
appropriate for algorithms like SVI. In contrast,
"biject_to(constraints.simplex)" returns a "StickBreakingTransform"
that bijects its input down to a one-fewer-dimensional space; this is a
more expensive, less numerically stable transform but is needed for
algorithms like HMC.
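Concretely, the two registries return different transforms for the same constraint (a small illustration):

import torch
from torch.distributions import biject_to, transform_to, constraints

u = torch.randn(4)
print(type(transform_to(constraints.simplex)).__name__)  # SoftmaxTransform
print(type(biject_to(constraints.simplex)).__name__)     # StickBreakingTransform

# Same-sized output vs. one extra coordinate added by the bijection.
print(transform_to(constraints.simplex)(u).shape)  # torch.Size([4])
print(biject_to(constraints.simplex)(u).shape)     # torch.Size([5])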
The "biject_to" and "transform_to" objects can be extended by user-
defined constraints and transforms using their ".register()" method
either as a function on singleton constraints:
transform_to.register(my_constraint, my_transform)
or as a decorator on parameterized constraints:
@transform_to.register(MyConstraintClass)
def my_factory(constraint):
assert isinstance(constraint, MyConstraintClass)
return MyTransform(constraint.param1, constraint.param2)
You can create your own registry by creating a new
"ConstraintRegistry" object.
class torch.distributions.constraint_registry.ConstraintRegistry
Registry to link constraints to transforms.
register(constraint, factory=None)
Registers a "Constraint" subclass in this registry. Usage:
@my_registry.register(MyConstraintClass)
def construct_transform(constraint):
assert isinstance(constraint, MyConstraintClass)
return MyTransform(constraint.arg_constraints)
Parameters:
* **constraint** (subclass of "Constraint") -- A subclass of
"Constraint", or a singleton object of the desired class.
| https://pytorch.org/docs/stable/distributions.html | pytorch docs |
factory (Callable) -- A callable that inputs a
constraint object and returns a "Transform" object.
Named Tensors operator coverage
Please read Named Tensors first for an introduction to named tensors.
This document is a reference for name inference, a process that
defines how named tensors:
use names to provide additional automatic runtime correctness
checks
propagate names from input tensors to output tensors
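A quick sketch of both behaviors (the named tensor API is experimental, as noted below):

import torch

x = torch.randn(3, 4, names=('N', 'C'))
y = torch.randn(3, 4, names=('N', 'C'))

print(x.abs().names)   # ('N', 'C') -- keeps input names
print((x + y).names)   # ('N', 'C') -- unifies names from inputs

# Mismatched names are a runtime error rather than a silent broadcast.
z = torch.randn(3, 4, names=('N', 'H'))
try:
    x + z
except RuntimeError as err:
    print("name mismatch:", err)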
Below is a list of all operations that are supported with named
tensors and their associated name inference rules.
If you don't see an operation listed here, but it would help your use
case, please search if an issue has already been filed and if not,
file one.
Warning:
The named tensor API is experimental and subject to change.
Supported Operations
^^^^^^^^^^^^^^^^^^^^
+----------------------+----------------------+
| API | Name inference rule |
|======================|======================|
| "Tensor.abs()", | Keeps input names |
| "torch.abs()" | | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "torch.abs()" | |
+----------------------+----------------------+
| "Tensor.abs_()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.acos()", | Keeps input names |
| "torch.acos()" | |
+----------------------+----------------------+
| "Tensor.acos_()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.add()", | Unifies names from |
| "torch.add()" | inputs |
+----------------------+----------------------+
| "Tensor.add_()" | Unifies names from |
| | inputs |
+----------------------+----------------------+
| "Tensor.addmm()", | Contracts away dims |
| "torch.addmm()" | |
+----------------------+----------------------+
| "Tensor.addmm_()" | Contracts away dims |
+----------------------+----------------------+
| "Tensor.addmv()", | Contracts away dims | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "Tensor.addmv()", | Contracts away dims |
| "torch.addmv()" | |
+----------------------+----------------------+
| "Tensor.addmv_()" | Contracts away dims |
+----------------------+----------------------+
| "Tensor.align_as()" | See documentation |
+----------------------+----------------------+
| "Tensor.align_to()" | See documentation |
+----------------------+----------------------+
| "Tensor.all()", | None |
| "torch.all()" | |
+----------------------+----------------------+
| "Tensor.any()", | None |
| "torch.any()" | |
+----------------------+----------------------+
| "Tensor.asin()", | Keeps input names |
| "torch.asin()" | |
+----------------------+----------------------+
| "Tensor.asin_()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.atan()", | Keeps input names | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "Tensor.atan()", | Keeps input names |
| "torch.atan()" | |
+----------------------+----------------------+
| "Tensor.atan2()", | Unifies names from |
| "torch.atan2()" | inputs |
+----------------------+----------------------+
| "Tensor.atan2_()" | Unifies names from |
| | inputs |
+----------------------+----------------------+
| "Tensor.atan_()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.bernoulli() | Keeps input names |
| ", | |
| "torch.bernoulli()" | |
+----------------------+----------------------+
| "Tensor.bernoulli_( | None |
| )" | |
+----------------------+----------------------+
| "Tensor.bfloat16()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.bitwise_not | Keeps input names | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |