RedTachyon
committed on
Commit
•
4d628f2
1
Parent(s):
712d850
Upload folder using huggingface_hub
Browse files
- I9jLbsV3mW/10_image_0.png +3 -0
- I9jLbsV3mW/10_image_1.png +3 -0
- I9jLbsV3mW/11_image_0.png +3 -0
- I9jLbsV3mW/1_image_0.png +3 -0
- I9jLbsV3mW/24_image_0.png +3 -0
- I9jLbsV3mW/26_image_0.png +3 -0
- I9jLbsV3mW/26_image_1.png +3 -0
- I9jLbsV3mW/27_image_0.png +3 -0
- I9jLbsV3mW/27_image_1.png +3 -0
- I9jLbsV3mW/28_image_0.png +3 -0
- I9jLbsV3mW/28_image_1.png +3 -0
- I9jLbsV3mW/29_image_0.png +3 -0
- I9jLbsV3mW/29_image_1.png +3 -0
- I9jLbsV3mW/30_image_0.png +3 -0
- I9jLbsV3mW/5_image_0.png +3 -0
- I9jLbsV3mW/7_image_0.png +3 -0
- I9jLbsV3mW/I9jLbsV3mW.md +1210 -0
- I9jLbsV3mW/I9jLbsV3mW_meta.json +25 -0
I9jLbsV3mW/10_image_0.png ADDED (Git LFS)
I9jLbsV3mW/10_image_1.png ADDED (Git LFS)
I9jLbsV3mW/11_image_0.png ADDED (Git LFS)
I9jLbsV3mW/1_image_0.png ADDED (Git LFS)
I9jLbsV3mW/24_image_0.png ADDED (Git LFS)
I9jLbsV3mW/26_image_0.png ADDED (Git LFS)
I9jLbsV3mW/26_image_1.png ADDED (Git LFS)
I9jLbsV3mW/27_image_0.png ADDED (Git LFS)
I9jLbsV3mW/27_image_1.png ADDED (Git LFS)
I9jLbsV3mW/28_image_0.png ADDED (Git LFS)
I9jLbsV3mW/28_image_1.png ADDED (Git LFS)
I9jLbsV3mW/29_image_0.png ADDED (Git LFS)
I9jLbsV3mW/29_image_1.png ADDED (Git LFS)
I9jLbsV3mW/30_image_0.png ADDED (Git LFS)
I9jLbsV3mW/5_image_0.png ADDED (Git LFS)
I9jLbsV3mW/7_image_0.png ADDED (Git LFS)
I9jLbsV3mW/I9jLbsV3mW.md
ADDED
@@ -0,0 +1,1210 @@
# Taming Diffusion Times In Score-Based Generative Models: Trade-Offs And Solutions

Anonymous authors
Paper under double-blind review

## Abstract

Score-based diffusion models are a class of generative models whose dynamics are described by stochastic differential equations that map noise into data. While recent works have started to lay down a theoretical foundation for these models, an analytical understanding of the role of the diffusion time T is still lacking. Current best practice advocates for a large T to ensure that the forward dynamics brings the diffusion sufficiently close to a known and simple noise distribution; however, a smaller value of T should be preferred for a better approximation of the score-matching objective and higher computational efficiency. Starting from a variational interpretation of diffusion models, in this work we quantify this trade-off and suggest a new method to improve the quality and efficiency of both training and sampling, by adopting smaller diffusion times. Indeed, we show how an auxiliary model can be used to bridge the gap between the ideal and the simulated forward dynamics, followed by a standard reverse diffusion process. Empirical results support our analysis; for image data, our method is competitive w.r.t. the state of the art, according to standard sample quality metrics and log-likelihood.
## 1 Introduction

Diffusion-based generative models (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Song et al., 2021c; Vahdat et al., 2021; Kingma et al., 2021; Ho et al., 2020; Song et al., 2021a) have recently gained popularity due to their ability to synthesize high-quality audio (Kong et al., 2021; Lee et al., 2022b), images (Dhariwal & Nichol, 2021; Nichol & Dhariwal, 2021) and other data modalities (Tashiro et al., 2021), outperforming methods based on Generative Adversarial Networks (gans) (Goodfellow et al., 2014), normalizing flows (nfs) (Kingma et al., 2016), Variational Autoencoders (vaes) and Bayesian autoencoders (baes) (Kingma & Welling, 2014; Tran et al., 2021).
Diffusion models learn to generate samples from an unknown density p*data* by reversing a *diffusion process* which transforms the distribution of interest into noise. The forward dynamics injects noise into the data following a diffusion process that can be described by a Stochastic Differential Equation (sde) of the form

$$\mathrm{d}\mathbf{x}_{t}=\mathbf{f}(\mathbf{x}_{t},t)\,\mathrm{d}t+g(t)\,\mathrm{d}\mathbf{w}_{t}\quad\text{with}\quad\mathbf{x}_{0}\sim p_{\text{data}},\tag{1}$$

where $\mathbf{x}_{t}$ is a random variable at time $t$, $\mathbf{f}(\cdot,t)$ is the *drift term*, $g(\cdot)$ is the *diffusion term* and $\mathbf{w}_{t}$ is a Wiener process (or Brownian motion). We will also consider a special class of linear sdes, for which the drift term is decomposed as $\mathbf{f}(\mathbf{x}_{t},t)=\alpha(t)\mathbf{x}_{t}$ and the diffusion term is independent of $\mathbf{x}_{t}$. This class of parameterizations of sdes is known as *affine* and admits analytic solutions. We denote the time-varying probability density by $p(\mathbf{x},t)$, where by definition $p(\mathbf{x},0)=p_{\text{data}}(\mathbf{x})$, and the conditional on the initial condition $\mathbf{x}_{0}$ by $p(\mathbf{x},t\,|\,\mathbf{x}_{0})$.
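As a concrete illustration of Eq. (1), the sketch below simulates the forward dynamics with an Euler-Maruyama discretization. It is a minimal example, not the paper's implementation: the function names (`euler_maruyama_forward`, `drift`, `diffusion`) and the constant-β affine drift in the usage example are assumptions made here for illustration.

```python
import numpy as np

def euler_maruyama_forward(x0, drift, diffusion, T=1.0, n_steps=1000, rng=None):
    """Simulate Eq. (1) forward in time with the Euler-Maruyama scheme.

    drift(x, t) plays the role of f(x_t, t) and diffusion(t) the role of g(t).
    Returns the state x_T after diffusing for time T.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    dt = T / n_steps
    for k in range(n_steps):
        t = k * dt
        noise = rng.standard_normal(x.shape)
        x = x + drift(x, t) * dt + diffusion(t) * np.sqrt(dt) * noise
    return x

# Usage: an affine, variance-preserving-style drift f(x, t) = -0.5 * beta * x, g(t) = sqrt(beta).
beta = 1.0
x_T = euler_maruyama_forward(
    x0=np.array([1.0]),
    drift=lambda x, t: -0.5 * beta * x,
    diffusion=lambda t: np.sqrt(beta),
    T=1.0,
)
```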
The forward sde is usually considered for a sufficiently long *diffusion time* T, leading to the density p(x, T). In principle, when T → ∞, p(x, T) converges to Gaussian noise, regardless of initial conditions.

For generative modeling purposes, we are interested in the inverse dynamics of such a process, i.e., transforming samples of the noisy distribution p(x, T) into p*data* (x). Formally, such dynamics can be obtained by considering the solutions of the inverse diffusion process (Anderson, 1982),

$$\mathrm{d}\mathbf{x}_{t}=\left[-\mathbf{f}(\mathbf{x}_{t},t^{\prime})+g^{2}(t^{\prime})\,\nabla\log p(\mathbf{x}_{t},t^{\prime})\right]\mathrm{d}t+g(t^{\prime})\,\mathrm{d}\mathbf{w}_{t},\tag{2}$$

where $t^{\prime}\overset{\text{def}}{=}T-t$, with the inverse dynamics involving a new Wiener process, which runs in reverse time. Given p(x, T) as the initial condition, the solution of Eq. (2) after a *reverse diffusion time* T will be distributed as p*data* (x). We refer to the density associated with the backward process as q(x, t). The simulation of the backward process is referred to as *sampling* and, differently from the forward process, this process is not *affine* and a closed-form solution is out of reach.
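The reverse dynamics of Eq. (2) can likewise be discretized with Euler-Maruyama. The following sketch assumes access to some score function (in practice the learned approximation introduced below); the function and argument names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def reverse_diffusion_sample(x_T, drift, diffusion, score, T=1.0, n_steps=1000, rng=None):
    """Simulate the reverse-time dynamics of Eq. (2) with the Euler-Maruyama scheme.

    score(x, t) stands in for the (generally unknown) gradient of log p(x, t).
    The loop runs in the reverse-time variable, so t' = T - t maps each step back
    to the corresponding instant of the forward process.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x_T, dtype=float)
    dt = T / n_steps
    for k in range(n_steps):
        t_prime = T - k * dt
        noise = rng.standard_normal(x.shape)
        drift_rev = -drift(x, t_prime) + diffusion(t_prime) ** 2 * score(x, t_prime)
        x = x + drift_rev * dt + diffusion(t_prime) * np.sqrt(dt) * noise
    return x
```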
Practical considerations on diffusion times. In practice, diffusion models are challenging to work with (Song et al., 2021c). Indeed, direct access to the true *score* function ∇ log p(xt, t), required in the dynamics of the reverse diffusion, is unavailable. This can be solved by approximating it with a parametric function sθ(xt, t), e.g., a neural network, which is trained using the following loss function,

$$\mathcal{L}(\boldsymbol{\theta})=\int_{0}^{T}\mathbb{E}_{\sim(1)}\,\lambda(t)\,\big\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\big\|^{2}\,\mathrm{d}t,\tag{3}$$

where λ(t) is a positive weighting factor and the notation E∼(1) means that the expectation is taken with respect to the random process xt in Eq. (1): for a generic function h, $\mathbb{E}_{\sim(1)}[h(\mathbf{x}_{t},\mathbf{x}_{0},t)]=\int h(\mathbf{x},\mathbf{z},t)\,p(\mathbf{x},t\,|\,\mathbf{z})\,p_{\text{data}}(\mathbf{z})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{z}$. In practice, the integral over time is estimated by sampling t from the uniform distribution p(t) = U(0, T). This loss, usually referred to as the *score matching loss*, is the cost function considered in Song et al. (2021b) (Eq. (4)). The condition $\lambda(t)=g^{2}(t)$, adopted in this work, is referred to as *likelihood reweighting* (Song et al., 2021b). Due to the affine property of the drift, the term p(xt, t | x0) is analytically known and normally distributed for all t (expression available in Table 1, and in Särkkä & Solin (2019)). Intuitively, the estimation of the *score* is akin to a denoising objective, which operates in a challenging regime. Later we will quantify precisely the difficulty of learning the *score* as a function of increasing diffusion times.
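A minimal Monte Carlo estimator of the loss in Eq. (3), with the likelihood-reweighting choice λ(t) = g²(t), can be sketched as follows. It assumes a user-supplied `kernel(x0, t)` returning the mean and standard deviation of the Gaussian conditional p(xt, t | x0) (see Table 1); all names are illustrative, and this is not the paper's training code.

```python
import numpy as np

def score_matching_loss(score, x0_batch, kernel, g, T=1.0, n_t=128, rng=None):
    """Monte Carlo estimate of Eq. (3) with lambda(t) = g(t)^2.

    x0_batch has a leading batch axis. kernel(x0, t) must return (mean, std) of the
    Gaussian conditional p(x_t, t | x_0), so that
    grad_x log p(x_t, t | x_0) = -(x_t - mean) / std**2.
    """
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(n_t):
        t = rng.uniform(0.0, T)                     # t ~ U(0, T), as in the training objective
        mean, std = kernel(x0_batch, t)
        x_t = mean + std * rng.standard_normal(np.asarray(x0_batch, dtype=float).shape)
        target = -(x_t - mean) / std**2             # conditional score of the Gaussian kernel
        diff = (score(x_t, t) - target) ** 2
        err = diff.reshape(len(diff), -1).sum(axis=1)   # squared norm per sample
        total += g(t) ** 2 * err.mean()
    return T * total / n_t                          # rescale the uniform-time average to the integral
```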
While the forward and reverse diffusion processes are valid for all T, the noise distribution p(x, T) is analytically known only when the diffusion time is T → ∞. To overcome this problem, the common solution is to replace p(x, T) with a simple distribution p*noise* (x) which, for the classes of sdes we consider in this work, is a Gaussian distribution.

In the literature, the discrepancy between p(x, T) and p*noise* (x) has been neglected, under the informal assumption of a sufficiently large diffusion time. Unfortunately, while this approximation seems a valid approach to simulate and generate samples, the reverse diffusion process starts from a different initial condition q(x, 0) and, as a consequence, it will converge to a solution q(x, T) that is different from the true p*data* (x). Later, we will expand on the error introduced by this approximation, but for illustration purposes Fig. 1 shows quantitatively this behavior for a simple 1D toy example p*data* (x) = πN(1, 0.1²) + (1 − π)N(3, 0.5²), with π = 0.3: when T is small, the distribution p*noise* (x) is very different from p(x, T) and samples from q(x, T) exhibit very low likelihood of being generated from p*data* (x).
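For reference, the toy data distribution of Fig. 1 and the likelihood used to score generated samples can be written down directly. The following sketch defines that mixture; the helper names are assumptions made here for illustration.

```python
import numpy as np

# Toy data distribution of Fig. 1: p_data(x) = pi * N(1, 0.1^2) + (1 - pi) * N(3, 0.5^2), pi = 0.3.
PI = 0.3
MEANS = np.array([1.0, 3.0])
STDS = np.array([0.1, 0.5])
WEIGHTS = np.array([PI, 1.0 - PI])

def pdata_pdf(x):
    """Density of the toy Gaussian mixture, evaluated elementwise."""
    x = np.atleast_1d(x)[:, None]
    comp = np.exp(-0.5 * ((x - MEANS) / STDS) ** 2) / (STDS * np.sqrt(2.0 * np.pi))
    return comp @ WEIGHTS

def sample_pdata(n, rng=None):
    """Draw n samples from the toy mixture."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(2, size=n, p=WEIGHTS)
    return rng.normal(MEANS[idx], STDS[idx])

# Log-likelihood of a batch of (generated) samples under p_data, as reported in Fig. 1.
log_likelihood = np.log(pdata_pdf(sample_pdata(1024)))
```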
![1_image_0.png](1_image_0.png)

Figure 1: Effect of T on a toy model: low diffusion times are detrimental for sample quality (likelihood of 1024 samples as median and 95% quantile, over 8 random seeds).

Crucially, Fig. 1 (zoomed region) illustrates a previously unknown behavior of diffusion models, which we unveil in our analysis. The right balance between efficient *score* estimation and sampling quality can be achieved by diffusion times that are smaller than common best practices. This is a key observation we explore in our work.
Contributions. An appropriate choice of the diffusion time T is a key factor that impacts training convergence, sampling time and quality. On the one hand, the approximation error introduced by considering initial conditions for the reverse diffusion process drawn from a simple distribution p*noise* (x) ̸= p(x, T) increases when T is small. This is why the current best practice is to choose a sufficiently long diffusion time. On the other hand, training convergence of the *score* model sθ(xt, t) becomes more challenging to achieve with a large T, which also imposes extremely high computational costs **both** for training and for sampling. This would suggest choosing a smaller diffusion time. Given the importance of this problem, in this work we set off to study, for the first time, the existence of suitable operating regimes that strike the right balance between computational efficiency and model quality. The main contributions of this work are the following.

Contribution 1: In § 2 we use an elbo decomposition which allows us to study the impact of the diffusion time T. This elbo decomposition emphasizes the roles of (i) the discrepancy between the "ending" distribution of the diffusion and the "starting" distribution of the reverse diffusion processes, and (ii) the *score* matching objective. Crucially, our analysis does not rely on assumptions on the quality of the score models. We explicitly study the existence of a trade-off and explore experimentally, for the first time, current approaches for selecting the diffusion time T.

Contribution 2: In § 3 we propose a novel method to improve *both* training and sampling efficiency of diffusion-based models, while maintaining high sample quality. Our method introduces an auxiliary distribution, allowing us to transform the simple "starting" distribution of the reverse process used in the literature so as to minimize the discrepancy to the "ending" distribution of the forward process. Then, a standard reverse diffusion can be used to closely match the data distribution. Intuitively, our method allows us to build "bridges" across multiple distributions, and to set T toward the advantageous regime of small diffusion times. In addition to our methodological contributions, in § 4 we provide experimental evidence of the benefits of our method, in terms of sample quality and log-likelihood. Finally, we conclude in § 5.

Related Work. A concurrent work by Zheng et al. (2022) presents an empirical study of a truncated diffusion process, but lacks a rigorous analysis and a clear justification for the proposed approach. Recent attempts by Lee et al. (2022b) to optimize p*noise* , or the proposal to do so (Austin et al., 2021), have been studied in different contexts. Related work focuses primarily on improving sampling efficiency, using a wide array of techniques. Sample generation times can be drastically reduced by considering adaptive step-size integrators (Jolicoeur-Martineau et al., 2021). Other popular choices are based on merging multiple steps of a pretrained model through distillation techniques (Salimans & Ho, 2022) or on taking larger sampling steps with GANs (Xiao et al., 2022). Approaches closer to ours *modify* the sde, or the discrete time processes, to obtain inference efficiency gains. In particular, Song et al. (2021a) consider implicit non-Markovian diffusion processes, while Watson et al. (2021) change the diffusion process by optimal schedule selection and Dockhorn et al. (2022) consider overdamped sdes. Finally, hybrid techniques combining VAEs and diffusion models (Vahdat et al., 2021), or simple autoencoders and diffusion models (Rombach et al., 2022), have positive effects on training and sampling times.
## 2 Exploring A Tradeoff On Diffusion Time

The dynamics of a diffusion model can be studied through the lens of variational inference, which allows us to bound the (log-)likelihood using an evidence lower bound (elbo) (Huang et al., 2021).

The interpretation we consider in this work (see also Song et al. (2021b), Thm. 1) emphasizes the two main factors affecting the quality of sample generation: an imperfect *score*, and a mismatch, measured in terms of the Kullback-Leibler (kl) divergence, between the noise distribution p(x, T) of the forward process and the distribution p*noise* used to initialize the backward process.
## 2.1 Preliminaries: The Elbo Decomposition

Our goal is to study the quality of the generated data distribution as a function of the diffusion time T. Then, instead of focusing on the log-likelihood bounds for single datapoints log q(x, T), we consider the average over the data distribution, i.e. the cross-entropy Ep*data* (x)log q(x, T). By rewriting the Lelbo derived in Huang et al. (2021, Eq. (25)) (details of the steps in the Appendix), we have that

$$\mathbb{E}_{p_{\text{data}}(\mathbf{x})}\log q(\mathbf{x},T)\geq\mathcal{L}_{\text{ELBO}}(\mathbf{s}_{\boldsymbol{\theta}},T)=\mathbb{E}_{\sim(1)}\log p_{\text{noise}}\left(\mathbf{x}_{T}\right)-I(\mathbf{s}_{\boldsymbol{\theta}},T)+R(T),\tag{4}$$

where $R(T)=\frac{1}{2}\int_{t=0}^{T}\mathbb{E}_{\sim(1)}\left[g^{2}(t)\|\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\|^{2}-2\mathbf{f}^{\top}(\mathbf{x}_{t},t)\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right]\mathrm{d}t$, and $I(\mathbf{s}_{\boldsymbol{\theta}},T)=\frac{1}{2}\int_{t=0}^{T}g^{2}(t)\,\mathbb{E}_{\sim(1)}\left[\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\|^{2}\right]\mathrm{d}t$ is equal to the loss term in Eq. (3) when $\lambda(t)=g^{2}(t)$. Note that R(T) depends neither on sθ nor on p*noise* , while I(sθ, T), or an equivalent reparameterization (Huang et al., 2021; Song et al., 2021b, Eq. (1)), is used to learn the approximated *score*, by optimization of the parameters θ. It is then possible to show that

$$I(\mathbf{s}_{\boldsymbol{\theta}},T)\geq\underbrace{I(\nabla\log p,T)}_{\stackrel{\text{def}}{=}K(T)}=\frac{1}{2}\int\limits_{t=0}^{T}g^{2}(t)\,\mathbb{E}_{\sim(1)}\left[\left\|\nabla\log p(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right\|^{2}\right]\mathrm{d}t.\tag{5}$$

Note that the term K(T) = I(∇ log *p, T*) does not depend on θ. Consequently, we can define G(sθ, T) = I(sθ, T) − K(T) (see Appendix for details), where G(sθ, T) is a positive term that we call the gap term, accounting for the practical case of an imperfect *score*, i.e. sθ(xt, t) ̸= ∇ log p(xt, t). It also holds that

$$\mathbb{E}_{\sim(1)}\log p_{\text{noise}}(\mathbf{x}_{T})=\int\left[\log p_{\text{noise}}(\mathbf{x})-\log p(\mathbf{x},T)+\log p(\mathbf{x},T)\right]p(\mathbf{x},T)\,\mathrm{d}\mathbf{x}=\mathbb{E}_{\sim(1)}\log p(\mathbf{x}_{T},T)-\operatorname{KL}\left[p(\mathbf{x},T)\parallel p_{\text{noise}}(\mathbf{x})\right].\tag{6}$$

Therefore, we can substitute the cross-entropy term E∼(1)log p*noise* (xT ) to rewrite the elbo in Eq. (4) and obtain

$$\mathbb{E}_{p_{\text{data}}(\mathbf{x})}\log q(\mathbf{x},T)\geq-\operatorname{KL}\left[p(\mathbf{x},T)\parallel p_{\text{noise}}(\mathbf{x})\right]+\mathbb{E}_{\sim(1)}\log p(\mathbf{x}_{T},T)-K(T)+R(T)-\mathcal{G}(\mathbf{s}_{\boldsymbol{\theta}},T).\tag{7}$$

Before concluding our derivation, it is necessary to introduce an important observation (formal proof in the Appendix), where we show how to combine different terms of Eq. (7) into the negative entropy term Ep*data* (x)log p*data* (x).

Proposition 1. *Given the stochastic dynamics defined in Eq. (1), it holds that*

$$\mathbb{E}_{\sim(1)}\log p(\mathbf{x}_{T},T)-K(T)+R(T)=\mathbb{E}_{p_{\text{data}}(\mathbf{x})}\log p_{\text{data}}(\mathbf{x}).\tag{8}$$

Finally, we can now bound the value of Ep*data* (x)log q(x, T) as

$$\mathbb{E}_{p_{\text{data}}(\mathbf{x})}\log q(\mathbf{x},T)\geq\underbrace{\mathbb{E}_{p_{\text{data}}(\mathbf{x})}\log p_{\text{data}}(\mathbf{x})-\mathcal{G}(\mathbf{s}_{\boldsymbol{\theta}},T)-\operatorname{KL}\left[p(\mathbf{x},T)\parallel p_{\text{noise}}(\mathbf{x})\right]}_{\mathcal{L}_{\text{ELBO}}(\mathbf{s}_{\boldsymbol{\theta}},T)}.\tag{9}$$
Eq. (9) clearly emphasizes the roles of an approximate score function, through the gap term G(·), and of the discrepancy between the noise distribution of the forward process and the initial distribution of the reverse process, through the kl term. The (negative) entropy term Ep*data* (x)log p*data* (x), which is constant w.r.t. T and θ, is the best value achievable by the elbo. Indeed, by rearranging Eq. (9), kl [q(x, T) ∥ p*data* (x)] ≤ G(sθ, T) + kl [p(x, T) ∥ p*noise* (x)]. In the ideal case of perfect score matching, the elbo in Eq. (9) is attained with equality. If, in addition, the initial conditions for the reverse process are ideal, i.e. q(x, 0) = p(x, T), then the results in Anderson (1982) allow us to claim that q(x, T) = p*data* (x).

| Diffusion process | Drift and diffusion | $p(\mathbf{x}_t, t \,\vert\, \mathbf{x}_0) = N(\mathbf{m}, sI)$ | pnoise (x) |
|---|---|---|---|
| Variance Exploding | $\alpha(t)=0$, $g(t)=\sqrt{\frac{\mathrm{d}\sigma^2(t)}{\mathrm{d}t}}$ | $\mathbf{m}=\mathbf{x}_0$, $s=\sigma^2(t)-\sigma^2(0)$ | $N(0,(\sigma^2(T)-\sigma^2(0))I)$ |
| Variance Preserving | $\alpha(t)=-\frac{1}{2}\beta(t)$, $g(t)=\sqrt{\beta(t)}$ | $\mathbf{m}=e^{-\frac{1}{2}\int_0^t\beta(\tau)\mathrm{d}\tau}\mathbf{x}_0$, $s=1-e^{-\int_0^t\beta(\tau)\mathrm{d}\tau}$ | $N(0,I)$ |

Table 1: Two main families of diffusion processes, where $\sigma^2(t)=\sigma^2_{\min}\left(\sigma^2_{\max}/\sigma^2_{\min}\right)^{t}$ and $\beta(t)=\beta_0+(\beta_1-\beta_0)\,t$.
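The closed-form conditionals of Table 1 are what make both training (Eq. (3)) and the fitting of the auxiliary model of § 3 cheap. A small sketch of the two perturbation kernels follows; the default hyperparameters (β0, β1, σmin, σmax) are common choices assumed here for illustration, not values prescribed by the paper.

```python
import numpy as np

def vp_kernel(x0, t, beta0=0.1, beta1=20.0):
    """Variance-preserving conditional p(x_t, t | x_0) of Table 1,
    with beta(t) = beta0 + (beta1 - beta0) * t. Returns (mean, std)."""
    int_beta = beta0 * t + 0.5 * (beta1 - beta0) * t**2   # integral of beta from 0 to t
    mean = np.exp(-0.5 * int_beta) * np.asarray(x0)
    std = np.sqrt(1.0 - np.exp(-int_beta))
    return mean, std

def ve_kernel(x0, t, sigma_min=0.01, sigma_max=50.0):
    """Variance-exploding conditional p(x_t, t | x_0) of Table 1,
    with sigma^2(t) = sigma_min^2 * (sigma_max^2 / sigma_min^2)**t. Returns (mean, std)."""
    sigma2 = lambda s: sigma_min**2 * (sigma_max**2 / sigma_min**2) ** s
    mean = np.asarray(x0)
    std = np.sqrt(sigma2(t) - sigma2(0.0))
    return mean, std
```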
Next, we show the existence of a tradeoff: the kl decreases with T, while the gap increases with T.

## 2.2 The Tradeoff On Diffusion Time
We begin by showing that the kl term in Eq. (9) decreases with the diffusion time T, which induces us to select a large T to maximize the elbo. We consider the two main classes of sdes for the forward diffusion process defined in Eq. (1): sdes whose steady state distribution is the standard multivariate Gaussian, referred to as *Variance Preserving* (VP), and sdes without a stationary distribution, referred to as *Variance Exploding* (VE), which we summarize in Table 1. The standard approach to generate new samples relies on the backward process defined in Eq. (2), and consists in setting p*noise* in agreement with the form of the forward process sde. The following result bounds the discrepancy between the noise distribution p(x, T) and p*noise* .

Lemma 1. *For the classes of sdes considered (Table 1), the discrepancy between* p(x, T) *and* p*noise*(x) *can be bounded as follows.*

For Variance Preserving sdes, it holds that: $\operatorname{KL}\left[p(\mathbf{x},T)\parallel p_{\text{noise}}(\mathbf{x})\right]\leq C_{1}\exp\left(-\int_{0}^{T}\beta(t)\,\mathrm{d}t\right)$.

For Variance Exploding sdes, it holds that: $\operatorname{KL}\left[p(\mathbf{x},T)\parallel p_{\text{noise}}(\mathbf{x})\right]\leq C_{2}\,\frac{1}{\sigma^{2}(T)-\sigma^{2}(0)}$.

Our proof uses results from Villani (2009), the logarithmic Sobolev inequality and the Gronwall inequality (see Appendix for details). The consequence of Lemma 1 is that to maximize the elbo, the diffusion time T should be as large as possible (ideally, T → ∞), such that the kl term vanishes. This result is in line with current practices for training score-based diffusion processes, which argue for sufficiently long diffusion times (De Bortoli et al., 2021). Our analysis, on the other hand, highlights how this term is only one of the two contributions to the elbo.
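The decay predicted by Lemma 1 is easy to check numerically on the 1D toy mixture of § 1, since under an affine VP sde the forward marginal p(x, T) stays a Gaussian mixture in closed form. The sketch below is an illustration under an assumed constant schedule β(t) = β, not part of the paper's experiments.

```python
import numpy as np

def kl_p_T_to_standard_normal(T, beta=1.0, n_mc=200_000, rng=None):
    """Monte Carlo estimate of KL[p(x, T) || N(0, 1)] for the 1D toy mixture
    under a VP sde with constant beta(t) = beta.

    Each mixture component remains Gaussian under the affine dynamics:
    mean_i -> mean_i * exp(-beta*T/2), var_i -> var_i * exp(-beta*T) + (1 - exp(-beta*T)).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    w = np.array([0.3, 0.7])
    mu = np.array([1.0, 3.0]) * np.exp(-0.5 * beta * T)
    var = np.array([0.1, 0.5]) ** 2 * np.exp(-beta * T) + (1.0 - np.exp(-beta * T))

    idx = rng.choice(2, size=n_mc, p=w)
    x = rng.normal(mu[idx], np.sqrt(var[idx]))
    comp = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
    log_p = np.log(comp @ w)
    log_q = -0.5 * (x**2 + np.log(2.0 * np.pi))   # log density of N(0, 1)
    return float(np.mean(log_p - log_q))

# KL[p(x, T) || p_noise] shrinks as T grows, matching the bound in Lemma 1.
for T in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(T, kl_p_T_to_standard_normal(T))
```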
Now, we focus our attention on studying the behavior of the second component, G(·). Before that, we define a few quantities that allow us to write the next important result.

Definition 1. *We define the optimal score* $\hat{\mathbf{s}}_{\boldsymbol{\theta}}$ *for any diffusion time* T*, as the score obtained using parameters that minimize* I(sθ, T). *Similarly, we define the optimal score gap* $\mathcal{G}(\hat{\mathbf{s}}_{\boldsymbol{\theta}}, T)$ *for any diffusion time* T*, as the gap attained when using the optimal score.*

Lemma 2. *The optimal score gap term* $\mathcal{G}(\hat{\mathbf{s}}_{\boldsymbol{\theta}}, T)$ *is a non-decreasing function in* T. *That is, given* $T_2 > T_1$, *and* $\boldsymbol{\theta}_1 = \arg\min_{\boldsymbol{\theta}} I(\mathbf{s}_{\boldsymbol{\theta}}, T_1)$, $\boldsymbol{\theta}_2 = \arg\min_{\boldsymbol{\theta}} I(\mathbf{s}_{\boldsymbol{\theta}}, T_2)$*, then* $\mathcal{G}(\mathbf{s}_{\boldsymbol{\theta}_2}, T_2) \geq \mathcal{G}(\mathbf{s}_{\boldsymbol{\theta}_1}, T_1)$.

The proof (see Appendix) is a direct consequence of the definition of G and the optimality of the score. Note that Lemma 2 does not imply that $\mathcal{G}(\mathbf{s}_{\boldsymbol{\theta}_a}, T_2) \geq \mathcal{G}(\mathbf{s}_{\boldsymbol{\theta}_b}, T_1)$ holds for generic parameters $\boldsymbol{\theta}_a, \boldsymbol{\theta}_b$.
## 2.3 Is There An Optimal Diffusion Time?

While diffusion processes are generally studied for T → ∞, for practical reasons, diffusion times in score-based models have been arbitrarily set to be "sufficiently large" in the literature. Here we formally argue, for the first time, about the existence of an optimal diffusion time, which strikes the right balance between the gap G(·) and the kl terms of the elbo in Eq. (9).

Before proceeding any further, we clarify that our final objective is not to find and use an optimal diffusion time. Instead, our result on the existence of optimal diffusion times (which can be smaller than the ones set by popular heuristics) serves the purpose of motivating the choice of small diffusion times, which however calls for a method to overcome approximation errors.

Proposition 2. *Consider the elbo decomposition in Eq. (9), studied as a function of the diffusion time* T. *There exists at least one optimal diffusion time* $T^{\star}$ *in the interval* $[0, \infty]$ *which maximizes the elbo, that is* $T^{\star} = \arg\max_{T} \mathcal{L}_{\text{ELBO}}(\hat{\mathbf{s}}_{\boldsymbol{\theta}}, T)$. *Additional assumptions on the gap term* G(·) *can be used to guarantee strict finiteness of* $T^{\star}$.
It is trivial to verify that, since the optimal gap term $\mathcal{G}(\hat{\mathbf{s}}_{\boldsymbol{\theta}}, T)$ is a non-decreasing function in T (Lemma 2), we have $\frac{\partial\mathcal{G}}{\partial T}\geq 0$. Then, we study the sign of the kl derivative, which is always negative as shown in the Appendix. Moreover, we know that $\lim_{T\to\infty}\frac{\partial\,\text{KL}}{\partial T}=0$. Consequently, the function $\frac{\partial\mathcal{L}_{\text{ELBO}}}{\partial T}=\frac{\partial\mathcal{G}}{\partial T}+\frac{\partial\,\text{KL}}{\partial T}$ has at least one zero in its domain $\mathbb{R}^{+}$. To guarantee a stricter bounding of $T^{\star}$, we could study asymptotically the growth rates of G and the kl terms for large T. The investigation is technically involved and outside the scope of this paper. Nevertheless, as discussed hereafter, the numerical investigation carried out in this work suggests finiteness of $T^{\star}$.

While the proof for the general case is available in the Appendix, the analytic solution for the optimal diffusion time is elusive, as a full characterization of the gap term is particularly challenging. Additional assumptions would guarantee boundedness of $T^{\star}$.
![5_image_0.png](5_image_0.png)

Figure 2: elbo decomposition, elbo and likelihood for a 1D toy model, as a function of diffusion time T. Tradeoff and optimality numerical results confirm our theory.

Empirically, we use Fig. 2 to illustrate the tradeoff and the optimality arguments through the lens of the same toy example we use in § 1. In the first and third columns, we show the elbo decomposition. We can verify that G(sθ, T) is an increasing function of T, whereas the kl term is a decreasing function of T. Even in the simple case of a toy example, the tension between small and large values of T is clear. In the second and fourth columns, we show the values of the elbo and of the likelihood as a function of T. We then verify the validity of our claims: the elbo is neither maximized by an infinite diffusion time, nor by a "sufficiently large" value. Instead, there exists an optimal diffusion time which, for this example, is smaller than what is typically used in practical implementations, i.e. T = 1.0. In the Appendix, we show that optimizing the elbo to obtain an optimal diffusion time $T^{\star}$ is technically feasible, without resorting to exhaustive grid search. In § 3, we present a new method that admits much smaller diffusion times, and show that the elbo of our approach is at least as good as that of a standard diffusion model configured to use its optimal diffusion time $T^{\star}$.
## 2.4 Relation With Diffusion Process Noise Schedule

We remark that a simple modification of the noise schedule to steer the diffusion process toward a small diffusion time (Kingma et al., 2021; Bao et al., 2022) is not a viable solution. In the Appendix, we discuss how the optimal value of the elbo, in the case of affine sdes, is *invariant* to the choice of the noise schedule.

Indeed, its value depends uniquely on the relative level of corruption of the initial data at the considered final diffusion time T, that is, the *Signal to Noise Ratio*. Naively, we could think that by selecting a twice-as-fast noise schedule, we would be able to obtain the same elbo as the original schedule by diffusing only for half the time. While true, this does not provide any practical benefit in terms of computational complexity. If the noise schedule is faster, the drift terms involved in the reverse process change more rapidly. Consequently, to *simulate* the reverse sde with a numerical integration scheme, smaller step sizes are required to keep the same accuracy as the original noise schedule simulation. The net effect is that while the diffusion time for the continuous time dynamics is smaller, the number of integration steps is larger, with zero net gain. The optimization of the noise schedule can however have important practical effects in terms of stability of the training and variance of the estimations, which we do not tackle in this work (Kingma et al., 2021).
## 2.5 Relation With Literature On Bounds And Goodness Of Score Assumptions

A few other works in the literature attempt to study the convergence properties of diffusion models. In the work of De Bortoli et al. (2021) (Thm. 1), a total variation (TV) bound between the generated and data distribution is obtained in the form $C_1\exp(a_1 T)+C_2\exp(-a_2 T)$, where the constant $C_1$ depends on the maximum error over $[0,T]$ between the true and approximated score, i.e. $\max_{t\in[0,T]}\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x},t)-\nabla\log p(\mathbf{x},t)\|$. In the work of De Bortoli (2022) the requirement is relaxed to $\max_{t\in[0,T]}\frac{\sigma^{2}(t)}{1+\|\mathbf{x}\|}\,\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x},t)-\nabla\log p(\mathbf{x},t)\|$, and the 1-Wasserstein distance between generated and true data is bounded as $C_1+C_2\exp(-a_2 T)+C_3$ (Thm. 1).

Other works consider the more realistic average square norm instead of the infinity norm, which is consistent with standard training of diffusion models. Moreover, Lee et al. (2022a) show how the TV bound can be expressed as a function of $\max_{t\in[0,T]}\mathbb{E}\left[\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t)\|^{2}\right]$ (Thms. 2.2, 3.1, 3.2). Related to our work, Lee et al. (2022a) find that the TV bound is optimized for a diffusion time that depends, among others, on the maximum score error. Finally, the work by Chen et al. (2022) (Thm. 2), which is concurrent to ours, shows that if $\max_{t\in[0,T]}\mathbb{E}\left[\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t)\|^{2}\right]$ is bounded, then the TV distance between true and generated data can be bounded as $C_1\exp(-a_1 T)+\sqrt{\epsilon}\,T$, plus a discretization error.

All prior approaches require assumptions on the maximum score error, which *implicitly* depends on: i) the maximum diffusion time T and ii) the class of parametric score networks considered. Hence, such methods allow for the study of convergence properties, but with the following limitations. It is not clear how the score error behaves as the fitting domain ([0, T]) is increased, for generic classes of parametric functions and generic p*data* . Moreover, it is difficult to link the error assumptions with the actual training loss of diffusion models.

In this work, instead, we follow a more agnostic path, as we make no assumptions about the error behavior. We notice that the optimal gap term is **always** a non-decreasing function of T. First, we question whether current best practices for setting diffusion times are adequate: we find that in realistic implementations, diffusion times are larger than necessary. Second, we introduce a new approach, with provably the same performance as standard diffusion models but lower computational complexity, as highlighted in § 3.
## 3 A New, Practical Method For Decreasing Diffusion Times

The elbo decomposition in Eq. (9) and the bounds in Lemma 1 and Lemma 2 highlight a dilemma. We thus propose a simple method that allows us to achieve **both** a small gap G(sθ, T) and a small discrepancy kl [p(x, T) ∥ p*noise* (x)]. Before that, let us use Fig. 3 to summarize all densities involved and the effects of the various approximations, which will be useful to visualize our proposal.

The data distribution p*data* (x) is transformed into the noise distribution p(x, T) through the forward diffusion process. Ideally, starting from p(x, T) we can recover the data distribution by simulating using the exact score ∇ log p. Using the approximated score sθ and the same initial conditions, the backward process ends up in $q^{(1)}(\mathbf{x}, T)$, whose discrepancy (1) to p*data* (x) is G(sθ, T). However, the distribution p(x, T) is unknown and replaced with an easy distribution p*noise* (x), accounting for an error (a) measured as kl [p(x, T) ∥ p*noise* (x)]. With score and initial distribution approximated, the backward process ends up in $q^{(3)}(\mathbf{x}, T)$, where the discrepancy (3) from p*data* is the sum of the terms G(sθ, T) + kl [p(x, T) ∥ p*noise* ].

![7_image_0.png](7_image_0.png)

Figure 3: Intuitive illustration of the forward and backward diffusion processes. Discrepancies between distributions are illustrated as distances. Color coding discussed in the text.

Multiple bridges across densities. In a nutshell, our method allows us to reduce the gap term by selecting smaller diffusion times and by using a learned auxiliary model to transform the initial density p*noise* (x) into a density νϕ(x), which is as close as possible to p(x, T), thus avoiding the penalty of a large kl term. To implement this, we first *transform* the simple distribution p*noise* into the distribution νϕ(x), whose discrepancy (b) kl [p(x, T) ∥ νϕ(x)] is smaller than (a). Then, starting from the auxiliary model νϕ(x), we use the approximate score sθ to simulate the backward process, reaching $q^{(2)}(\mathbf{x}, T)$. This solution has a discrepancy (2) from the data distribution of G(sθ, T) + kl [p(x, T) ∥ νϕ(x)], which we will quantify later in the section. Intuitively, we introduce two bridges. The first bridge connects the noise distribution p*noise* to an auxiliary distribution νϕ(x) that is as close as possible to that obtained by the forward diffusion process. The second bridge, a standard reverse diffusion process, connects the smooth distribution νϕ(x) to the data distribution. Notably, our approach has important guarantees, which we discuss next.
## 3.1 Auxiliary Model Fitting And Guarantees

We begin by stating the requirements we consider for the density νϕ(x). First, as is the case for p*noise* , it should be easy to generate samples from νϕ(x) in order to initialize the reverse diffusion process. Second, the auxiliary model should allow us to compute the likelihood of the samples generated through the overall generative process, which begins in p*noise* , passes through νϕ(x), and arrives in q(x, T).

The fitting procedure of the auxiliary model is straightforward. First, we recognize that minimizing kl [p(x, T) ∥ νϕ(x)] w.r.t. ϕ is equivalent to maximizing Ep(x,T)[log νϕ(x)], whose negation we can use as a loss function. To obtain the set of optimal parameters ϕ⋆, we require samples from p(x, T), which can be easily obtained even if the density p(x, T) is not available. Indeed, by sampling from p*data* and p(x, T | x0), we obtain an unbiased Monte Carlo estimate of Ep(x,T)[log νϕ(x)], and optimization of the loss can be performed. Note that due to the affine nature of the drift, the conditional distribution p(x, T | x0) is easy to sample from, as shown in Table 1. From a practical point of view, it is important to notice that the fitting of νϕ is independent from the training of the score-matching objective, i.e. the result of I(sθ) does not depend on the shape of the auxiliary distribution νϕ. This observation indicates that the two training procedures can be run concurrently, thus enabling considerable time savings.
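The fitting loop can be sketched in a few lines. The paper uses a dpgmm (for mnist) and Glow (for cifar10) as auxiliary models; purely as an illustrative stand-in, the sketch below fits scikit-learn's variational Dirichlet-process Gaussian mixture on samples of p(x, T). The names, and the choice of library, are assumptions made here.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def fit_auxiliary_model(x0_data, kernel, T, n_components=32, rng=None):
    """Fit nu_phi by maximizing E_{p(x, T)}[log nu_phi(x)] on forward-diffusion samples.

    Samples from p(x, T) are obtained by drawing x_0 from the dataset and then
    x_T ~ p(x, T | x_0) via the closed-form Gaussian kernel of Table 1.
    """
    rng = np.random.default_rng() if rng is None else rng
    x0 = np.asarray(x0_data, dtype=float)
    mean, std = kernel(x0, T)
    x_T = mean + std * rng.standard_normal(x0.shape)

    aux = BayesianGaussianMixture(
        n_components=n_components,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        max_iter=500,
    )
    aux.fit(x_T.reshape(len(x_T), -1))
    return aux

# aux.score_samples(x) evaluates log nu_phi(x); aux.sample(n) initializes the reverse diffusion.
```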
Next, we show that the first bridge in our model reduces the kl term, even for small diffusion times.

Proposition 3. *Assume that* pnoise(x) *is in the family spanned by* νϕ*, i.e. there exists* $\tilde{\boldsymbol{\phi}}$ *such that* $\nu_{\tilde{\boldsymbol{\phi}}} = p_{\text{noise}}$*. Then we have that*

$$\operatorname{KL}\left[p(\mathbf{x},T)\parallel\nu_{\boldsymbol{\phi}^{\star}}(\mathbf{x})\right]\leq\operatorname{KL}\left[p(\mathbf{x},T)\parallel\nu_{\tilde{\boldsymbol{\phi}}}(\mathbf{x})\right]=\operatorname{KL}\left[p(\mathbf{x},T)\parallel p_{\text{noise}}(\mathbf{x})\right].\tag{10}$$

Since we introduce the auxiliary distribution ν, we shall define a new elbo for our method:

$$\mathcal{L}^{\boldsymbol{\phi}}_{\text{ELBO}}(\mathbf{s}_{\boldsymbol{\theta}},T)=\mathbb{E}_{p_{\text{data}}(\mathbf{x})}\log p_{\text{data}}(\mathbf{x})-\mathcal{G}(\mathbf{s}_{\boldsymbol{\theta}},T)-\operatorname{KL}\left[p(\mathbf{x},T)\parallel\nu_{\boldsymbol{\phi}}(\mathbf{x})\right].\tag{11}$$
Recalling that $\hat{\mathbf{s}}_{\boldsymbol{\theta}}$ is the optimal score for a generic time T, Proposition 3 allows us to claim that $\mathcal{L}^{\boldsymbol{\phi}^{\star}}_{\text{ELBO}}(\hat{\mathbf{s}}_{\boldsymbol{\theta}},T)\geq\mathcal{L}_{\text{ELBO}}(\hat{\mathbf{s}}_{\boldsymbol{\theta}},T)$. Then, we can state the following important result:

Proposition 4. *Given the existence of* $T^{\star}$*, defined as the diffusion time such that the elbo is maximized (Proposition 2), there exists at least one diffusion time* $\tau\leq T^{\star}$*, such that* $\mathcal{L}^{\boldsymbol{\phi}^{\star}}_{\text{ELBO}}(\hat{\mathbf{s}}_{\boldsymbol{\theta}},\tau)\geq\mathcal{L}_{\text{ELBO}}(\hat{\mathbf{s}}_{\boldsymbol{\theta}},T^{\star})$.

Proposition 4 has two interpretations. On the one hand, given two score models optimally trained for their respective diffusion times, our approach guarantees an elbo that is at least as good as that of a standard diffusion model configured with its optimal time $T^{\star}$. Our method achieves this with a smaller diffusion time τ, which offers sampling efficiency and generation quality. On the other hand, if we settle for an equivalent elbo for the standard diffusion model and our approach, with our method we can afford a sub-optimal score model, that requires a smaller computational budget to be trained, while guaranteeing shorter sampling times. We elaborate on this interpretation in § 4, where our approach obtains substantial savings in terms of training iterations.

A final note is in order. The choice of the auxiliary model depends on the selected diffusion time. The larger the T, the "simpler" the auxiliary model can be. Indeed, the noise distribution p(x, T) approaches p*noise* , so that a simple auxiliary model is sufficient to transform p*noise* into a distribution νϕ. Instead, for a small T, the distribution p(x, T) is closer to the data distribution. Then, the auxiliary model requires high flexibility and capacity. In § 4, we substantiate this discussion with numerical examples and experiments on real data.
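Putting the two bridges together, generation proceeds by sampling the initial condition from the fitted νϕ and then running the standard reverse diffusion of Eq. (2) for the (small) time τ. The sketch below assumes the scikit-learn-style auxiliary model of the previous snippet and illustrative function names; it is not the paper's implementation.

```python
import numpy as np

def generate(aux, score, drift, diffusion, tau, n_samples, n_steps=500, rng=None):
    """Two-bridge generation: nu_phi supplies the state at diffusion time tau,
    and a reverse diffusion (Eq. (2)) maps it back toward p_data."""
    rng = np.random.default_rng() if rng is None else rng
    x, _ = aux.sample(n_samples)              # first bridge: p_noise -> nu_phi ~ p(x, tau)
    x = np.asarray(x, dtype=float)
    dt = tau / n_steps
    for k in range(n_steps):                  # second bridge: reverse diffusion over [0, tau]
        t_prime = tau - k * dt
        noise = rng.standard_normal(x.shape)
        drift_rev = -drift(x, t_prime) + diffusion(t_prime) ** 2 * score(x, t_prime)
        x = x + drift_rev * dt + diffusion(t_prime) * np.sqrt(dt) * noise
    return x
```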
## 3.2 Comparison With Schrödinger Bridges

In this Section, we briefly compare our method with the Schrödinger bridges approach (Chen et al., 2021b;a; De Bortoli et al., 2021), which allows one to move from an arbitrary p*noise* to p*data* in any finite amount of time T. This is achieved by simulating the sde

$$\mathrm{d}\mathbf{x}_{t}=\left[-\mathbf{f}(\mathbf{x}_{t},t^{\prime})+g^{2}(t^{\prime})\,\nabla\log\hat{\psi}(\mathbf{x}_{t},t^{\prime})\right]\mathrm{d}t+g(t^{\prime})\,\mathrm{d}\mathbf{w}_{t},\quad\mathbf{x}_{0}\sim p_{\text{noise}},\tag{12}$$

where $\psi,\hat{\psi}$ solve the Partial Differential Equation (pde) system

$$\begin{cases}\frac{\partial\psi(\mathbf{x},t)}{\partial t}=-\nabla^{\top}\psi(\mathbf{x},t)\,\mathbf{f}(\mathbf{x},t)-\frac{g^{2}(t)}{2}\Delta\psi(\mathbf{x},t),\\ \frac{\partial\hat{\psi}(\mathbf{x},t)}{\partial t}=-\nabla^{\top}\!\left(\hat{\psi}(\mathbf{x},t)\,\mathbf{f}(\mathbf{x},t)\right)+\frac{g^{2}(t)}{2}\Delta\hat{\psi}(\mathbf{x},t),\end{cases}\qquad\begin{aligned}&\psi(\mathbf{x},0)\,\hat{\psi}(\mathbf{x},0)=p_{\text{data}}(\mathbf{x}),\\ &\psi(\mathbf{x},T)\,\hat{\psi}(\mathbf{x},T)=p_{\text{noise}}(\mathbf{x}).\end{aligned}\tag{13}$$

This approach presents drawbacks compared to classical diffusion models. First, the functions $\psi,\hat{\psi}$ are not known, and their parametric approximation is costly and complex. Second, it is much harder to obtain quantitative bounds between true and generated data as a function of the quality of such approximations.

The $\psi,\hat{\psi}$ estimation procedure simplifies considerably in the particular case where p*noise* (x) = p(x, T), for arbitrary T. The solution of Eq. (13) is indeed $\psi(\mathbf{x},t)=1$, $\hat{\psi}(\mathbf{x},t)=p(\mathbf{x},t)$. The first pde of the system is satisfied when ψ is a constant. The second pde is the Fokker-Planck equation, satisfied by $\hat{\psi}(\mathbf{x},t)=p(\mathbf{x},t)$. Boundary conditions are also satisfied. In this scenario, a sensible objective is score matching, as getting ∇ log ψˆ equal to the true score ∇ log p allows perfect generation.

Unfortunately, it is difficult to generate samples from p(x, T), the starting condition of Eq. (12). A trivial solution is to select T → ∞ in order to have p*noise* as the simple and analytically known steady state distribution of Eq. (1). This corresponds to the classical diffusion models approach, which we discussed in the previous Sections. An alternative solution is to keep T finite and *cover* the first part of the bridge from p*noise* to p(x, T) with an auxiliary model. This provides a different interpretation of our method, which allows for smaller diffusion times while keeping good generative quality.
## 3.3 An Extension For Density Estimation

Diffusion models can also be used for density estimation by transforming the diffusion sde into an equivalent Ordinary Differential Equation (ode) whose marginal distribution p(x, t) at each time instant coincides with that of the corresponding sde (Song et al., 2021c). The exact equivalent ode requires the score ∇ log p(xt, t), which in practice is replaced by the score model sθ, leading to the following ode

$$\mathrm{d}\mathbf{x}_{t}=\left(\mathbf{f}(\mathbf{x}_{t},t)-\frac{1}{2}g(t)^{2}\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)\right)\mathrm{d}t\quad\text{with}\quad\mathbf{x}_{0}\sim p_{\text{data}},\tag{14}$$

whose time-varying probability density is indicated with $\widetilde{p}(\mathbf{x},t)$. Note that the density $\widetilde{p}(\mathbf{x},t)$ is in general not equal to the density p(x, t) associated with Eq. (1), with the exception of perfect score matching (Song et al., 2021b). The reverse time process is modeled as a Continuous Normalizing Flow (cnf) (Chen et al., 2018; Grathwohl et al., 2019) initialized with distribution p*noise* (x); then, the likelihood of a given value x0 is

$$\log\widetilde{p}(\mathbf{x}_{0})=\log p_{\text{noise}}(\mathbf{x}_{T})+\int\limits_{t=0}^{T}\nabla\cdot\left(\mathbf{f}(\mathbf{x}_{t},t)-\frac{1}{2}g(t)^{2}\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)\right)\mathrm{d}t.\tag{15}$$

To use our proposed model for density estimation, we also need to take into account the ode dynamics. We focus again on the term log p*noise* (xT ) to improve the expected log likelihood. For consistency, our auxiliary density νϕ should now maximize E∼(14)log νϕ(xT ) instead of E∼(1)log νϕ(xT ). However, the simulation of Eq. (14) requires access to sθ which, in the endeavor of density estimation, is available only once the score model has been trained. Consequently, optimization w.r.t. ϕ can only be performed sequentially, whereas for generative purposes it could be done concurrently. While the sequential version is expected to perform better, experimental evidence indicates that improvements are marginal, justifying the adoption of the more efficient concurrent version.
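For intuition, the likelihood computation of Eq. (15) can be written out in one dimension, where the divergence reduces to a scalar derivative (estimated below with central finite differences; high-dimensional implementations typically use Hutchinson's trace estimator instead). This is an illustrative sketch with assumed function names, not the evaluation code used in the paper.

```python
import numpy as np

def ode_log_likelihood_1d(x0, score, drift, diffusion, log_p_noise, T=1.0, n_steps=1000, eps=1e-4):
    """1D sketch of Eq. (15): integrate the probability-flow ode of Eq. (14) forward
    with Euler steps while accumulating the divergence of its vector field."""
    F = lambda x, t: drift(x, t) - 0.5 * diffusion(t) ** 2 * score(x, t)
    x = float(x0)
    dt = T / n_steps
    div_integral = 0.0
    for k in range(n_steps):
        t = k * dt
        div_integral += (F(x + eps, t) - F(x - eps, t)) / (2.0 * eps) * dt   # d/dx of the ode field
        x = x + F(x, t) * dt
    return log_p_noise(x) + div_integral      # log p_noise(x_T) plus the accumulated divergence
```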
## 4 Experiments

We now present numerical results on the mnist and cifar10 datasets, to support our claims in §§ 2 and 3. We follow a standard experimental setup (Song et al., 2021a;b; Huang et al., 2021; Kingma et al., 2021): we use a standard U-Net architecture with time embeddings (Ho et al., 2020) and we report the log-likelihood in terms of bits per dimension (bpd) and the Fréchet Inception Distance (fid) scores (only for cifar10).

Although the fid score is a standard metric for ranking generative models, caution should be used against over-interpreting fid improvements (Kynkäänniemi et al., 2022). Similarly, while the theoretical properties of the models we consider are obtained through the lens of elbo maximization, the log-likelihood measured in terms of bpd should be considered with care (Theis et al., 2016). Finally, we also report the number of neural function evaluations (nfe) for computing the relevant metrics. We compare our method to the standard score-based model (Song et al., 2021c). The full description of the experimental setup is presented in the Appendix.

On the existence of $T^{\star}$. We look for further empirical evidence of the existence of a $T^{\star} < \infty$, as stated in Proposition 2. For the moment, we shall focus on the baseline model (Song et al., 2021c), where no auxiliary models are introduced. Results are reported in Table 2. For mnist, we observe how times T = 0.6 and T = 1.0 have comparable performance in terms of bpd, implying that any T ≥ 1.0 is at best unnecessary and generally detrimental. Similarly, for cifar10, it is possible to notice that the best value of bpd is achieved for T = 0.6, outperforming all other values.
Table 2: Optimal T in (Song et al., 2021c).

| Dataset | Time T | bpd (↓) |
|---------|--------|---------|
| mnist   | 1.0    | 1.16    |
| mnist   | 0.6    | 1.16    |
| mnist   | 0.4    | 1.25    |
| mnist   | 0.2    | 1.75    |
| cifar10 | 1.0    | 3.09    |
| cifar10 | 0.6    | 3.07    |
| cifar10 | 0.4    | 3.09    |
| cifar10 | 0.2    | 3.38    |
Our auxiliary models. In § 3 we introduced an auxiliary model to minimize the mismatch between the initial distributions of the backward process. We now specify the family of parametric distributions we have considered. Clearly, the choice of an auxiliary model also depends on the data distribution, in addition to the choice of diffusion time T.

For our experiments, we consider two auxiliary models: (i) a Dirichlet process Gaussian mixture model (dpgmm) (Rasmussen, 1999; Görür & Edward Rasmussen, 2010) for mnist and (ii) Glow (Kingma & Dhariwal, 2018), a flexible normalizing flow, for cifar10. Both of them satisfy our requirements: they allow exact likelihood computation and they are equipped with a simple sampling procedure. As discussed in § 3, auxiliary model complexity should be adjusted as a function of T. This is confirmed experimentally in Fig. 4, where we use the number of mixture components of the dpgmm as a proxy to measure the complexity of the auxiliary model.

![10_image_0.png](10_image_0.png)

Figure 4: Complexity of the auxiliary model as a function of diffusion time (reported median and 95% quantiles over 4 random seeds).
Reducing T with auxiliary models. We now show how it is possible to obtain comparable (or better) performance than the baseline model for a wide range of diffusion times T. For mnist, setting τ = 0.4 produces good performance both in terms of bpd (Table 3) and visual sample quality (Fig. 5). We also consider the sequential extension (S) to compute the likelihood, but observe only marginal improvements compared to a concurrent implementation. Similarly, for the cifar10 dataset, in Table 4 we observe how our method achieves better bpd than the baseline diffusion for T = 1. Moreover, our approach outperforms the baselines for the corresponding diffusion time in terms of fid score (additional non-curated samples in the Appendix).

In Figure 10 we provide a non-curated subset of qualitative results, showing that our method for a diffusion time equal to 0.4 still produces appealing images, while the vanilla approach fails. We finally notice how the proposed method has comparable performance w.r.t. several other competitors, while stressing that many solutions orthogonal to ours (like diffusion in latent space (Vahdat et al., 2021), or the selection of higher-order schemes (Jolicoeur-Martineau et al., 2021)) can actually be combined with our methodology.

Training and sampling efficiency. In Fig. 7, the horizontal line corresponds to the best performance of a fully trained baseline model for T = 1.0 (Song et al., 2021c). To achieve the same performance as the baseline, variants of our method require fewer iterations, which translates into training efficiency. For the sake of fairness, the total training cost of our method should account for the auxiliary model training, which however can be done concurrently with the diffusion process. As an illustration for cifar10, using four GPUs, the baseline model requires ∼ 6.4 days of training. With our method we trained the auxiliary and diffusion models for ∼ 2.3 and 2 days respectively, leading to a total training time of max{2.3, 2} = 2.3 days. Similar training curves can be obtained for the mnist dataset, where the training time for dpgmms is negligible.
Figure 7: Training curves (bpd vs. iterations, in thousands) of score models on cifar10 for different diffusion times T, comparing ScoreSDE with Our (T = 0.2, 0.4, 0.6), recorded over the span of 1.3 million iterations.
Table 3: Experiment results on mnist. For our method, (S) refers to the extension in § 3.3.

| Model              | nfe (↓) (ode) | bpd (↓)         |
|--------------------|---------------|-----------------|
| ScoreSDE           | 300           | 1.16            |
| ScoreSDE (T = 0.6) | 258           | 1.16            |
| Our (T = 0.6)      | 258           | 1.16 / 1.14 (S) |
| ScoreSDE (T = 0.4) | 235           | 1.25            |
| Our (T = 0.4)      | 235           | 1.17 / 1.16 (S) |
| ScoreSDE (T = 0.2) | 191           | 1.75            |
| Our (T = 0.2)      | 191           | 1.33 / 1.31 (S) |

Figure 5: Visualization of some samples.

![10_image_1.png](10_image_1.png)
| Model                                            | fid (↓) | bpd (↓)   | nfe (↓) (sde) | nfe (↓) (ode) |
|--------------------------------------------------|---------|-----------|---------------|---------------|
| ScoreSDE (Song et al., 2021c)                    | 3.64    | 3.09      | 1000          | 221           |
| ScoreSDE (T = 0.6)                               | 5.74    | 3.07      | 600           | 200           |
| ScoreSDE (T = 0.4)                               | 24.91   | 3.09      | 400           | 187           |
| ScoreSDE (T = 0.2)                               | 339.72  | 3.38      | 200           | 176           |
| Our (T = 0.6)                                    | 3.72    | 3.07      | 600           | 200           |
| Our (T = 0.4)                                    | 5.44    | 3.06      | 400           | 187           |
| Our (T = 0.2)                                    | 14.38   | 3.06      | 200           | 176           |
| ARDM (Hoogeboom et al., 2022)                    | −       | 2.69      | 3072          |               |
| VDM (Kingma et al., 2021)                        | 4.0     | 2.49      | 1000          |               |
| D3PMs (Austin et al., 2021)                      | 7.34    | 3.43      | 1000          |               |
| DDPM (Ho et al., 2020)                           | 3.21    | 3.75      | 1000          |               |
| Gotta Go Fast (Jolicoeur-Martineau et al., 2021) | 2.44    | −         | 180           |               |
| LSGM (Vahdat et al., 2021)                       | 2.10    | 2.87      | 120/138       |               |
| ARDM-P (Hoogeboom et al., 2022)                  | −       | 2.68/2.74 | 200/50        |               |

Table 4: Experimental results on cifar10, including other relevant baselines and sampling efficiency enhancements from the literature.

![11_image_0.png](11_image_0.png)

From left to right: real data, ScoreSDE (T = 0.4), Our (T = 0.4).
404 |
+
|
405 |
+
Sampling speed benefits are evident from Tables 3 and 4. When considering the sde version of the methods, the number of sampling steps decreases linearly with T, in accordance with theory (Kloeden & Platen, 1995), while retaining good bpd and fid scores. Similarly, although not linearly, the number of steps of the ode samplers can be reduced by using a smaller diffusion time T. Finally, we test the proposed methodology on the more challenging celeba 64x64 dataset. In this case, we use a variance exploding diffusion and we again consider Glow as the auxiliary model. The results, presented in Table 5, report the log-likelihood performance of different methods (qualitative results are reported in the Appendix). At the two extremes of the complexity spectrum we have the original diffusion (VE, T = 1.0), with the best bpd and the highest complexity, and Glow, which provides a much simpler scheme with worse performance. In the table we report the bpd and nfe metrics for smaller diffusion times, in three different configurations: naively neglecting the mismatch (ScoreSDE), using the auxiliary model (Our), or combining the auxiliary model with a diffusion model pretrained for T = 1.0. Interestingly, we found that the best results are obtained with the latter combination. In summary: by accepting a small degradation in terms of bpd, we can reduce the computational cost by almost one order of magnitude. We believe it would be interesting to study more performant auxiliary models to further improve our method on challenging datasets.

| Model | bpd (↓) (ode) | nfe (↓) |
|---|---|---|
| ScoreSDE (Song et al., 2021c) | 2.13 | 68 |
| ScoreSDE (T = 0.5) | 8.06 | 15 |
| ScoreSDE (T = 0.2) | 12.1 | 9 |
| Our (T = 0.5) | 2.48 | 16 |
| Our (T = 0.2) | 2.58 | 9 |
| Our with pretrain diffusion (T = 0.5) | 2.36 | 16 |
| Our with pretrain diffusion (T = 0.2) | 2.32 | 9 |
| Glow (Kingma & Dhariwal, 2018) | 3.74 | 1 |

Table 5: Experimental results on celeba 64.

## 5 Conclusion

Diffusion-based generative models emerged as an extremely competitive approach for a wide range of application domains. In practice, however, these models are resource hungry, both for training and for sampling new data points. In this work, we have introduced the key idea of considering the diffusion time T as a free variable which should be chosen appropriately. We have shown that the choice of T introduces a trade-off, for which an optimal "sweet spot" exists. In standard diffusion-based models, smaller values of T are preferable for efficiency reasons, but a sufficiently large T is required to reduce the approximation errors of the forward dynamics. Thus, we devised a novel method that allows for an arbitrary selection of diffusion times, where even small values are allowed. Our method closes the gap between practical and ideal diffusion dynamics using an auxiliary model. Our empirical validation indicated that the performance of our approach is comparable to, and often better than, standard diffusion models, while being efficient both in training and sampling.

Limitations. In this work, the experimental protocol has been defined to corroborate our methodological contribution, not to achieve state-of-the-art performance. A more extensive empirical evaluation of model architectures, sampling methods, and additional datasets could benefit practitioners in selecting an appropriate configuration of our method. An additional limitation is the descriptive, rather than prescriptive, nature of Proposition 2: we know that T⋆ exists, but an explicit expression to identify the optimal diffusion time is out of reach.
## Broader Impact Statement

We inherit the same ethical concerns as all generative models, since they could be used to produce fake or misleading information to the public.

## References

Brian DO Anderson. Reverse-Time Diffusion Equation Models. *Stochastic Processes and their Applications*, 12(3):313–326, 1982.

Jacob Austin, Daniel D. Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van den Berg. Structured denoising diffusion models in discrete state-spaces. In *Advances in Neural Information Processing Systems*, volume 34, pp. 17981–17993. Curran Associates, Inc., 2021.

Fan Bao, Chongxuan Li, Jun Zhu, and Bo Zhang. Analytic-DPM: an analytic estimate of the optimal reverse variance in diffusion probabilistic models. *arXiv preprint arXiv:2201.06503*, 2022.

Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural Ordinary Differential Equations. In *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018.

Sitan Chen, Sinho Chewi, Jerry Li, Yuanzhi Li, Adil Salim, and Anru R Zhang. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. *arXiv preprint arXiv:2209.11215*, 2022.

Tianrong Chen, Guan-Horng Liu, and Evangelos A Theodorou. Likelihood training of Schrödinger bridge using forward-backward SDEs theory, 2021a.

Yongxin Chen, Tryphon T Georgiou, and Michele Pavon. Stochastic control liaisons: Richard Sinkhorn meets Gaspard Monge on a Schrödinger bridge. *SIAM Review*, 63(2):249–313, 2021b.

Valentin De Bortoli. Convergence of denoising diffusion models under the manifold hypothesis. *arXiv preprint arXiv:2208.05314*, 2022.

Valentin De Bortoli, James Thornton, Jeremy Heng, and Arnaud Doucet. Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling. In *Advances in Neural Information Processing Systems*, volume 34, pp. 17695–17709. Curran Associates, Inc., 2021.

Prafulla Dhariwal and Alexander Nichol. Diffusion Models Beat GANs on Image Synthesis. In *Advances in Neural Information Processing Systems*, volume 34, pp. 8780–8794. Curran Associates, Inc., 2021.

Tim Dockhorn, Arash Vahdat, and Karsten Kreis. Score-Based Generative Modeling with Critically-Damped Langevin Diffusion. In *International Conference on Learning Representations*, 2022.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Nets. In *Advances in Neural Information Processing Systems*, volume 27. Curran Associates, Inc., 2014.

Dilan Görür and Carl Edward Rasmussen. Dirichlet Process Gaussian Mixture Models: Choice of the Base Distribution. *Journal of Computer Science and Technology*, 25(4):653–664, 2010.

Will Grathwohl, Ricky T. Q. Chen, Jesse Bettencourt, and David Duvenaud. Scalable Reversible Generative Models with Free-form Continuous Dynamics. In *International Conference on Learning Representations*, 2019.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models. In *Advances in Neural Information Processing Systems*, volume 33, pp. 6840–6851. Curran Associates, Inc., 2020.

Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, and Tim Salimans. Autoregressive Diffusion Models. In *International Conference on Learning Representations*, 2022.

Chin-Wei Huang, Jae Hyun Lim, and Aaron C Courville. A Variational Perspective on Diffusion-Based Generative Models and Score Matching. In *Advances in Neural Information Processing Systems*, volume 34, pp. 22863–22876. Curran Associates, Inc., 2021.

Alexia Jolicoeur-Martineau, Ke Li, Rémi Piché-Taillefer, Tal Kachman, and Ioannis Mitliagkas. Gotta Go Fast When Generating Data with Score-Based Models. *CoRR*, abs/2105.14080, 2021.

Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. *arXiv preprint arXiv:2206.00364*, 2022.

Diederik Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational Diffusion Models. In *Advances in Neural Information Processing Systems*, volume 34, pp. 21696–21707. Curran Associates, Inc., 2021.

Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In *International Conference on Learning Representations*, 2014.

Durk P Kingma and Prafulla Dhariwal. Glow: Generative Flow with Invertible 1x1 Convolutions. In *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018.

Durk P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved Variational Inference with Inverse Autoregressive Flow. In *Advances in Neural Information Processing Systems 29*, pp. 4743–4751. Curran Associates, Inc., 2016.

Peter E Kloeden and Eckhard Platen. Numerical Solution of Stochastic Differential Equations. 1995.

Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. DiffWave: A Versatile Diffusion Model for Audio Synthesis. In *International Conference on Learning Representations*, 2021.

Tuomas Kynkäänniemi, Tero Karras, Miika Aittala, Timo Aila, and Jaakko Lehtinen. The Role of ImageNet Classes in Fréchet Inception Distance. *CoRR*, abs/2203.06026, 2022.

Holden Lee, Jianfeng Lu, and Yixin Tan. Convergence for score-based generative modeling with polynomial complexity. *arXiv preprint arXiv:2206.06227*, 2022a.

Sang-gil Lee, Heeseung Kim, Chaehun Shin, Xu Tan, Chang Liu, Qi Meng, Tao Qin, Wei Chen, Sungroh Yoon, and Tie-Yan Liu. PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Dependent Adaptive Prior. In *International Conference on Learning Representations*, 2022b.

Alexander Quinn Nichol and Prafulla Dhariwal. Improved Denoising Diffusion Probabilistic Models. In *International Conference on Machine Learning*, volume 139, pp. 8162–8171. PMLR, 2021.

Carl Rasmussen. The Infinite Gaussian Mixture Model. In S. Solla, T. Leen, and K. Müller (eds.), *Advances in Neural Information Processing Systems*, volume 12. MIT Press, 1999.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2022.

Tim Salimans and Jonathan Ho. Progressive Distillation for Fast Sampling of Diffusion Models. In *International Conference on Learning Representations*, 2022.

Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In *International Conference on Machine Learning*, pp. 2256–2265. PMLR, 2015.

Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising Diffusion Implicit Models. In *International Conference on Learning Representations*, 2021a.

Yang Song and Stefano Ermon. Generative Modeling by Estimating Gradients of the Data Distribution. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019.

Yang Song, Conor Durkan, Iain Murray, and Stefano Ermon. Maximum Likelihood Training of Score-Based Diffusion Models. In *Advances in Neural Information Processing Systems*, volume 34, pp. 1415–1428. Curran Associates, Inc., 2021b.

Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-Based Generative Modeling through Stochastic Differential Equations. In *International Conference on Learning Representations*, 2021c.

Simo Särkkä and Arno Solin. *Applied Stochastic Differential Equations*. Institute of Mathematical Statistics Textbooks. Cambridge University Press, 2019.

Yusuke Tashiro, Jiaming Song, Yang Song, and Stefano Ermon. CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation. In *Advances in Neural Information Processing Systems*, volume 34, pp. 24804–24816. Curran Associates, Inc., 2021.

Lucas Theis, Aäron van den Oord, and Matthias Bethge. A Note on the Evaluation of Generative Models. In Yoshua Bengio and Yann LeCun (eds.), *International Conference on Learning Representations*, 2016.

Ba-Hien Tran, Simone Rossi, Dimitrios Milios, Pietro Michiardi, Edwin V Bonilla, and Maurizio Filippone. Model selection for Bayesian autoencoders. In *Advances in Neural Information Processing Systems*, volume 34, pp. 19730–19742. Curran Associates, Inc., 2021.

Arash Vahdat, Karsten Kreis, and Jan Kautz. Score-based Generative Modeling in Latent Space. In *Advances in Neural Information Processing Systems*, volume 34, pp. 11287–11302. Curran Associates, Inc., 2021.

Cédric Villani. *Optimal transport: old and new*, volume 338. Springer, 2009.

Daniel Watson, Jonathan Ho, Mohammad Norouzi, and William Chan. Learning to Efficiently Sample from Diffusion Probabilistic Models. *CoRR*, abs/2106.03802, 2021.

Zhisheng Xiao, Karsten Kreis, and Arash Vahdat. Tackling the Generative Learning Trilemma with Denoising Diffusion GANs. In *International Conference on Learning Representations*, 2022.

Huangjie Zheng, Pengcheng He, Weizhu Chen, and Mingyuan Zhou. Truncated diffusion probabilistic models. *CoRR*, abs/2202.09671, 2022.

## A Generic Definitions And Assumptions

Our work builds upon the work in Song et al. (2021b), which should be considered as a basis for the developments hereafter. In this supplementary material we use the following shortened notation for a generic ω > 0:

$$\mathcal{N}_{\omega}(\mathbf{x})\ \stackrel{\mathrm{def}}{=}\ \mathcal{N}(\mathbf{x};\mathbf{0},\omega\boldsymbol{I}).\tag{16}$$

It is useful to notice that $\nabla\log\mathcal{N}_{\omega}(\mathbf{x})=-\frac{1}{\omega}\mathbf{x}$.

For an arbitrary probability density p(x) we define the convolution (∗ operator) with Nω using the notation

$$p_{\omega}(\mathbf{x})=p(\mathbf{x})*\mathcal{N}_{\omega}(\mathbf{x}).\tag{17}$$

Equivalently, $p_{\omega}(\mathbf{x})=\exp\!\left(\frac{\omega}{2}\Delta\right)p(\mathbf{x})$, and consequently $\frac{\mathrm{d}p_{\omega}(\mathbf{x})}{\mathrm{d}\omega}=\frac{1}{2}\Delta p_{\omega}(\mathbf{x})$, where ∆ = ∇⊤∇. Notice that by considering the Dirac delta function δ(x), we have the equality δω(x) = Nω(x).

In the following derivations, we make use of the Stam–Gross logarithmic Sobolev inequality (Villani, 2009, p. 562, Example 21.3):

$$\operatorname{KL}\left[p(\mathbf{x})\parallel\mathcal{N}_{\omega}(\mathbf{x})\right]=\int p(\mathbf{x})\log\left({\frac{p(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}}\right)\mathrm{d}\mathbf{x}\leq{\frac{\omega}{2}}\int\left\|\nabla\left(\log{\frac{p(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}}\right)\right\|^{2}p(\mathbf{x})\,\mathrm{d}\mathbf{x}.\tag{18}$$

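The two facts above are easy to check numerically. The sketch below is our own illustration (not part of the paper): it verifies that ∇ log Nω(x) = −x/ω and evaluates both sides of the log-Sobolev inequality (18) for a one-dimensional Gaussian p with arbitrarily chosen mean and standard deviation.

```python
import numpy as np

# 1-D numerical check of the facts above: grad log N_omega(x) = -x / omega,
# and the log-Sobolev inequality (18).  mu, s, omega are illustrative values.
mu, s, omega = 1.3, 0.7, 2.0
x = np.linspace(-20.0, 20.0, 200_001)

log_p = -0.5 * ((x - mu) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))      # log p(x), p = N(mu, s^2)
log_n = -0.5 * x**2 / omega - 0.5 * np.log(2 * np.pi * omega)            # log N_omega(x)
p = np.exp(log_p)

grad_log_n = np.gradient(log_n, x)
assert np.allclose(grad_log_n[1:-1], (-x / omega)[1:-1], atol=1e-6)      # grad log N_omega = -x/omega

kl = np.trapz(p * (log_p - log_n), x)                                    # KL[p || N_omega]
grad_log_ratio = -(x - mu) / s**2 + x / omega                            # grad log (p / N_omega)
fisher = np.trapz(p * grad_log_ratio**2, x)                              # relative Fisher information
print(f"KL = {kl:.4f}  <=  (omega/2) * I = {0.5 * omega * fisher:.4f}")
```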
## B Deriving Equation (4) From Huang et al. (2021)

We start with Eq. (25) of Huang et al. (2021) which, in our notation, reads

$$\log q(\mathbf{x},T)\geq\mathbb{E}\left[\log p_{\text{noise}}(\mathbf{x}_{T})\mid\mathbf{x}_{0}=\mathbf{x}\right]-\int_{0}^{T}\mathbb{E}\left[\frac{1}{2}g^{2}(t)\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})\|^{2}+\nabla^{\top}\left(g^{2}(t)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})-\mathbf{f}(\mathbf{x}_{t},t)\right)\,\Big|\,\mathbf{x}_{0}=\mathbf{x}\right]\mathrm{d}t.$$

The first step is to take the expected value w.r.t. x0 ∼ p_data on both sides of the above inequality:

$$\mathbb{E}_{p_{\text{data}}}\left[\log q(\mathbf{x},T)\right]\geq\mathbb{E}\left[\log p_{\text{noise}}(\mathbf{x}_{T})\right]-\int_{0}^{T}\mathbb{E}\left[\frac{1}{2}g^{2}(t)\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})\|^{2}+\nabla^{\top}\left(g^{2}(t)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})-\mathbf{f}(\mathbf{x}_{t},t)\right)\right]\mathrm{d}t.\tag{19}$$

We focus on rewriting the term

$$\begin{aligned}
\int_{0}^{T}\mathbb{E}\left[\nabla^{\top}\left(g^{2}(t)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})-\mathbf{f}(\mathbf{x}_{t},t)\right)\right]\mathrm{d}t
&=\int_{0}^{T}\!\!\int p(\mathbf{x},t|\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\nabla^{\top}\!\left(g^{2}(t)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x})-\mathbf{f}(\mathbf{x},t)\right)\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&=-\int_{0}^{T}\!\!\int \nabla^{\top}\!\left(p(\mathbf{x},t|\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\right)\left(g^{2}(t)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x})-\mathbf{f}(\mathbf{x},t)\right)\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&=-\int_{0}^{T}\!\!\int \nabla^{\top}\!\left(\log p(\mathbf{x},t|\mathbf{x}_{0})+\log p_{\text{data}}(\mathbf{x}_{0})\right)\left(g^{2}(t)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x})-\mathbf{f}(\mathbf{x},t)\right)p(\mathbf{x},t|\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&=-\int_{0}^{T}\!\!\int \nabla^{\top}\!\left(\log p(\mathbf{x},t|\mathbf{x}_{0})\right)\left(g^{2}(t)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x})-\mathbf{f}(\mathbf{x},t)\right)p(\mathbf{x},t|\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&=-\int_{0}^{T}\mathbb{E}\left[\nabla^{\top}\!\left(\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\right)\left(g^{2}(t)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})-\mathbf{f}(\mathbf{x}_{t},t)\right)\right]\mathrm{d}t.
\end{aligned}$$

Consequently, we can rewrite the r.h.s. of Equation (19) as

$$\begin{aligned}
&\mathbb{E}\left[\log p_{\text{noise}}(\mathbf{x}_{T})\right]-\int_{0}^{T}\mathbb{E}\left[\frac{1}{2}g^{2}(t)\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})\|^{2}-g^{2}(t)\nabla^{\top}\!\left(\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\right)\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})+\nabla^{\top}\!\left(\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\right)\mathbf{f}(\mathbf{x}_{t},t)\right]\mathrm{d}t\\
&=\mathbb{E}\left[\log p_{\text{noise}}(\mathbf{x}_{T})\right]-\int_{0}^{T}\mathbb{E}\left[\frac{1}{2}g^{2}(t)\left\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t})-\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\right\|^{2}\right]\mathrm{d}t\\
&\quad-\frac{1}{2}\int_{0}^{T}\mathbb{E}\left[-g^{2}(t)\left\|\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\right\|^{2}+2\nabla^{\top}\!\left(\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\right)\mathbf{f}(\mathbf{x}_{t},t)\right]\mathrm{d}t,
\end{aligned}$$

that is exactly Equation (4).

## C Proof Of Eq. (5)

We prove the following result:

$$I(\mathbf{s}_{\boldsymbol{\theta}},T)\geq\underbrace{I(\nabla\log p,T)}_{\triangleq K(T)}=\frac{1}{2}\int_{t=0}^{T}g^{2}(t)\,\mathbb{E}_{\sim(1)}\left[\left\|\nabla\log p(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right\|^{2}\right]\mathrm{d}t.$$

Proof. We prove that for a generic positive λ(·), and T2 > T1, the following holds:

$$\int_{t=T_{1}}^{T_{2}}\lambda(t)\,\mathbb{E}_{\sim(1)}\left[\|\mathbf{s}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\|^{2}\right]\mathrm{d}t\geq\int_{t=T_{1}}^{T_{2}}\lambda(t)\,\mathbb{E}_{\sim(1)}\left[\|\nabla\log p(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\|^{2}\right]\mathrm{d}t.\tag{20}$$

First we compute the functional derivative (w.r.t. s)

$$\frac{\delta}{\delta\mathbf{s}}\int_{t=T_{1}}^{T_{2}}\lambda(t)\,\mathbb{E}_{\sim(1)}\left[\|\mathbf{s}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\|^{2}\right]\mathrm{d}t=2\int_{t=T_{1}}^{T_{2}}\lambda(t)\,\mathbb{E}_{\sim(1)}\left[\mathbf{s}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\right]\mathrm{d}t=2\int_{t=T_{1}}^{T_{2}}\lambda(t)\,\mathbb{E}_{\sim(1)}\left[\mathbf{s}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t)\right]\mathrm{d}t,$$

where we used

$$\mathbb{E}_{\sim(1)}\left[\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\right]=\int\nabla\log p(\mathbf{x},t|\mathbf{x}_{0})\,p(\mathbf{x},t|\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}=\int\nabla p(\mathbf{x},t|\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}=\int\nabla p(\mathbf{x},t)\,\mathrm{d}\mathbf{x}=\mathbb{E}_{\sim(1)}\left[\nabla\log p(\mathbf{x}_{t},t)\right].$$

Consequently we can obtain the optimal s through

$$\frac{\delta}{\delta\mathbf{s}}\int_{t=T_{1}}^{T_{2}}\lambda(t)\,\mathbb{E}_{\sim(1)}\left[\|\mathbf{s}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\|^{2}\right]\mathrm{d}t=0\;\rightarrow\;\mathbf{s}(\mathbf{x},t)=\nabla\log p(\mathbf{x},t).\tag{21}$$

Substitution of this result into Eq. (20) directly proves the desired inequality.

As a byproduct, we prove the correctness of Eq. (5), since it is a particular case of Eq. (20), with λ = g², T1 = 0, T2 = T. Since K(T) is a minimum, the decomposition I(sθ, T) = K(T) + G(sθ, T) implies K(T) + G(sθ, T) ≥ K(T) → G(sθ, T) ≥ 0.

## D Proof Of Proposition 1

Proposition 1. Given the stochastic dynamics defined in Eq. (1), it holds that

$$\mathbb{E}_{\sim(1)}\left[\log p(\mathbf{x}_{T},T)\right]-K(T)+R(T)=\mathbb{E}_{p_{\text{data}}(\mathbf{x})}\left[\log p_{\text{data}}(\mathbf{x})\right].\tag{8}$$

Proof. We consider the pair of equations

$$\begin{aligned}
\mathrm{d}\mathbf{x}_{t}&=\left[-\mathbf{f}(\mathbf{x}_{t},t^{\prime})+g^{2}(t^{\prime})\nabla\log q(\mathbf{x}_{t},t)\right]\mathrm{d}t+g(t^{\prime})\mathrm{d}\mathbf{w}(t),\\
\mathrm{d}\mathbf{x}_{t}&=\mathbf{f}(\mathbf{x}_{t},t)\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}(t),
\end{aligned}\tag{22}$$

where t′ = T − t, q is the density of the backward process and p is the density of the forward process. These equations can be interpreted as a particular case of the following pair of sdes (corresponding to Huang et al. (2021), Eqs. (4) and (17)):

$$\begin{aligned}
\mathrm{d}\mathbf{x}_{t}&=\underbrace{\left[-\mathbf{f}(\mathbf{x}_{t},t^{\prime})+g^{2}(t^{\prime})\nabla\log q(\mathbf{x}_{t},t)\right]}_{\boldsymbol{\mu}(\mathbf{x}_{t},t)}\mathrm{d}t+\underbrace{g(t^{\prime})}_{\sigma(t)}\mathrm{d}\mathbf{w}(t),\\
\mathrm{d}\mathbf{x}_{t}&=\left[\underbrace{\mathbf{f}(\mathbf{x}_{t},t)-g^{2}(t)\nabla\log q(\mathbf{x}_{t},t^{\prime})}_{-\boldsymbol{\mu}(\mathbf{x}_{t},t^{\prime})}+\underbrace{g(t)}_{\sigma(t^{\prime})}\mathbf{a}(\mathbf{x}_{t},t)\right]\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}(t),
\end{aligned}\tag{23}$$

where Eq. (22) is recovered considering a(x, t) = σ(t′)∇ log q(x, t′) = g(t)∇ log q(x, t′). Eq. (23) is associated to an elbo (Huang et al. (2021), Thm 3) that is attained with equality if and only if a(x, t) = σ(t′)∇ log q(x, t′). Consequently, we can write the following equality associated to the backward process of Eq. (22):

$$\log q(\mathbf{x},T)=\mathbb{E}\left[-\frac{1}{2}\int_{0}^{T}\|\mathbf{a}(\mathbf{x}_{t},t)\|^{2}+2\nabla^{\top}\boldsymbol{\mu}(\mathbf{x}_{t},t^{\prime})\,\mathrm{d}t+\log q(\mathbf{x}_{T},0)\;\Big|\;\mathbf{x}_{0}=\mathbf{x}\right],\tag{24}$$

where the expected value is taken w.r.t. the dynamics of the associated forward process.

By careful inspection of the couple of equations we notice that in the process xt the drift includes the ∇ log q(xt, t) term, while in our main Eq. (1) we have ∇ log p(xt, t′). In general the two vector fields do not agree. However, if we select as starting distribution of the generating process p(x, T), i.e. q(x, 0) = p(x, T), then ∀t, q(x, t) = p(x, t′).

Given initial conditions, the time evolution of the density p is fully described by the Fokker–Planck equation

$$\frac{\mathrm{d}}{\mathrm{d}t}p(\mathbf{x},t)=-\nabla^{\top}\left(\mathbf{f}(\mathbf{x},t)p(\mathbf{x},t)\right)+\frac{g^{2}(t)}{2}\Delta p(\mathbf{x},t),\quad p(\mathbf{x},0)=p_{\text{data}}(\mathbf{x}).\tag{25}$$

Similarly, for the density q,

$$\frac{\mathrm{d}}{\mathrm{d}t}q(\mathbf{x},t)=-\nabla^{\top}\left(-\mathbf{f}(\mathbf{x},t^{\prime})q(\mathbf{x},t)+g^{2}(t^{\prime})\nabla\log q(\mathbf{x},t)\,q(\mathbf{x},t)\right)+\frac{g^{2}(t^{\prime})}{2}\Delta q(\mathbf{x},t),\quad q(\mathbf{x},0)=p(\mathbf{x},T).\tag{26}$$

By Taylor expansion we have

$$\begin{aligned}
q(\mathbf{x},\delta t)&=q(\mathbf{x},0)+\delta t\left(\frac{\mathrm{d}}{\mathrm{d}t}q(\mathbf{x},t)\right)_{t=0}+\mathcal{O}(\delta t^{2})\\
&=q(\mathbf{x},0)+\delta t\left(-\nabla^{\top}\left(-\mathbf{f}(\mathbf{x},T)q(\mathbf{x},0)+g^{2}(T)\nabla\log q(\mathbf{x},0)\,q(\mathbf{x},0)\right)+\frac{g^{2}(T)}{2}\Delta q(\mathbf{x},0)\right)+\mathcal{O}(\delta t^{2})\\
&=q(\mathbf{x},0)+\delta t\left(\nabla^{\top}\left(\mathbf{f}(\mathbf{x},T)q(\mathbf{x},0)\right)-\frac{g^{2}(T)}{2}\Delta q(\mathbf{x},0)\right)+\mathcal{O}(\delta t^{2}),
\end{aligned}$$

and

$$\begin{aligned}
p(\mathbf{x},T-\delta t)&=p(\mathbf{x},T)-\delta t\left(\frac{\mathrm{d}}{\mathrm{d}t}p(\mathbf{x},t)\right)_{t=T}+\mathcal{O}(\delta t^{2})\\
&=p(\mathbf{x},T)-\delta t\left(-\nabla^{\top}\left(\mathbf{f}(\mathbf{x},T)p(\mathbf{x},T)\right)+\frac{g^{2}(T)}{2}\Delta p(\mathbf{x},T)\right)+\mathcal{O}(\delta t^{2})\\
&=p(\mathbf{x},T)+\delta t\left(\nabla^{\top}\left(\mathbf{f}(\mathbf{x},T)p(\mathbf{x},T)\right)-\frac{g^{2}(T)}{2}\Delta p(\mathbf{x},T)\right)+\mathcal{O}(\delta t^{2}).
\end{aligned}$$

Since q(x, 0) = p(x, T), we finally have q(x, δt) − p(x, T − δt) = O(δt²). This holds for arbitrarily small δt. By induction, with similar reasoning, we claim that q(x, t) = p(x, t′).

This last result allows us to rewrite Eq. (22) as the pair of sdes

$$\begin{aligned}
\mathrm{d}\mathbf{x}_{t}&=\left[-\mathbf{f}(\mathbf{x}_{t},t^{\prime})+g^{2}(t^{\prime})\nabla\log p(\mathbf{x}_{t},t^{\prime})\right]\mathrm{d}t+g(t^{\prime})\mathrm{d}\mathbf{w}(t),\\
\mathrm{d}\mathbf{x}_{t}&=\mathbf{f}(\mathbf{x}_{t},t)\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}(t).
\end{aligned}\tag{27}$$

Moreover, since q(x, T) = p(x, 0) = p_data(x), together with the result in Eq. (24), we have the following equality:

$$\log p_{\text{data}}(\mathbf{x})=\mathbb{E}\left[-\frac{1}{2}\int_{0}^{T}\|\mathbf{a}(\mathbf{x}_{t},t)\|^{2}+2\nabla^{\top}\boldsymbol{\mu}(\mathbf{x}_{t},t^{\prime})\,\mathrm{d}t+\log p(\mathbf{x}_{T},T)\;\Big|\;\mathbf{x}_{0}=\mathbf{x}\right].\tag{28}$$

Consequently,

$$\begin{aligned}
\mathbb{E}_{\mathbf{x}\sim p_{\text{data}}}\left[\log p_{\text{data}}(\mathbf{x})\right]&=\mathbb{E}\left[\log p(\mathbf{x}_{T},T)\right]+\mathbb{E}\left[-\frac{1}{2}\int_{0}^{T}\|\mathbf{a}(\mathbf{x}_{t},t)\|^{2}+2\nabla^{\top}\boldsymbol{\mu}(\mathbf{x}_{t},t^{\prime})\,\mathrm{d}t\right]\\
&=\mathbb{E}\left[\log p(\mathbf{x}_{T},T)\right]+\mathbb{E}\left[-\frac{1}{2}\int_{0}^{T}g^{2}(t)\|\nabla\log p(\mathbf{x}_{t},t)\|^{2}+2\nabla^{\top}\!\left(-\mathbf{f}(\mathbf{x}_{t},t)+g^{2}(t)\nabla\log p(\mathbf{x}_{t},t)\right)\mathrm{d}t\right]\\
&=\mathbb{E}\left[\log p(\mathbf{x}_{T},T)\right]+\mathbb{E}\left[-\frac{1}{2}\int_{0}^{T}g^{2}(t)\|\nabla\log p(\mathbf{x}_{t},t)\|^{2}-2g^{2}(t)\nabla^{\top}\log p(\mathbf{x}_{t},t)\,\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\,\mathrm{d}t\right]+\mathbb{E}\left[-\frac{1}{2}\int_{0}^{T}2\mathbf{f}^{\top}(\mathbf{x}_{t},t)\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\,\mathrm{d}t\right]\\
&=\mathbb{E}\left[\log p(\mathbf{x}_{T},T)\right]+\mathbb{E}\left[-\frac{1}{2}\int_{0}^{T}g^{2}(t)\|\nabla\log p(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\|^{2}\,\mathrm{d}t\right]+\mathbb{E}\left[-\frac{1}{2}\int_{0}^{T}-g^{2}(t)\|\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\|^{2}+2\mathbf{f}^{\top}(\mathbf{x}_{t},t)\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\,\mathrm{d}t\right].
\end{aligned}$$

Recalling the definitions

$$\begin{aligned}
K(T)&=\frac{1}{2}\int_{t=0}^{T}g^{2}(t)\,\mathbb{E}_{\sim(1)}\left[\|\nabla\log p(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\|^{2}\right]\mathrm{d}t,\\
R(T)&=\frac{1}{2}\int_{t=0}^{T}\mathbb{E}_{\sim(1)}\left[g^{2}(t)\|\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\|^{2}-2\mathbf{f}^{\top}(\mathbf{x}_{t},t)\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right]\mathrm{d}t,
\end{aligned}$$

we finally conclude the proof:

$$\mathbb{E}_{\sim(1)}\left[\log p(\mathbf{x}_{T},T)\right]-K(T)+R(T)=\mathbb{E}_{\mathbf{x}\sim p_{\text{data}}}\left[\log p_{\text{data}}(\mathbf{x})\right].\tag{29}$$

## E Proof Of Lemma 1

In this section we prove the validity of Lemma 1 for the case of Variance Preserving (VP) and Variance Exploding (VE) sdes. Remember, as reported also in the main Table 1, that the above mentioned classes correspond to $\alpha(t)=-\frac{1}{2}\beta(t)$, $g(t)=\sqrt{\beta(t)}$, $\beta(t)=\beta_{0}+(\beta_{1}-\beta_{0})t$ and $\alpha(t)=0$, $g(t)=\sqrt{\frac{\mathrm{d}\sigma^{2}(t)}{\mathrm{d}t}}$, $\sigma^{2}(t)=\left(\frac{\sigma_{\max}}{\sigma_{\min}}\right)^{t}$, respectively.

Lemma 1. For the classes of sdes considered (Table 1), the discrepancy between p(x, T) and p_noise(x) can be bounded as follows.

For Variance Preserving sdes, it holds that: $\mathrm{KL}\left[p(\mathbf{x},T)\parallel p_{\text{noise}}(\mathbf{x})\right]\leq C_{1}\exp\!\left(-\int_{0}^{T}\beta(t)\,\mathrm{d}t\right)$.

For Variance Exploding sdes, it holds that: $\mathrm{KL}\left[p(\mathbf{x},T)\parallel p_{\text{noise}}(\mathbf{x})\right]\leq C_{2}\,\frac{1}{\sigma^{2}(T)-\sigma^{2}(0)}$.

## E.1 The Variance Preserving (VP) Convergence

We associate this class of sdes to the Fokker–Planck operator

$$\mathcal{L}^{\dagger}(t)=\frac{1}{2}\beta(t)\nabla^{\top}\left(\mathbf{x}\,\cdot+\nabla(\cdot)\right),\tag{30}$$

and consequently $\frac{\mathrm{d}p(\mathbf{x},t)}{\mathrm{d}t}=\mathcal{L}^{\dagger}(t)p(\mathbf{x},t)$. Simple calculations show that $\lim_{T\to\infty}p(\mathbf{x},T)=\mathcal{N}_{1}(\mathbf{x})$.

We bound the time derivative of the KL term as

$$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}\,\mathrm{KL}\left[p(\mathbf{x},t)\parallel\mathcal{N}_{1}(\mathbf{x})\right]&=\int\frac{\mathrm{d}p(\mathbf{x},t)}{\mathrm{d}t}\log\frac{p(\mathbf{x},t)}{\mathcal{N}_{1}(\mathbf{x})}\,\mathrm{d}\mathbf{x}+\int\frac{p(\mathbf{x},t)}{p(\mathbf{x},t)}\frac{\mathrm{d}p(\mathbf{x},t)}{\mathrm{d}t}\,\mathrm{d}\mathbf{x}\\
&=\frac{1}{2}\beta(t)\int\nabla^{\top}\left(-\nabla\log\mathcal{N}_{1}(\mathbf{x})\,p(\mathbf{x},t)+\nabla p(\mathbf{x},t)\right)\log\frac{p(\mathbf{x},t)}{\mathcal{N}_{1}(\mathbf{x})}\,\mathrm{d}\mathbf{x}\\
&=-\frac{1}{2}\beta(t)\int p(\mathbf{x},t)\left(-\nabla\log\mathcal{N}_{1}(\mathbf{x})+\nabla\log p(\mathbf{x},t)\right)^{\top}\nabla\!\left(\log\frac{p(\mathbf{x},t)}{\mathcal{N}_{1}(\mathbf{x})}\right)\mathrm{d}\mathbf{x}\\
&=-\frac{1}{2}\beta(t)\int p(\mathbf{x},t)\left\|\nabla\!\left(\log\frac{p(\mathbf{x},t)}{\mathcal{N}_{1}(\mathbf{x})}\right)\right\|^{2}\mathrm{d}\mathbf{x}\;\leq\;-\beta(t)\,\mathrm{KL}\left[p(\mathbf{x},t)\parallel\mathcal{N}_{1}(\mathbf{x})\right],
\end{aligned}\tag{31}$$

where the last inequality follows from the logarithmic Sobolev inequality (18) with ω = 1. We then apply Gronwall's inequality (Villani, 2009) to $\frac{\mathrm{d}}{\mathrm{d}t}\mathrm{KL}\left[p(\mathbf{x},t)\parallel\mathcal{N}_{1}(\mathbf{x})\right]\leq-\beta(t)\,\mathrm{KL}\left[p(\mathbf{x},t)\parallel\mathcal{N}_{1}(\mathbf{x})\right]$ to claim

$$\mathrm{KL}\left[p(\mathbf{x},T)\parallel\mathcal{N}_{1}(\mathbf{x})\right]\leq\mathrm{KL}\left[p(\mathbf{x},0)\parallel\mathcal{N}_{1}(\mathbf{x})\right]\exp\!\left(-\int_{0}^{T}\beta(s)\,\mathrm{d}s\right).\tag{32}$$

To claim validity of the result, we need to assume that p(x, t) has finite first and second order derivatives, and that KL [p(x, 0) ∥ N1(x)] < ∞.
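As a numerical illustration of the bound (32), the sketch below (not part of the paper) evaluates both sides in closed form for one-dimensional Gaussian data under the VP sde; the data mean and standard deviation are arbitrary illustrative values.

```python
import numpy as np

# Lemma 1 (VP case) for 1-D Gaussian data: p_data = N(m0, s0^2) gives
# p(x, t) = N(m0*k(t), s0^2*k(t)^2 + 1 - k(t)^2), with k(t) = exp(-0.5 * int_0^t beta(s) ds).
beta0, beta1 = 0.1, 20.0          # default VP schedule beta(t) = beta0 + (beta1 - beta0) t
m0, s0 = 2.0, 0.5                 # illustrative data parameters

def int_beta(t):                  # \int_0^t beta(s) ds
    return beta0 * t + 0.5 * (beta1 - beta0) * t**2

def kl_to_std_normal(m, v):       # KL[N(m, v) || N(0, 1)]
    return 0.5 * (v + m**2 - 1.0 - np.log(v))

kl0 = kl_to_std_normal(m0, s0**2)
for t in [0.2, 0.4, 0.6, 0.8, 1.0]:
    k = np.exp(-0.5 * int_beta(t))
    kl_t = kl_to_std_normal(m0 * k, s0**2 * k**2 + 1 - k**2)
    bound = kl0 * np.exp(-int_beta(t))                    # KL(0) * exp(-int beta)
    print(f"T={t:.1f}  KL={kl_t:.3e}  bound={bound:.3e}")
```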
## E.2 The Variance Exploding (VE) Convergence

The first step is to bound the derivative w.r.t. ω of the divergence KL [pω(x) ∥ Nω(x)], i.e.

$$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}\omega}\,\mathrm{KL}\left[p_{\omega}(\mathbf{x})\parallel\mathcal{N}_{\omega}(\mathbf{x})\right]&=\int\frac{\mathrm{d}p_{\omega}(\mathbf{x})}{\mathrm{d}\omega}\log\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\,\mathrm{d}\mathbf{x}+\int\frac{p_{\omega}(\mathbf{x})}{p_{\omega}(\mathbf{x})}\frac{\mathrm{d}p_{\omega}(\mathbf{x})}{\mathrm{d}\omega}\,\mathrm{d}\mathbf{x}-\int\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\frac{\mathrm{d}\mathcal{N}_{\omega}(\mathbf{x})}{\mathrm{d}\omega}\,\mathrm{d}\mathbf{x}\\
&=\frac{1}{2}\int\left(\Delta p_{\omega}(\mathbf{x})\right)\log\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}-\left(\Delta\mathcal{N}_{\omega}(\mathbf{x})\right)\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\,\mathrm{d}\mathbf{x}\\
&=\frac{1}{2}\int\nabla^{\top}\left(p_{\omega}(\mathbf{x})\nabla\log p_{\omega}(\mathbf{x})\right)\log\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}-\nabla^{\top}\left(\mathcal{N}_{\omega}(\mathbf{x})\nabla\log\mathcal{N}_{\omega}(\mathbf{x})\right)\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\,\mathrm{d}\mathbf{x}\\
&=-\frac{1}{2}\int\left(p_{\omega}(\mathbf{x})\nabla\log p_{\omega}(\mathbf{x})\right)^{\top}\nabla\!\left(\log\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\right)-\left(\mathcal{N}_{\omega}(\mathbf{x})\nabla\log\mathcal{N}_{\omega}(\mathbf{x})\right)^{\top}\nabla\!\left(\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\right)\mathrm{d}\mathbf{x}\\
&=-\frac{1}{2}\int\left(p_{\omega}(\mathbf{x})\nabla\log p_{\omega}(\mathbf{x})\right)^{\top}\nabla\!\left(\log\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\right)-\left(p_{\omega}(\mathbf{x})\nabla\log\mathcal{N}_{\omega}(\mathbf{x})\right)^{\top}\nabla\!\left(\log\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\right)\mathrm{d}\mathbf{x}\\
&=-\frac{1}{2}\int p_{\omega}(\mathbf{x})\left\|\nabla\!\left(\log\frac{p_{\omega}(\mathbf{x})}{\mathcal{N}_{\omega}(\mathbf{x})}\right)\right\|^{2}\mathrm{d}\mathbf{x}\;\leq\;-\frac{1}{\omega}\,\mathrm{KL}\left[p_{\omega}(\mathbf{x})\parallel\mathcal{N}_{\omega}(\mathbf{x})\right],
\end{aligned}\tag{33}$$

where the last step again uses the logarithmic Sobolev inequality (18). Consequently, using again Gronwall's inequality, for all ω1 > ω0 > 0 we have

$$\mathrm{KL}\left[p_{\omega_{1}}(\mathbf{x})\parallel\mathcal{N}_{\omega_{1}}(\mathbf{x})\right]\leq\mathrm{KL}\left[p_{\omega_{0}}(\mathbf{x})\parallel\mathcal{N}_{\omega_{0}}(\mathbf{x})\right]\exp\!\left(-(\log\omega_{1}-\log\omega_{0})\right)=\mathrm{KL}\left[p_{\omega_{0}}(\mathbf{x})\parallel\mathcal{N}_{\omega_{0}}(\mathbf{x})\right]\frac{\omega_{0}}{\omega_{1}}.$$

This can be directly applied to obtain the bound for the VE sde. Consider ω1 = σ²(T) − σ²(0) and ω0 = σ²(τ) − σ²(0) for an arbitrarily small τ < T. Then, since for the considered class of variance exploding sdes we have p(x, T) = p_{σ²(T)−σ²(0)}(x),

$$\mathrm{KL}\left[p(\mathbf{x},T)\parallel\mathcal{N}_{\sigma^{2}(T)-\sigma^{2}(0)}(\mathbf{x})\right]\leq C\,\frac{1}{\sigma^{2}(T)-\sigma^{2}(0)},\tag{34}$$

where $C=\mathrm{KL}\left[p(\mathbf{x},\tau)\parallel\mathcal{N}_{\sigma^{2}(\tau)-\sigma^{2}(0)}(\mathbf{x})\right]\left(\sigma^{2}(\tau)-\sigma^{2}(0)\right)$.

Similarly to the previous case, we assume that p(x, t) has finite first and second order derivatives, and that C < ∞.
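The same kind of closed-form check can be done for the VE bound: the Gronwall argument above says that ω · KL[pω(x) ∥ Nω(x)] is non-increasing in ω, i.e. the KL decays at least as 1/ω. The sketch below (our own illustration, with arbitrary Gaussian data parameters) verifies this.

```python
import numpy as np

# VE part of Lemma 1 for 1-D Gaussian data: p_data = N(m0, s0^2) gives p_omega = N(m0, s0^2 + omega),
# and omega * KL[p_omega || N_omega] should be non-increasing in omega.
m0, s0 = 2.0, 0.5

def kl_gauss(m, v, v_ref):        # KL[N(m, v) || N(0, v_ref)]
    return 0.5 * (v / v_ref + m**2 / v_ref - 1.0 - np.log(v / v_ref))

omegas = np.array([1.0, 4.0, 16.0, 64.0, 256.0])
kls = kl_gauss(m0, s0**2 + omegas, omegas)
print("omega * KL:", omegas * kls)                 # non-increasing sequence
assert np.all(np.diff(omegas * kls) <= 1e-12)
```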
## F Proof Of Lemma 2

Lemma 2. The optimal score gap term G(ŝθ, T) is a non-decreasing function in T. That is, given T2 > T1, and θ1 = arg minθ I(sθ, T1), θ2 = arg minθ I(sθ, T2), then G(sθ2, T2) ≥ G(sθ1, T1).

Proof. For θ1 defined as in the lemma, I(sθ1, T1) = K(T1) + G(sθ1, T1). Next, select T2 > T1. Then, for a generic θ, including θ2,

$$I(\mathbf{s}_{\boldsymbol{\theta}},T_{2})=\underbrace{\frac{1}{2}\int_{t=0}^{T_{1}}g^{2}(t)\,\mathbb{E}_{\sim(1)}\left[\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\|^{2}\right]\mathrm{d}t}_{=\,I(\mathbf{s}_{\boldsymbol{\theta}},T_{1})\,\geq\,K(T_{1})+\mathcal{G}(\mathbf{s}_{\boldsymbol{\theta}_{1}},T_{1})\,=\,I(\mathbf{s}_{\boldsymbol{\theta}_{1}},T_{1})}+\underbrace{\frac{1}{2}\int_{t=T_{1}}^{T_{2}}g^{2}(t)\,\mathbb{E}_{\sim(1)}\left[\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\|^{2}\right]\mathrm{d}t}_{\geq\,\frac{1}{2}\int_{t=T_{1}}^{T_{2}}g^{2}(t)\,\mathbb{E}_{\sim(1)}\left[\|\nabla\log p(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t|\mathbf{x}_{0})\|^{2}\right]\mathrm{d}t\,=\,K(T_{2})-K(T_{1})}\;\geq\;\mathcal{G}(\mathbf{s}_{\boldsymbol{\theta}_{1}},T_{1})+K(T_{2}),$$

from which $\mathcal{G}(\mathbf{s}_{\boldsymbol{\theta}},T_{2})=I(\mathbf{s}_{\boldsymbol{\theta}},T_{2})-K(T_{2})\geq\mathcal{G}(\mathbf{s}_{\boldsymbol{\theta}_{1}},T_{1})$.

## G Proof Of Proposition 2

Proposition 2. Consider the elbo decomposition in Eq. (9). We study it as a function of time T, and seek its optimal argument T⋆ = arg maxT Lelbo(ŝθ, T). Then, the optimal diffusion time T⋆ ∈ R+, and thus not necessarily T⋆ = ∞. Additional assumptions on the gap term G(·) can be used to guarantee strict finiteness of T⋆. There exists at least one optimal diffusion time T⋆ in the interval [0, ∞] which maximizes the elbo, that is T⋆ = arg maxT Lelbo(ŝθ, T).

Proof. Since the optimal gap term G(ŝθ, T) is a non-decreasing function of T (Lemma 2), we have ∂G/∂T ≥ 0. Then, we study the sign of the KL derivative, which is always negative, as shown by Eq. (31) and Eq. (33) (where we also notice that d/dt = (dω/dt) d/dω preserves the sign). Moreover, we know that lim_{T→∞} ∂KL/∂T = 0. Then, the function ∂Lelbo/∂T = ∂G/∂T + ∂KL/∂T has at least one zero in [0, ∞].

## H Optimization Of T⋆

It is possible to treat the diffusion time T as a hyper-parameter and perform gradient-based optimization jointly with the score model parameters θ. Indeed, simple calculations show that

$$\begin{aligned}
\frac{\partial\mathcal{L}_{\text{ELBO}}(\mathbf{s}_{\boldsymbol{\theta}},T)}{\partial T}
&=\mathbb{E}\left[\left(\mathbf{f}^{\top}(\mathbf{x}_{T},T)\nabla+g^{2}(T)\Delta\right)\log p_{\text{noise}}(\mathbf{x}_{T})\right]\\
&\quad-\frac{1}{2}\mathbb{E}\left[\left\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_{T},T)-\nabla\log p(\mathbf{x}_{T},T\,|\,\mathbf{x}_{0})\right\|^{2}\right]\\
&\quad+\frac{1}{2}\mathbb{E}\left[g^{2}(T)\|\nabla\log p(\mathbf{x}_{T},T\,|\,\mathbf{x}_{0})\|^{2}-2\mathbf{f}^{\top}(\mathbf{x}_{T},T)\nabla\log p(\mathbf{x}_{T},T\,|\,\mathbf{x}_{0})\right].
\end{aligned}\tag{35}$$

## I Proof Of Proposition 4

Proposition 4. Given the existence of T⋆, defined as the diffusion time such that the elbo is maximized (Proposition 2), there exists at least one diffusion time τ ≤ T⋆ such that $\mathcal{L}^{\phi^{\star}}_{\text{elbo}}(\hat{\mathbf{s}}_{\boldsymbol{\theta}},\tau)\geq\mathcal{L}_{\text{elbo}}(\hat{\mathbf{s}}_{\boldsymbol{\theta}},T^{\star})$.

Proof. Since ∀T we have $\mathcal{L}^{\phi}_{\text{elbo}}(\mathbf{s}_{\boldsymbol{\theta}},T)\geq\mathcal{L}_{\text{elbo}}(\mathbf{s}_{\boldsymbol{\theta}},T)$, there exists a countable set of intervals I contained in [0, T⋆], of variable support, where $\mathcal{L}^{\phi}_{\text{elbo}}$ is greater than $\mathcal{L}_{\text{elbo}}(\mathbf{s}_{\boldsymbol{\theta}},T)$. Assuming continuity of $\mathcal{L}^{\phi}_{\text{elbo}}$, in these intervals it is possible to find at least one τ ≤ T⋆ where $\mathcal{L}^{\phi^{\star}}_{\text{elbo}}(\hat{\mathbf{s}}_{\boldsymbol{\theta}},\tau)\geq\mathcal{L}_{\text{elbo}}(\hat{\mathbf{s}}_{\boldsymbol{\theta}},T^{\star})$.

We notice that the degenerate case I = {T⋆} is obtained only when ∀T ≤ T⋆, KL [p(x, T) ∥ νϕ⋆(x)] = KL [p(x, T) ∥ p_noise(x)]. We expect this condition to never occur in practice.

## J Invariance To Noise Schedule

Here we discuss the claims made in § 2.4 about the invariance of the elbo to the particular choice of noise schedule. First, in Appendix J.1, we explain how different sdes corresponding to different noise schedules can be translated into one another; we introduce the concept of signal-to-noise ratio (snr) and clarify the unified score parametrization used in practice in the literature (Karras et al., 2022; Kingma et al., 2021). Then, in Appendix J.2, we prove that the individual elements of the elbo depend only on the value of the snr at the final diffusion time T, as claimed in the main paper.

## J.1 Preliminaries

We consider as reference sde a pure Wiener process diffusion,

$$\mathrm{d}\mathbf{x}_{t}=\mathrm{d}\mathbf{w}_{t}\quad\mathrm{with}\quad\mathbf{x}_{0}\sim p_{\text{data}}.\tag{38}$$

It is easily seen that the solution of the random process admits the representation

$$\mathbf{x}_{t}=\mathbf{x}_{0}+\sqrt{t}\,\boldsymbol{\epsilon},\quad\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\boldsymbol{I}).\tag{39}$$

In this case the time-varying probability density, which we indicate with ψ, satisfies

$$\psi(\mathbf{x},t)=\exp\!\left(\frac{t}{2}\Delta\right)p_{\text{data}}(\mathbf{x}),\quad\psi(\mathbf{x},t\,|\,\mathbf{x}_{0})=\exp\!\left(\frac{t}{2}\Delta\right)\delta(\mathbf{x}-\mathbf{x}_{0}).\tag{40}$$

Simple calculations show that

$$\nabla\log\psi(\mathbf{x},\sigma^{2})=\frac{\mathbb{E}[\mathbf{x}_{0}\,|\,\mathbf{x}_{0}+\sigma\boldsymbol{\epsilon}=\mathbf{x}]-\mathbf{x}}{\sigma^{2}}\doteq\frac{\mathbf{d}(\mathbf{x};\sigma^{2})-\mathbf{x}}{\sigma^{2}},\tag{41}$$

where again x0 ∼ p_data and the function d can be interpreted as a *denoiser*.

Our goal is to show the relationship between equations like Equation (1) and Equation (38). In particular, we focus on *affine* sdes, as classically done with diffusion models. The class of considered affine sdes is the following:

$$\mathrm{d}\mathbf{x}_{t}=\alpha(t)\mathbf{x}_{t}\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}_{t}\quad\mathrm{with}\quad\mathbf{x}_{0}\sim p_{\text{data}}.\tag{42}$$

In this simple linear case the process admits the representation

$$\mathbf{x}_{t}=k(t)\mathbf{x}_{0}+\sigma(t)\boldsymbol{\epsilon},\quad\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\boldsymbol{I}),\tag{43}$$

where $k(t)=\exp\!\left(\int_{0}^{t}\alpha(s)\,\mathrm{d}s\right)$ and $\sigma^{2}(t)=k^{2}(t)\int_{0}^{t}\frac{g^{2}(s)}{k^{2}(s)}\,\mathrm{d}s$. We can rewrite Equation (43) as $\mathbf{x}_{t}=k(t)\left(\mathbf{x}_{0}+\tilde{\sigma}(t)\boldsymbol{\epsilon}\right)$, and define the snr variable as $\tilde{\sigma}(t)=\frac{\sigma(t)}{k(t)}$. The density associated to Equation (42) can be expressed as a function of ψ as follows:

$$p(\mathbf{x},t)=k(t)^{-D}\left[\exp\!\left(\frac{\tilde{\sigma}^{2}(t)}{2}\Delta\right)p_{\text{data}}(\mathbf{x})\right]_{\frac{\mathbf{x}}{k(t)}}=k(t)^{-D}\,\psi\!\left(\frac{\mathbf{x}}{k(t)},\tilde{\sigma}^{2}(t)\right).\tag{44}$$

The score function associated to Equation (43) has consequently the expression

$$\nabla_{\mathbf{x}}\log p(\mathbf{x},t)=\nabla_{\mathbf{x}}\log\psi\!\left(\frac{\mathbf{x}}{k(t)},\tilde{\sigma}^{2}(t)\right)=\frac{1}{k(t)}\nabla_{\frac{\mathbf{x}}{k(t)}}\log\psi\!\left(\frac{\mathbf{x}}{k(t)},\tilde{\sigma}^{2}(t)\right)=\frac{k(t)\,\mathbf{d}\!\left(\frac{\mathbf{x}}{k(t)};\tilde{\sigma}^{2}(t)\right)-\mathbf{x}}{\sigma^{2}(t)}.\tag{45}$$
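The change of variables above is easy to reproduce numerically. The sketch below (our own illustration) recovers k(t), σ(t) and σ̃(t) by quadrature for the VP sde of Table 1, and checks them against the known closed forms k(t) = exp(−½∫β) and σ²(t) = 1 − k²(t); the schedule parameters are the usual defaults and serve only as an example.

```python
import numpy as np

# Quadrature for k(t), sigma(t) and sigma_tilde(t) = sigma(t)/k(t) of the affine sde (42),
# specialized to the VP case alpha = -beta/2, g^2 = beta with beta(t) = beta0 + (beta1 - beta0) t.
beta0, beta1 = 0.1, 20.0
beta = lambda t: beta0 + (beta1 - beta0) * t
alpha = lambda t: -0.5 * beta(t)
g2 = lambda t: beta(t)

ts = np.linspace(0.0, 1.0, 10_001)
int_alpha = np.concatenate(([0.0], np.cumsum(0.5 * (alpha(ts[1:]) + alpha(ts[:-1])) * np.diff(ts))))
k = np.exp(int_alpha)                                   # k(t) = exp(int_0^t alpha(s) ds)
integrand = g2(ts) / k**2
int_g2_over_k2 = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ts))))
sigma2 = k**2 * int_g2_over_k2                          # sigma^2(t) = k^2(t) int_0^t g^2/k^2 ds
sigma_tilde = np.sqrt(sigma2) / k                       # snr variable of Appendix J.1

# closed-form checks for the VP case
assert np.allclose(k, np.exp(-0.5 * (beta0 * ts + 0.5 * (beta1 - beta0) * ts**2)), atol=1e-6)
assert np.allclose(sigma2, 1.0 - k**2, atol=1e-4)
print(f"sigma_tilde(T=1) = {sigma_tilde[-1]:.3f}")
```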
## J.2 Different Noise Schedules

Consider a diffusion of the form Equation (38) and a score network s̄θ that approximates the true score. Inspecting Equation (45), we parametrize the score network associated to a generic diffusion Equation (42) as a function of the score of the reference diffusion. The score parametrization considered in Kingma et al. (2021) can be generalized to arbitrary sdes (Karras et al., 2022). In particular, as suggested by Equation (41), we select

$$\bar{\mathbf{s}}_{\boldsymbol{\theta}}(\mathbf{x},t)=\frac{k(t)\,\mathbf{d}_{\boldsymbol{\theta}}\!\left(\frac{\mathbf{x}}{k(t)};\,\tilde{\sigma}^{2}(t)\right)-\mathbf{x}}{\sigma^{2}(t)}.\tag{46}$$
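In code, this parametrization is a thin wrapper around a denoiser network. The sketch below is our own illustration, not the authors' implementation; `denoiser`, `k` and `sigma` are assumed to be user-provided callables (e.g. numpy- or torch-compatible).

```python
# Turn a denoiser d_theta, evaluated at the rescaled input x/k(t) and noise level
# sigma_tilde^2(t), into a score network for the affine sde (42), as in Eq. (46).
def score_from_denoiser(denoiser, k, sigma):
    def score(x, t):
        k_t = k(t)
        sigma2_t = sigma(t) ** 2
        sigma_tilde2_t = sigma2_t / k_t**2              # snr variable of Appendix J.1
        return (k_t * denoiser(x / k_t, sigma_tilde2_t) - x) / sigma2_t
    return score
```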

We proceed by showing that the different components of the elbo depend on the diffusion time T only through σ̃(T), and not on k(t), σ(t) singularly for any time t < T.

Theorem 1. Consider a generic diffusion Equation (42) and parametrize the score network as s̄θ(x/k(t), σ̃(t)). Then, the gap term G(s̄θ, T) associated to Equation (42) for a diffusion time T depends only on σ̃(T) but not on k(t), σ(t) singularly for any time t < T.

Proof. We first rearrange the gap term

$$2\mathcal{G}(\bar{\mathbf{s}}_{\boldsymbol{\theta}},T)=\int_{t=0}^{T}g^{2}(t)\,\mathbb{E}_{\sim(42)}\left[\left\|\bar{\mathbf{s}}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right\|^{2}\right]\mathrm{d}t-\int_{t=0}^{T}g^{2}(t)\,\mathbb{E}_{\sim(42)}\left[\left\|\nabla\log p(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t\,|\,\mathbf{x}_{0})\right\|^{2}\right]\mathrm{d}t=\int_{t=0}^{T}g^{2}(t)\,\mathbb{E}_{\sim(42)}\left[\left\|\bar{\mathbf{s}}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)-\nabla\log p(\mathbf{x}_{t},t)\right\|^{2}\right]\mathrm{d}t,$$

as shown in []². Then

$$\begin{aligned}
&\int_{t=0}^{T}\!\!\int g^{2}(t)\left\|\bar{\mathbf{s}}_{\boldsymbol{\theta}}(\mathbf{x},t)-\nabla\log p(\mathbf{x},t)\right\|^{2}p(\mathbf{x},t|\,\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&=\int_{t=0}^{T}\!\!\int g^{2}(t)\left\|\frac{k(t)\mathbf{d}_{\boldsymbol{\theta}}\!\left(\frac{\mathbf{x}}{k(t)};\tilde{\sigma}^{2}(t)\right)-\mathbf{x}}{\sigma^{2}(t)}-\frac{k(t)\mathbf{d}\!\left(\frac{\mathbf{x}}{k(t)};\tilde{\sigma}^{2}(t)\right)-\mathbf{x}}{\sigma^{2}(t)}\right\|^{2}p(\mathbf{x},t|\,\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&=\int_{t=0}^{T}\!\!\int \frac{g^{2}(t)}{k^{2}(t)}\left\|\frac{\mathbf{d}_{\boldsymbol{\theta}}\!\left(\frac{\mathbf{x}}{k(t)};\tilde{\sigma}^{2}(t)\right)-\mathbf{d}\!\left(\frac{\mathbf{x}}{k(t)};\tilde{\sigma}^{2}(t)\right)}{\tilde{\sigma}^{2}(t)}\right\|^{2}p(\mathbf{x},t|\,\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&=\int_{t=0}^{T}\!\!\int \frac{g^{2}(t)}{k^{2}(t)}\left\|\frac{\mathbf{d}_{\boldsymbol{\theta}}\!\left(\frac{\mathbf{x}}{k(t)};\tilde{\sigma}^{2}(t)\right)-\mathbf{d}\!\left(\frac{\mathbf{x}}{k(t)};\tilde{\sigma}^{2}(t)\right)}{\tilde{\sigma}^{2}(t)}\right\|^{2}\psi\!\left(\frac{\mathbf{x}}{k(t)},\tilde{\sigma}^{2}(t)\,\Big|\,\mathbf{x}_{0}\right)p_{\text{data}}(\mathbf{x}_{0})\,k(t)^{-D}\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&\qquad\text{(subst. }\tilde{\mathbf{x}}=\tfrac{\mathbf{x}}{k(t)},\ \mathrm{d}\tilde{\mathbf{x}}=k(t)^{-D}\mathrm{d}\mathbf{x}\text{)}\\
&=\int_{t=0}^{T}\!\!\int \frac{g^{2}(t)}{k^{2}(t)}\left\|\frac{\mathbf{d}_{\boldsymbol{\theta}}(\tilde{\mathbf{x}};\tilde{\sigma}^{2}(t))-\mathbf{d}(\tilde{\mathbf{x}};\tilde{\sigma}^{2}(t))}{\tilde{\sigma}^{2}(t)}\right\|^{2}\psi(\tilde{\mathbf{x}},\tilde{\sigma}^{2}(t)\,|\,\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\mathrm{d}\tilde{\mathbf{x}}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}t\\
&\qquad\text{(subst. }r=\tilde{\sigma}^{2}(t),\ \mathrm{d}r=\tfrac{g^{2}(t)}{k^{2}(t)}\mathrm{d}t\text{)}\\
&=\int_{r=0}^{\tilde{\sigma}^{2}(T)}\!\!\int \left\|\frac{\mathbf{d}_{\boldsymbol{\theta}}(\tilde{\mathbf{x}};r)-\mathbf{d}(\tilde{\mathbf{x}};r)}{r}\right\|^{2}\psi(\tilde{\mathbf{x}},r\,|\,\mathbf{x}_{0})\,p_{\text{data}}(\mathbf{x}_{0})\,\mathrm{d}\tilde{\mathbf{x}}\,\mathrm{d}\mathbf{x}_{0}\,\mathrm{d}r.
\end{aligned}$$

For any k(t), σ(t) such that σ̃(T) is the same, the score matching loss is the same.

Theorem 2. Suppose that for any ϕ of the auxiliary model νϕ(x) there exists a ϕ′ such that νϕ′(x) = k^{−D} νϕ(x/k), for any k > 0. Notice that this condition is trivially satisfied if the considered parametric model has the expressiveness to multiply its output by the scalar k. Then the minimum of the Kullback–Leibler divergence between p(x, T) associated to a generic diffusion Equation (42) and the density of an auxiliary model νϕ(x) depends only on σ̃(T) and not on σ(T) alone.

²Citation not included to avoid breaking anonymity.

![24_image_0.png](24_image_0.png)

Figure 8: Visualization of a few samples at different diffusion times T.

Proof. We start with the equality

$$\begin{aligned}
\mathrm{KL}\left[p(\mathbf{x},T)\parallel\nu_{\boldsymbol{\phi}}(\mathbf{x})\right]&=\mathrm{KL}\left[k(T)^{-D}\psi\!\left(\tfrac{\mathbf{x}}{k(T)},\tilde{\sigma}^{2}(T)\right)\parallel\nu_{\boldsymbol{\phi}}(\mathbf{x})\right]=\mathrm{KL}\left[k(T)^{-D}\psi\!\left(\tfrac{\mathbf{x}}{k(T)},\tilde{\sigma}^{2}(T)\right)\parallel k(T)^{-D}\nu_{\boldsymbol{\phi}^{\prime}}\!\left(\tfrac{\mathbf{x}}{k(T)}\right)\right]\\
&=\int k(T)^{-D}\psi\!\left(\tfrac{\mathbf{x}}{k(T)},\tilde{\sigma}^{2}(T)\right)\log\!\left(\frac{\psi\!\left(\tfrac{\mathbf{x}}{k(T)},\tilde{\sigma}^{2}(T)\right)}{\nu_{\boldsymbol{\phi}^{\prime}}\!\left(\tfrac{\mathbf{x}}{k(T)}\right)}\right)\mathrm{d}\mathbf{x}=\int\psi\!\left(\tilde{\mathbf{x}},\tilde{\sigma}^{2}(T)\right)\log\!\left(\frac{\psi\!\left(\tilde{\mathbf{x}},\tilde{\sigma}^{2}(T)\right)}{\nu_{\boldsymbol{\phi}^{\prime}}(\tilde{\mathbf{x}})}\right)\mathrm{d}\tilde{\mathbf{x}}=\mathrm{KL}\left[\psi(\mathbf{x},\tilde{\sigma}^{2}(T))\parallel\nu_{\boldsymbol{\phi}^{\prime}}(\mathbf{x})\right].
\end{aligned}$$

Then the minimum only depends on σ̃(T), as it is always possible to achieve the same value independently of the sde by rescaling the auxiliary model output.

## K Experimental Details

Here we give some additional details concerning the experimental settings of § 4.

## K.1 Toy Example Details

In the toy example, we use 8192 samples from a simple Gaussian mixture with two components as target p_data(x). In detail, we have p_data(x) = πN(1, 0.1²) + (1 − π)N(3, 0.5²), with π = 0.3. The choice of a Gaussian mixture allows us to write down explicitly the time-varying density

$$p(\mathbf{x}_{t},t)=\pi\,\mathcal{N}(1,s^{2}(t)+0.1^{2})+(1-\pi)\,\mathcal{N}(3,s^{2}(t)+0.5^{2}),\tag{47}$$

where $s^{2}(t)$ is the marginal variance of the process at time $t$. We consider a variance exploding sde of the type $\mathrm{d}\mathbf{x}_{t}=\sigma^{t}\mathrm{d}\mathbf{w}_{t}$, which corresponds to $s^{2}(t)=\frac{\sigma^{2t}-1}{2\log\sigma}$.

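A minimal sketch of this setup is given below (our own illustration): it evaluates the exact time-varying density above and the corresponding exact score. The value of σ is an arbitrary choice, since it is not specified here.

```python
import numpy as np

# Exact diffused density p(x_t, t) of the toy Gaussian mixture under dx_t = sigma^t dw_t,
# and the exact score d/dx log p(x, t).  sigma is illustrative.
pi_, sigma = 0.3, 25.0
means, stds = np.array([1.0, 3.0]), np.array([0.1, 0.5])
weights = np.array([pi_, 1.0 - pi_])

def s2(t):                               # marginal variance added by the VE diffusion
    return (sigma ** (2 * t) - 1.0) / (2.0 * np.log(sigma))

def p_t(x, t):                           # exact p(x_t, t)
    var = stds**2 + s2(t)
    comps = np.exp(-0.5 * (x[:, None] - means) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return comps @ weights

def score_t(x, t):                       # exact score d/dx log p(x, t)
    var = stds**2 + s2(t)
    comps = weights * np.exp(-0.5 * (x[:, None] - means) ** 2 / var) / np.sqrt(2 * np.pi * var)
    d_comps = comps * (-(x[:, None] - means) / var)
    return d_comps.sum(1) / comps.sum(1)

x = np.linspace(-2.0, 6.0, 5)
print(p_t(x, 0.5), score_t(x, 0.5))
```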
## K.2 § 4 Details

We considered the Variance Preserving sde with the default β0, β1 parameter settings. When experimenting on cifar10 we considered the NCSN++ architecture as implemented in Song et al. (2021c). Training of the score matching network has been carried out with the default set of optimizers and schedulers of Song et al. (2021c), independently of the selected T.

For the mnist dataset we reduced the architecture by considering 64 features, ch_mult = (1, 2) and attention resolutions equal to 8. The optimizer is the same as in the cifar10 experiment, but the warmup has been reduced to 1000 and the total number of iterations to 65000.

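For concreteness, the reduced mnist setup can be summarized as a configuration sketch; the dictionary keys below are our own shorthand, not the exact field names of the Song et al. (2021c) codebase.

```python
# Reduced mnist configuration described above (sketch, key names are illustrative).
mnist_config = {
    "model": {
        "nf": 64,                  # number of base features
        "ch_mult": (1, 2),         # channel multipliers
        "attn_resolutions": (8,),  # attention at resolution 8
    },
    "optim": {
        "warmup": 1_000,           # reduced warmup
    },
    "training": {
        "n_iters": 65_000,         # total number of iterations
    },
}
```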
## K.3 Varying T

We clarify the T truncation procedure during both training and testing. The sde parameters are kept unchanged irrespective of T. During training, as evident from Eq. (3), it is sufficient to sample the diffusion time randomly from the distribution U(0, T), where T can take any positive value. For testing (sampling), we simply modified the algorithmic routines to begin the reverse diffusion process from a generic T instead of the default 1.0.

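The sketch below (not the authors' code) illustrates where the truncation enters: the training loss samples t uniformly in (0, T), and an Euler–Maruyama reverse sampler starts from T instead of 1.0. The helpers `score_model`, `marginal_std`, `prior_sample` and `g` are hypothetical callables for a VE-type setup.

```python
import torch

def truncated_dsm_loss(score_model, x0, marginal_std, T=0.4, eps=1e-5):
    """Denoising score matching with diffusion time truncated to (eps, T), VE kernel."""
    t = torch.rand(x0.shape[0], device=x0.device) * (T - eps) + eps     # t ~ U(eps, T)
    std = marginal_std(t).view(-1, *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    xt = x0 + std * noise                                               # VE perturbation kernel
    score = score_model(xt, t)
    return ((score * std + noise) ** 2).sum(dim=tuple(range(1, x0.dim()))).mean()

@torch.no_grad()
def truncated_euler_maruyama(score_model, prior_sample, g, T=0.4, n_steps=200, eps=1e-3):
    """Reverse-time Euler-Maruyama sampler started at time T (VE sde, zero drift)."""
    x = prior_sample()                       # sample from the auxiliary model / prior at time T
    dt = -(T - eps) / n_steps                # negative time step
    for i in range(n_steps):
        t = torch.full((x.shape[0],), T + i * dt, device=x.device)
        g_t = g(t).view(-1, *([1] * (x.dim() - 1)))
        drift = -(g_t ** 2) * score_model(x, t)
        x = x + drift * dt + g_t * (-dt) ** 0.5 * torch.randn_like(x)
    return x
```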
## L Non-Curated Samples

For completeness, we provide a collection of non-curated samples for the cifar10 (Figs. 9 to 12), mnist (Figs. 13 to 16) and celeba (Fig. 17) datasets, together with the fid scores reported in Table 6.

Table 6: celeba: fid scores for our method and the baseline (T = 1.0).

| Model | fid (↓) (sde) | nfe (↓) |
|---|---|---|
| ScoreSDE (Song et al., 2021c) | 3.90 | 1000 |
| Our (T = 0.5) | 8.06 | 500 |
| Our (T = 0.2) | 86.9 | 200 |
| Our with pretrain diffusion (T = 0.5) | 8.58 | 500 |
| Our with pretrain diffusion (T = 0.2) | 86.7 | 200 |

![26_image_0.png](26_image_0.png)

Figure 9: cifar10: Our (left) and Vanilla (right) method at T = 0.2

![26_image_1.png](26_image_1.png)

Figure 10: cifar10: Our (left) and Vanilla (right) method at T = 0.4

![27_image_0.png](27_image_0.png)

Figure 11: cifar10: Our (left) and Vanilla (right) method at T = 0.6

![27_image_1.png](27_image_1.png)

Figure 12: cifar10: Vanilla method at T = 1.0

![28_image_0.png](28_image_0.png)

Figure 13: MNIST: Our (left) and Vanilla (right) method at T = 0.2

![28_image_1.png](28_image_1.png)

Figure 14: MNIST: Our (left) and Vanilla (right) method at T = 0.4

![29_image_0.png](29_image_0.png)

Figure 15: MNIST: Our (left) and Vanilla (right) method at T = 0.6

![29_image_1.png](29_image_1.png)

Figure 16: MNIST: Vanilla method at T = 1.0

![30_image_0.png](30_image_0.png)

Figure 17: celeba samples. Top: our method with pretrained score model and Glow (T = 0.5); Bottom: baseline diffusion (T = 1.0)

I9jLbsV3mW/I9jLbsV3mW_meta.json ADDED

{
    "languages": null,
    "filetype": "pdf",
    "toc": [],
    "pages": 31,
    "ocr_stats": {
        "ocr_pages": 2,
        "ocr_failed": 0,
        "ocr_success": 2,
        "ocr_engine": "surya"
    },
    "block_stats": {
        "header_footer": 30,
        "code": 0,
        "table": 6,
        "equations": {
            "successful_ocr": 84,
            "unsuccessful_ocr": 9,
            "equations": 93
        }
    },
    "postprocess_stats": {
        "edit": {}
    }
}