RedTachyon committed on
Commit
71a31d2
1 Parent(s): 2042a0f

Upload folder using huggingface_hub

NUkEoZ7Toa/11_image_0.png ADDED

Git LFS Details

  • SHA256: 53002f85040623885a440446eead8401780447722d4603c182650ae95fc5adaa
  • Pointer size: 130 Bytes
  • Size of remote file: 22.3 kB
NUkEoZ7Toa/11_image_1.png ADDED

Git LFS Details

  • SHA256: 8806cb8794e2ff92f38f32ef7eb52f3029e9779d72a2f3875006532b8b65b6ce
  • Pointer size: 130 Bytes
  • Size of remote file: 35.7 kB
NUkEoZ7Toa/17_image_0.png ADDED

Git LFS Details

  • SHA256: 369a85837219967388ddbbea6cbd1bc2c8b8e722cdada8699ac734521586d49e
  • Pointer size: 130 Bytes
  • Size of remote file: 19.3 kB
NUkEoZ7Toa/17_image_1.png ADDED

Git LFS Details

  • SHA256: e3108efd5043045e17ff7a2448559cb23e115f6b35098bd1aae3d8a3ce2e1db5
  • Pointer size: 130 Bytes
  • Size of remote file: 31.3 kB
NUkEoZ7Toa/19_image_0.png ADDED

Git LFS Details

  • SHA256: f4a8b0245ff1cf4a4aea883d82c54704329035deb25ea53663f89d5ffbe5d7c2
  • Pointer size: 130 Bytes
  • Size of remote file: 93.9 kB
NUkEoZ7Toa/19_image_1.png ADDED

Git LFS Details

  • SHA256: 521e158b388f77c2864223b3bbf85c9bbb5a818edcad3e2f357e33daed8c5bf1
  • Pointer size: 130 Bytes
  • Size of remote file: 69.7 kB
NUkEoZ7Toa/19_image_2.png ADDED

Git LFS Details

  • SHA256: aa570a57129f814c13b57b528779ab22ea8231d5065f53308b6edcf12e4ef640
  • Pointer size: 131 Bytes
  • Size of remote file: 185 kB
NUkEoZ7Toa/4_image_0.png ADDED

Git LFS Details

  • SHA256: 5bf4b59581d4ff924aa3c8a1ccb359ec9f717a527c4529fc1562898d98773fb7
  • Pointer size: 130 Bytes
  • Size of remote file: 25.1 kB
NUkEoZ7Toa/6_image_0.png ADDED

Git LFS Details

  • SHA256: 6f3de1eaf734046522ba6c09d7296fb09db95cd6881626f51c1bf5fa4c310a2f
  • Pointer size: 130 Bytes
  • Size of remote file: 78.7 kB
NUkEoZ7Toa/9_image_0.png ADDED

Git LFS Details

  • SHA256: 7f27cbf1c2e424a34514d7b0d53d920153205b9e7fc6b5bfe97c4cfd0f7d14b5
  • Pointer size: 130 Bytes
  • Size of remote file: 47.7 kB
NUkEoZ7Toa/9_image_1.png ADDED

Git LFS Details

  • SHA256: 68db66013dbfd45293778f5f7baedfece1c6efa3b4f4baf725f854f2a103ca2d
  • Pointer size: 131 Bytes
  • Size of remote file: 109 kB
NUkEoZ7Toa/NUkEoZ7Toa.md ADDED
@@ -0,0 +1,625 @@
1
+ # Hierarchical VAE With A Diffusion-Based VampPrior
2
+
3
+ Anonymous authors Paper under double-blind review
4
+
5
+ ## Abstract
6
+
7
+ Deep hierarchical variational autoencoders (VAEs) are powerful latent variable generative models. In this paper, we introduce the Hierarchical VAE with a Diffusion-based Variational Mixture of the Posterior Prior (VampPrior). We apply amortization to scale the VampPrior to models with many stochastic layers. The proposed approach allows us to achieve better performance than the original VampPrior work, as well as state-of-the-art performance compared to other deep hierarchical VAEs, while using far fewer parameters. We empirically validate our method on standard benchmark datasets (MNIST, OMNIGLOT,
8
+ CIFAR10) and demonstrate improved training stability and latent space utilization.
9
+
10
+ ## 1 Introduction
11
+
12
+ Latent variable models (LVMs) parameterized with neural networks constitute a large group in deep generative modeling (Tomczak, 2022). One class of LVMs, Variational Autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014), utilize amortized variational inference to efficiently learn distributions over various data modalities, e.g., images (Kingma & Welling, 2014), audio (Van Den Oord et al., 2017) or molecules
13
+ (Gómez-Bombarelli et al., 2018). The expressive power of VAEs could be improved by introducing a hierarchy of latent variables. The resulting hierarchical VAEs such as ResNET VAEs (Kingma et al., 2016), BIVA
14
+ (Maaløe et al., 2019), very deep VAE (VDVAE) (Child, 2021), or NVAE (Vahdat & Kautz, 2020) achieve state-of-the-art performance on images in terms of the negative log-likelihood (NLL). However, hierarchical VAEs are known to have training instabilities (Vahdat & Kautz, 2020). To mitigate these issues, various tricks were proposed, such as gradient skipping (Child, 2021), spectral normalization (Vahdat & Kautz, 2020), or softmax parameterization of variances (Hazami et al., 2022). In this work, we propose a different approach that focuses on two aspects of hierarchical VAEs: (i) the structure of latent variables, and (ii) the form of the prior for the given structure. We introduce several changes to the architecture of parameterizations (i.e. neural networks) and the model itself. As a result, we can train a powerful hierarchical VAE with gradient-based methods and ELBO as the objective without any *hacks*.
15
+
16
+ In the VAE literature, it is a well-known fact that the choice of the prior plays an important role in the resulting VAE performance (Chen et al., 2017; Tomczak, 2022). For example, VampPrior (Tomczak &
17
+ Welling, 2018), a form of the prior approximating the aggregated posterior, was shown to consistently outperform VAEs with a standard prior and a mixture prior. In this work, we extend the VampPrior to deep hierarchical VAEs in an efficient manner. We propose utilizing a non-trainable linear transformation like Discrete Cosine Transform (DCT) to obtain pseudoinputs. Together with our architecture improvements, we can achieve state-of-the-art performance, high utilization of the latent space, and stable training of deep hierarchical VAEs.
18
+
19
+ The contributions of the paper are the following:
20
+ - We propose a new VampPrior-like approximation of the optimal prior (i.e., the aggregated posterior),
21
+ which can efficiently scale to deep hierarchical VAEs.
22
+
23
+ - We propose a latent aggregation component that consistently improves the utilization of the latent space of the VAE.
24
+
25
+ - Our proposed hierarchical VAE with the new class of priors achieves state-of-the-art results across deep hierarchical VAEs on the considered benchmark datasets.
26
+
27
+ ## 2 Background
+
+ ## 2.1 Hierarchical Variational Autoencoders
28
+
29
+ Let us consider random variables $\mathbf{x} \in \mathcal{X}^{D}$ (e.g., $\mathcal{X} = \mathbb{R}$). We observe $N$ $\mathbf{x}$'s sampled from the empirical distribution $r(\mathbf{x})$. We assume that each $\mathbf{x}$ has $L$ corresponding latent variables $\mathbf{z}_{1:L} = (\mathbf{z}_{1}, \ldots, \mathbf{z}_{L})$, where $\mathbf{z}_{l} \in \mathbb{R}^{M_{l}}$ and $M_{l}$ is the dimensionality of each variable. We aim to find a latent variable generative model with unknown parameters $\theta$, $p_{\theta}(\mathbf{x}, \mathbf{z}_{1:L}) = p_{\theta}(\mathbf{x}|\mathbf{z}_{1:L})\,p_{\theta}(\mathbf{z}_{1:L})$.
30
+
31
+ In general, optimizing latent-variable models with nonlinear stochastic dependencies is non-trivial. A possible solution is approximate inference, e.g., in the form of variational inference (Jordan et al., 1999) with a family of variational posteriors over latent variables {qϕ(z1:L|x)}ϕ. This idea is exploited in Variational Auto-Encoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014), in which variational posteriors are referred to as encoders. As a result, we optimize a tractable objective function called the *Evidence Lower* BOund (ELBO) over the parameters of the variational posterior, ϕ, and the generative part, θ, that is:
32
+
33
+ $$\mathbb{E}_{r(\mathbf{x})}\left[\ln p_{\theta}(\mathbf{x})\right]\geq\mathbb{E}_{r(\mathbf{x})}\Big[\mathbb{E}_{q_{\phi}(\mathbf{z}_{1:L}|\mathbf{x})}\ln p_{\theta}(\mathbf{x}|\mathbf{z}_{1:L})-D_{\mathrm{KL}}\left[q_{\phi}(\mathbf{z}_{1:L}|\mathbf{x})\|p_{\theta}(\mathbf{z}_{1:L})\right]\Big],\tag{1}$$
34
+
35
+ where r(x) is the empirical data distribution. Further, to avoid clutter, we will use Ex [·] instead of Er(x)[·],
36
+ meaning that the expectation is taken with respect to the empirical distribution.
37
+
38
+ ## 2.2 VampPrior
39
+
40
+ The latent variable prior (or marginal) plays a crucial role in the VAE performance which motivates the usage of data-dependent priors. Note that the KL-divergence term in the ELBO (see Eq. 1) could be expressed as follows (Hoffman & Johnson, 2016):
41
+
42
+ $$\mathbb{E}_{\mathbf{x}}D_{\mathrm{KL}}\left[q_{\phi}(\mathbf{z}|\mathbf{x})\|p(\mathbf{z})\right]=D_{\mathrm{KL}}\left[q_{\phi}(\mathbf{z})\|p(\mathbf{z})\right]+\mathbb{E}_{\mathbf{x}}\left[D_{\mathrm{KL}}\left[q_{\phi}(\mathbf{z}|\mathbf{x})\|q_{\phi}(\mathbf{z})\right]\right].\tag{2}$$
43
+
44
+ Therefore, the optimal prior that maximizes the ELBO has the following form:
45
+
46
+ $$p^{*}(\mathbf{z})=\mathbb{E}_{\mathbf{x}}\left[q_{\phi}(\mathbf{z}|\mathbf{x})\right].\tag{3}$$
50
+ The main problem with the optimal prior in Eq. 3 is the summation over all N training datapoints. Since N could be very large (e.g., tens or hundreds of thousands), using such a prior is infeasible due to potentially very high memory demands. As an alternative approach, Tomczak & Welling (2018) proposed VampPrior, a new class of priors that approximate the optimal prior in the following manner:
51
+
52
+ $$p^{*}(\mathbf{z})=\mathbb{E}_{\mathbf{x}}q_{\phi}(\mathbf{z}|\mathbf{x})\approx\mathbb{E}_{r(\mathbf{u})}q_{\phi}(\mathbf{z}|\mathbf{u}),\tag{4}$$
+
+ where $\mathbf{u}$ is a *pseudoinput*, i.e., a variable mimicking real data, $r(\mathbf{u})=\frac{1}{K}\sum_{k}\delta(\mathbf{u}-\mathbf{u}_{k})$ is the distribution of $\mathbf{u}$ in the form of a mixture of Dirac's deltas, and $\{\mathbf{u}_{k}\}_{k=1}^{K}$ are learnable parameters (we will refer to them as pseudoinputs as well). $K$ is a hyperparameter and is assumed to be smaller than the size of the training dataset, $K < N$. Pseudoinputs are randomly initialized before training and are learned along with model parameters by optimizing the ELBO objective using a gradient-based method.
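+ To make the construction concrete, the following is a minimal sketch (not the authors' code) of how a VampPrior density can be evaluated as a uniform mixture of Gaussian encoder posteriors over $K$ learnable pseudoinputs; the `encoder` returning a mean and log-variance is a hypothetical stand-in for any amortized Gaussian encoder.
+
+ ```python
+ import math
+ import torch
+
+ def vampprior_log_prob(z, pseudoinputs, encoder):
+     """log p*(z) ~= log[(1/K) * sum_k N(z | mu_k, diag(sigma_k^2))],
+     where (mu_k, log_var_k) = encoder(u_k) for K learnable pseudoinputs u_k."""
+     mu, log_var = encoder(pseudoinputs)              # each of shape (K, M)
+     z = z.unsqueeze(1)                               # (B, 1, M), broadcasts against (K, M)
+     log_normal = -0.5 * (math.log(2 * math.pi) + log_var
+                          + (z - mu) ** 2 / log_var.exp())
+     log_components = log_normal.sum(-1)              # (B, K): log N(z | mu_k, sigma_k^2 I)
+     K = pseudoinputs.shape[0]
+     return torch.logsumexp(log_components, dim=1) - math.log(K)
+ ```
+
+ In such a setup, the pseudoinputs would be registered as trainable parameters and optimized with the ELBO; the O(K) encoder evaluations per prior query are exactly the cost that the amortized variant in Section 3.1 avoids.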
58
+
59
+ In follow-up work, Egorov et al. (2021) suggested using a separate objective for pseudoinputs (a greedy boosting approach) and demonstrated the superior performance of such a formulation in the continual learning setting. Here, we will present a different approximation to the optimal prior instead.
60
+
61
62
+
63
+ ## 2.3 Ladder VAEs (a.k.a. Top-Down VAEs)
64
+
65
+ We refer to models with many stochastic layers L as deep hierarchical VAEs. They can differ in the way the prior and variational posterior distributions are factorized and parameterized. Here, we follow the factorization proposed in Ladder VAE (Sønderby et al., 2016) that considers the prior distribution over the latent variables factorized in an autoregressive manner:
66
+
67
+ $$p_{\theta}(\mathbf{z}_{1},\ldots,\mathbf{z}_{L})=p_{\theta}(\mathbf{z}_{L})\prod_{l=1}^{L-1}p_{\theta}(\mathbf{z}_{l}|\mathbf{z}_{l+1:L}).\tag{5}$$
69
+
70
+ Next, using the top-down inference model results in the following variational posteriors (Sønderby et al.,
71
+ 2016):
72
+
73
+ $$q_{\phi}(\mathbf{z}_{1},\ldots,\mathbf{z}_{L}|\mathbf{x})=q_{\phi}(\mathbf{z}_{L}|\mathbf{x})\prod_{l=1}^{L-1}q_{\phi}(\mathbf{z}_{l}|\mathbf{z}_{l+1:L},\mathbf{x}).\tag{6}$$
75
+
76
+ This factorization was previously used by successful deep hierarchical VAEs, among others, NVAE (Vahdat &
77
+ Kautz, 2020) and Very Deep VAE (VDVAE) (Child, 2021). It was shown empirically that such a formulation allows for achieving state-of-the-art performance on several image datasets.
78
+
79
+ ## 3 Our Model: Diffusion-Based VampPrior VAE
80
+
81
+ In this work, we introduce the Diffusion-based VampPrior VAE (DVP-VAE). It is a deep hierarchical VAE model that approximates the optimal prior distribution at all levels of the hierarchical VAE in an efficient way.
82
+
83
+ ## 3.1 VampPrior For Hierarchical VAE
84
+
85
+ Directly using the VampPrior approximation (see Eq. 4) for deep hierarchical VAE can be very computationally expensive since it requires evaluating the variational posterior of all latent variables for K pseudoinputs at each training iteration. Thus, (Tomczak & Welling, 2018) proposed a modification in which only the top latent variable uses VampPrior, namely:
86
+
87
+ $$p^{*}(\mathbf{z}_{1:L})=\mathbb{E}_{\mathbf{x}}q_{\phi,\theta}(\mathbf{z}_{1:L}|\mathbf{x})\approx\mathbb{E}_{\mathbf{x}}q_{\phi,\theta}(\mathbf{z}_{L}|\mathbf{x})p_{\theta}(\mathbf{z}_{1:L-1})\approx\mathbb{E}_{r(\mathbf{u})}q_{\phi,\theta}(\mathbf{z}_{L}|\mathbf{u})p_{\theta}(\mathbf{z}_{1:L-1}),\tag{7}$$
88
+
89
+ where $r(\mathbf{u})=\frac{1}{K}\sum_{k}\delta(\mathbf{u}-\mathbf{u}_{k})$ with learnable pseudoinputs $\{\mathbf{u}_{k}\}_{k=1}^{K}$. In this approach, there are a few problems: (i) how to pick the *best* number of pseudoinputs $K$, (ii) how to train pseudoinputs, and (iii) how to train the VampPrior in a scalable fashion. The last issue results from the first two problems and the fact that the dimensionality of pseudoinputs is the same as that of the original data, i.e., $\dim(\mathbf{u}) = \dim(\mathbf{x})$.
93
+
94
+ Here, we propose a different prior parameterization to overcome all these three problems. Our approach
95
+ consists of three steps in which we approximate the VampPrior at all levels of the deep hierarchical VAE.
96
+ We propose to *amortize* the distribution of pseudoinputs in VampPrior and use them to *directly* condition
97
+ the prior distribution:
98
+ $$p^{*}(\mathbf{z}_{1:L})=\mathbb{E}_{\mathbf{x}}q_{\phi,\theta}(\mathbf{z}_{1:L}|\mathbf{x})\approx\mathbb{E}_{\mathbf{x},r(\mathbf{u}|\mathbf{x})}p_{\theta}(\mathbf{z}_{1:L}|\mathbf{u}),\tag{8}$$
99
+ where we use r(u) = Ex[r(u|x)]. Using r(u|x), which is cheap to evaluate for any input, we avoid the
100
+ expensive procedure of encoding pseudoinputs along with the inputs x. Amortizing the VampPrior solves
101
+ the problem of picking K and helps with training pseudoinputs.
102
+
103
+ To define the conditional distribution, we treat pseudoinputs $\mathbf{u}$ as the result of a noisy non-trainable transformation of the input datapoints $\mathbf{x}$. Let us consider a transformation $f : \mathcal{X}^{D} \to \mathcal{X}^{P}$, i.e., $\mathbf{u} = f(\mathbf{x}) + \sigma\varepsilon$, where $\varepsilon$ is a standard Gaussian random variable and $\sigma$ is the standard deviation. Note that by applying $f$, e.g., a linear transformation, we can lower the dimensionality of pseudoinputs, $\dim(\mathbf{u}) < \dim(\mathbf{x})$, resulting in better scalability. As a result, we get the following amortized distribution:
105
+
106
+ $$r(\mathbf{u}|\mathbf{x})={\mathcal{N}}(\mathbf{u}|f(\mathbf{x}),\sigma^{2}I).\tag{9}$$
109
+
110
111
+
112
+ **Algorithm 1** $f_{\mathrm{dct}}$: Create DCT-based pseudoinputs
+ Input: $\mathbf{x} \in \mathbb{R}^{c\times D\times D}$, $S \in \mathbb{R}^{c\times d\times d}$, $d$
+ 1. $\mathbf{u}_{\mathrm{DCT}} = \mathrm{DCT}(\mathbf{x})$
+ 2. $\mathbf{u}_{\mathrm{DCT}} = \mathrm{Crop}(\mathbf{u}_{\mathrm{DCT}}, d)$
+ 3. $\mathbf{u}_{\mathrm{DCT}} = \mathbf{u}_{\mathrm{DCT}} / S$
+ Return: $\mathbf{u}_{\mathrm{DCT}} \in \mathbb{R}^{c\times d\times d}$
+
+ **Algorithm 2** $f^{\dagger}_{\mathrm{dct}}$: Invert DCT-based pseudoinputs
+ Input: $\mathbf{u}_{\mathrm{DCT}} \in \mathbb{R}^{c\times d\times d}$, $S \in \mathbb{R}^{c\times d\times d}$, $D$
+ 1. $\mathbf{u}_{\mathrm{DCT}} = \mathbf{u}_{\mathrm{DCT}} \cdot S$
+ 2. $\mathbf{u}_{\mathrm{DCT}} = \mathrm{zero\_pad}(\mathbf{u}_{\mathrm{DCT}}, D - d)$
+ 3. $\mathbf{u}_{x} = \mathrm{iDCT}(\mathbf{u}_{\mathrm{DCT}})$
+ Return: $\mathbf{u}_{x} \in \mathbb{R}^{c\times D\times D}$
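+ As a rough reference, here is a sketch of the two procedures using SciPy's type-II DCT. The per-frequency normalization matrix `S` is assumed to be precomputed from the training data as described below; the exact cropping/padding layout is our reading of Algorithms 1 and 2, not the authors' released code.
+
+ ```python
+ import numpy as np
+ from scipy.fft import dctn, idctn
+
+ def f_dct(x, S, d):
+     """Create a DCT-based pseudoinput: DCT each channel, keep the d x d
+     low-frequency corner, and normalize by S of shape (c, d, d)."""
+     u = dctn(x, type=2, norm='ortho', axes=(-2, -1))   # (c, D, D), frequency domain
+     u = u[..., :d, :d]                                 # crop: keep low frequencies
+     return u / S                                       # per-frequency normalization
+
+ def f_dct_inv(u_dct, S, D):
+     """Invert the pseudoinput back to the data domain: undo the normalization,
+     zero-pad the removed high frequencies, apply the inverse DCT."""
+     c, d, _ = u_dct.shape
+     padded = np.zeros((c, D, D), dtype=u_dct.dtype)
+     padded[:, :d, :d] = u_dct * S
+     return idctn(padded, type=2, norm='ortho', axes=(-2, -1))
+
+ # S would be computed once from the training set before training, e.g.:
+ # S = np.abs(dctn(train_images, type=2, norm='ortho', axes=(-2, -1))).max(axis=0)[..., :d, :d]
+ ```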
116
+
117
+ The crucial part then is how to choose the transformation f. It is a non-trivial choice since we require the following properties of f: (i) it should result in dim(u) < dim(x), (ii) u should be a *reasonable* representation of x, (iii) it should be *easily* computable (e.g., fast for scalability). We have two candidates for such transformations. First, we can consider a downsampled version of an image. Second, we propose to use a discrete cosine transform. We will discuss this approach in the following subsection.
118
+
119
+ Moreover, the amortized VampPrior outlined in the previous two steps seems to be a good candidate for efficient and scalable training. However, it is not directly suitable for generating new data. Therefore, we propose including pseudoinputs as the final level in our model and using a marginal distribution $\hat{r}(\mathbf{u})$ that approximates $r(\mathbf{u})$. Here, we propose to use a diffusion-based model for $\hat{r}(\mathbf{u})$.
120
+
121
+ ## 3.2 DCT-Based Pseudoinputs
122
+
123
+ The first important component in our approach is the form of the non-trainable transformation from the input to the pseudoinput space. We assume that for $\mathbf{u}$ to be a *reasonable* representation of $\mathbf{x}$, $\mathbf{u}$ should preserve the general patterns (information) of $\mathbf{x}$, but it does not necessarily need to contain any high-frequency details of $\mathbf{x}$. To achieve this, we propose to use a *discrete cosine transform*¹ (DCT) to convert the input into the frequency domain and then filter out the high-frequency components.
+
+ DCT DCT (Ahmed et al., 1974) is a widely used transformation in signal processing for image, video, and audio data. For example, it is part of the JPEG standard (Pennebaker & Mitchell, 1992). For instance, let us consider a signal as a 3-dimensional tensor $\mathbf{x} \in \mathbb{R}^{c\times D\times D}$. DCT is a linear transformation that decomposes each channel $\mathbf{x}_{i}$ on a basis consisting of cosine functions of different frequencies: $\mathbf{u}_{DCT,i} = C\mathbf{x}_{i}C^{\top}$, where $C_{k,n}=\sqrt{\tfrac{1}{D}}$ for all pairs $(k=0,n)$, and $C_{k,n}=\sqrt{\tfrac{2}{D}}\cos\left(\tfrac{\pi}{D}\left(n+\tfrac{1}{2}\right)k\right)$ for all pairs $(k,n)$ such that $k>0$.
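+ For reference, a small numerical sketch of the orthonormal DCT-II matrix $C$ defined above (our construction from the stated formula, not code from the paper):
+
+ ```python
+ import numpy as np
+
+ def dct_matrix(D):
+     """Orthonormal type-II DCT matrix: C[0, n] = sqrt(1/D),
+     C[k, n] = sqrt(2/D) * cos(pi/D * (n + 0.5) * k) for k > 0."""
+     n = np.arange(D)
+     k = np.arange(D)[:, None]
+     C = np.sqrt(2.0 / D) * np.cos(np.pi / D * (n + 0.5) * k)
+     C[0, :] = np.sqrt(1.0 / D)
+     return C
+
+ C = dct_matrix(8)
+ assert np.allclose(C @ C.T, np.eye(8))   # C is orthogonal, so the transform is invertible
+ # For a channel x_i of shape (D, D): u_i = C @ x_i @ C.T
+ ```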
131
+
132
+ Our transformation We use the DCT transform as the first step in the procedure of computing pseudoinputs. Let us assume that each channel of x is D × D. We select the desired size of the context *d < D*
133
+ and remove (crop) D − d bottom rows and right-most columns for each channel in the frequency domain since they contain the highest-frequency information. Finally, we perform normalization using the matrix S that contains the maximal absolute value of each frequency. We calculate this matrix once (before training the model) using all the training data: S = maxx∈Dtrain |DCT(x)|. The complete procedure is described in Algorithm 1 and we denote it as fdct.
134
+
135
+ In the frequency domain, a pseudoinput has a smaller spatial dimension than its corresponding datapoint.
136
+
137
+ This allows us to use a small prior model and lower the memory consumption. However, we noticed empirically that conditioning the amortized VampPrior (see Eq. 8) on the pseudoinput in the original domain makes training easier. Therefore, the pseudo-inverse is applied as a part of the TopDown path. First, we start by multiplying the pseudoinput by the normalization matrix S. Afterward, we pad each channel with zeros to account for the "lost" high frequencies. Lastly, we apply the inverse of the Discrete Cosine Transform (iDCT). We denote the procedure for converting a pseudoinput from the frequency domain to the data domain as $f^{\dagger}_{\mathrm{dct}}$ and describe it in Algorithm 2.
140
+
141
+ 1We consider the most widely used type-II DCT.
142
+
143
+ ![4_image_0.png](4_image_0.png)
144
+
145
146
+
147
+ Figure 1: Graphical model of the TopDown hierarchical VAE with three latent variables (a) without pseudoinputs and (b) with pseudoinputs. The inference model (left) and the generative model (right) share parameters in the TopDown path (blue). The dashed arrow represents a non-trainable transformation.
148
+
149
+ ## 3.3 Our Ladder VAE And Training Objective
150
+
151
+ We use the TopDown architecture and extend this model with a deterministic, non-trainable function to create the pseudoinput. In our generative model, pseudoinputs are treated as another set of latent variables, namely:
152
+
153
+ $$p_{\theta}(\mathbf{x},\mathbf{z}_{1:L},\mathbf{u})=p_{\theta}(\mathbf{x}|\mathbf{z}_{1:L})\,p_{\theta}(\mathbf{z}_{1:L}|\mathbf{u})\,r(\mathbf{u}),\tag{10}$$
+ $$p_{\theta}(\mathbf{z}_{1:L}|\mathbf{u})=p_{\theta}(\mathbf{z}_{L}|\mathbf{u})\prod_{l=1}^{L-1}p_{\theta}(\mathbf{z}_{l}|\mathbf{z}_{l+1:L},\mathbf{u}).\tag{11}$$
155
+ Then, we choose variational posteriors in which the pseudoinput latent variables are conditionally independent of all other latent variables given the datapoint x, that is:
156
+
157
+ $$q_{\phi}(\mathbf{z}_{1:L},\mathbf{u}|\mathbf{x})=q_{\phi}(\mathbf{z}_{1:L}|\mathbf{x})\,r(\mathbf{u}|\mathbf{x}),\tag{12}$$
+ $$q_{\phi}(\mathbf{z}_{1:L}|\mathbf{x})=q_{\phi}(\mathbf{z}_{L}|\mathbf{x})\prod_{l=1}^{L-1}q_{\phi}(\mathbf{z}_{l}|\mathbf{z}_{l+1:L},\mathbf{x}),\tag{13}$$
+ $$r(\mathbf{u}|\mathbf{x})=\mathcal{N}(\mathbf{u}|f(\mathbf{x}),\sigma^{2}I).\tag{14}$$
161
+ In the variational posteriors, we use the amortization of r(u) outlined in Eq. 8.
162
+
163
+ Let us consider a Ladder VAE (a TopDown VAE) with three levels of latent variables. We depict the graphical model of this latent variable model in Figure 1a with the inference model on the left and the generative model on the right. Note that inference and generative models use shared parameters in the TopDown path, denoted by the blue arrows.
164
+
165
+ In Figure 1b we show a graphical model of our proposed model that additionally contains pseudoinputs u. The generation model (Figure 1b right) is conditioned on the pseudoinput variable u at each level. This formulation is similar to the iVAE model (Khemakhem et al., 2020) in which the auxiliary random variable u is used to enforce the identifiability of the latent variable model. However, unlike iVAE, we do not treat u as an observed variable. Instead, we introduce the pseudoinput prior distribution pγ(u) with learnable parameters γ. This allows us to get unconditional samples from the model. Finally, the Evidence Lower BOund for our model takes the following form:
166
+
167
+ $$\mathcal{L}(\mathbf{x},\phi,\theta,\gamma)=\mathbb{E}_{q_{\phi}(\mathbf{z}_{1:L}|\mathbf{x})}\ln p_{\theta}(\mathbf{x}|\mathbf{z}_{1:L})-\mathbb{E}_{r(\mathbf{u}|\mathbf{x})}D_{\mathrm{KL}}\left[q_{\phi}(\mathbf{z}_{1:L}|\mathbf{x})\|p_{\theta}(\mathbf{z}_{1:L}|\mathbf{u})\right]-D_{\mathrm{KL}}\left[r(\mathbf{u}|\mathbf{x})\|r(\mathbf{u})\right].\tag{15}$$
+
+ The ELBO for our model gives a straightforward objective for training an approximate prior by minimizing the Kullback-Leibler divergence between $r(\mathbf{u}|\mathbf{x})$ and the amortized VampPrior $r(\mathbf{u}) = \mathbb{E}_{\mathbf{x}}[r(\mathbf{u}|\mathbf{x})]$. However, calculating the KL-term with $r(\mathbf{u})$ is computationally demanding and, in fact, using $r(\mathbf{u})$ as the prior does not solve the original problem of using the VampPrior. Therefore, in the following section, we propose to use an approximation $\hat{r}_{\gamma}(\mathbf{u})$ instead.
174
+
175
+ ## 3.4 Diffusion-Based VampPrior
176
+
177
+ Even though pseudoinputs are assumed to be much simpler than the observed datapoint x (e.g., in terms of their dimensionality), a very flexible prior distribution rˆγ(u) is required to ensure the high quality of the final samples. Since we cannot use the amortized VampPrior directly, following (Vahdat et al., 2021; Wehenkel &
178
+ Louppe, 2021), we propose to use a diffusion-based generative model (Ho et al., 2020) as the prior and the approximation of r(u).
179
+
180
+ Diffusion models are flexible generative models and, in addition, can be seen as latent variable generative models (Kingma et al., 2021). As a result, we have access to the lower bound on its log-likelihood function, namely:
181
+
182
+ $$\log\hat{r}_{\gamma}(\mathbf{u})\geq L_{\mathrm{vlb}}(\mathbf{u},\gamma)=\mathbb{E}_{q(\mathbf{y}_{0}|\mathbf{u})}\big[\ln r(\mathbf{u}|\mathbf{y}_{0})\big]-D_{\mathrm{KL}}\left[q(\mathbf{y}_{1}|\mathbf{u})\|r(\mathbf{y}_{1})\right]-\sum_{i=1}^{T}\mathbb{E}_{q(\mathbf{y}_{i/T}|\mathbf{u})}D_{\mathrm{KL}}\left[q(\mathbf{y}_{(i-1)/T}|\mathbf{y}_{i/T},\mathbf{u})\|r_{\gamma}(\mathbf{y}_{(i-1)/T}|\mathbf{y}_{i/T})\right].\tag{16}$$
184
+
185
+ We provide more details on diffusion models and the derivation of the ELBO in Appendix A. We refer to this prior as *Diffusion-based VampPrior* (DVP). This prior allows us to sample infinitely many pseudoinputs, unlike the original VampPrior that uses a fixed set of K pseudoinputs.
186
+
187
+ Now, we can plug this lower bound into objective (Eq. 15) and obtain the final objective of our Ladder VAE
188
+ with pseudoinputs and the Diffusion-based VampPrior (dubbed DVP-VAE):
189
+
190
+ $$\max_{\phi,\theta,\gamma}\;\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\ln p_{\theta}(\mathbf{x}|\mathbf{z})-\mathbb{E}_{r(\mathbf{u}|\mathbf{x})}D_{\mathrm{KL}}\left[q_{\phi}(\mathbf{z}|\mathbf{x})\|p_{\theta}(\mathbf{z}|\mathbf{u})\right]+\mathbb{H}[r(\mathbf{u}|\mathbf{x})]+\mathbb{E}_{r(\mathbf{u}|\mathbf{x})}L_{\mathrm{vlb}}(\mathbf{u},\gamma).\tag{17}$$
191
+
192
+ Note that $\mathbb{H}[r(\mathbf{u}|\mathbf{x})] = \frac{P}{2}\log\left(2\pi e\,\sigma^{2}\right)$, where $\sigma$ is a learnable parameter (see Eq. 9) and $P$ is the dimensionality of the pseudoinput.
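+ Schematically, the terms of the objective in Eq. 17 could be assembled for one minibatch as in the sketch below. This is a deliberately simplified illustration: `model.f`, `model.elbo_terms`, and `model.diffusion_prior.vlb` are hypothetical components standing in for the corresponding parts of the model, and `sigma` is treated as a fixed scalar here even though it is learnable in the paper.
+
+ ```python
+ import math
+ import torch
+
+ def dvp_vae_loss(x, model, sigma):
+     """Negative of the objective in Eq. 17 (a sketch, not the reference implementation)."""
+     # r(u|x) = N(u | f(x), sigma^2 I): sample a pseudoinput with the reparameterization trick
+     u_mean = model.f(x)
+     u = u_mean + sigma * torch.randn_like(u_mean)
+
+     recon_logprob, kls = model.elbo_terms(x, u)        # E_q ln p(x|z), per-layer KL terms
+     kl_z = torch.stack(kls).sum(0)                     # sum over stochastic layers
+
+     P = u_mean[0].numel()                              # pseudoinput dimensionality
+     entropy_u = 0.5 * P * math.log(2 * math.pi * math.e * sigma ** 2)  # H[r(u|x)]
+
+     l_vlb = model.diffusion_prior.vlb(u)               # lower bound on log r_hat(u)
+
+     objective = recon_logprob - kl_z + entropy_u + l_vlb
+     return -objective.mean()                           # minimize the negative of Eq. 17
+ ```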
193
+
194
+ ## 4 Model Details: Architecture And Parameterization
195
+
196
+ The model architecture and its parameterization are crucial to the scalability of the model. In this section, we discuss the specific choices we made. The starting point for our architecture is the architecture proposed in VDVAE (Child, 2021). However, there are certain differences. We schematically depict our architecture in Figure 2a. We consider a hierarchical TopDown VAE with $L$ stochastic layers, namely, latent variables $\mathbf{z}_{1}, \ldots, \mathbf{z}_{L}$. We assume that each latent variable has the same number of channels, but they differ in spatial dimensions: $\mathbf{z}_{l} \in \mathbb{R}^{c\times h_{l}\times w_{l}}$. We refer to different spatial dimensions of the latent space as *scales*.
198
+
199
+ ## 4.1 Bottom-Up
200
+
201
+ The bottom-up part corresponds to the computation of intermediary variables that depend on $\mathbf{x}$. We follow the implementation of Child (2021) for it. We start from the bottom-up path depicted in Figure 2a (left), which is fully deterministic and consists of several ResNet blocks (see Figure 2c). The input is processed by $N_{enc}$ blocks at each scale, and the output of the last ResNet block of each scale is passed to the TopDown path in Figure 2a (right). Note that here $N_{enc}$ is a separate hyperparameter that does not depend on the number of stochastic layers $L$.
202
+
203
+ ## 4.2 Topdown
204
+
205
+ The TopDown path depicted in Figure 2a (right) computes the parameters of the variational posterior and the prior distribution starting from the top latent variable zL.
206
+
207
+ ![6_image_0.png](6_image_0.png)
208
+
209
+ Figure 2: A diagram of the DVP-VAE: TopDown hierarchical VAE with the diffusion-based VampPrior.
210
+
211
+ (a) *BottomUp* path (left) and *TopDown* path (right). (b) *TopDown* block, which takes features from the block above $\mathbf{h}^{dec}$, encoder features $\mathbf{h}^{enc}$ (only during training), and a pseudoinput $\mathbf{u}$ as inputs. (c) A single ResNet block. (d) A single pseudoinput block.
212
+ The first step is the pseudoinput block shown in Figure 2d. Using the deterministic function $f_{\mathrm{dct}}$, it creates the pseudoinput random variable from the input $\mathbf{x}$ (see Algorithm 1) that is used to train the Diffusion-based VampPrior $\hat{r}_{\gamma}(\mathbf{u})$. At test time, a pseudoinput is sampled using this unconditional prior. The pseudoinput sample is then converted back to the input domain (see Algorithm 2) and used to condition the prior distributions at all levels, $p_{\theta}(\mathbf{z}_{1:L}|\mathbf{u})$.
213
+
214
+ Next, the model has L TopDown blocks depicted in Figure 2b. Each TopDown block takes deterministic features from the corresponding scale of the bottom-up path denoted as henc, the output of the pseudoinput block u, and deterministic features from the TopDown block above hdec as inputs. Our implementation of this block is similar to the VDVAE architecture, but there are several differences that we summarize below:
215
+ - *Incorporating pseudoinputs* We concatenate the pseudoinput (properly reshaped using average pooling) with hdec to compute the parameters of the prior distribution.
216
+
217
+ - *Variational posterior parameters* We assume that both hdec and henc have the same number of channels, allowing us to sum them instead of concatenating. This reduces the total number of parameters and memory consumption.
218
+
219
+ - *Additional ResNet Connections* Our TopDown block has three ResNet blocks (depicted in orange in Figure 2). In contrast to our architecture, in VDVAE only the block that updates hdec has a residual connection.
220
+
221
+ We did not observe any training instabilities and did not apply the *gradient skipping* used in VDVAE (Child, 2021).
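+ A compressed sketch of the prior/posterior parameterization inside one TopDown block, reflecting the three points above (summing the encoder and decoder features for the posterior, concatenating a pooled pseudoinput for the prior). Module names and sizes are illustrative; this is not the released architecture.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class TopDownBlock(nn.Module):
+     """One stochastic layer: q(z_l|.) from h_dec + h_enc, p(z_l|.) from [h_dec, pool(u)]."""
+     def __init__(self, channels, z_channels, u_channels):
+         super().__init__()
+         self.posterior_net = nn.Conv2d(channels, 2 * z_channels, 3, padding=1)
+         self.prior_net = nn.Conv2d(channels + u_channels, 2 * z_channels, 3, padding=1)
+         self.up = nn.Conv2d(z_channels, channels, 3, padding=1)
+
+     def forward(self, h_dec, h_enc, u):
+         u_pooled = nn.functional.adaptive_avg_pool2d(u, h_dec.shape[-2:])
+         p_mu, p_logvar = self.prior_net(torch.cat([h_dec, u_pooled], dim=1)).chunk(2, dim=1)
+         if h_enc is not None:                      # training: use the variational posterior
+             q_mu, q_logvar = self.posterior_net(h_dec + h_enc).chunk(2, dim=1)
+             z = q_mu + (0.5 * q_logvar).exp() * torch.randn_like(q_mu)
+         else:                                      # generation: sample from the prior
+             z = p_mu + (0.5 * p_logvar).exp() * torch.randn_like(p_mu)
+         h_dec = h_dec + self.up(z)                 # residual update of the deterministic path
+         return h_dec, z
+ ```
+
+ A full implementation would additionally return the per-layer KL term computed from the two Gaussians and wrap each convolution in residual blocks, which we omit for brevity.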
222
+
223
+ ## 4.3 Latent Aggregation In Conditional Likelihood
224
+
225
+ The last important element of our architecture is the *aggregation of latents*. Let us denote samples from either the variational posteriors qϕ(z1:L|x) during training or the prior pθ(z1:L|u) during generating new data as z˜1*, . . . ,* z˜L. Furthermore, let h1 be the output of the last TopDown block. These deterministic features are computed as a function of all samples z˜1*, . . . ,* z˜L. Therefore, it can be used to calculate the final likelihood value. However, we observe empirically that in such parametrization some layers of latent variables tend to be completely ignored by the model. Instead, we propose to enforce a strong connection between the conditional likelihood and all latent variables by explicitly conditioning on all of the sampled latent variables, namely:
226
+
227
+ $$p_{\theta}(\mathbf{x}|\mathbf{z}_{1:L})=p_{\theta}\left(\mathbf{x}\,\Bigg|\,\mathrm{NN}\left(\frac{1}{\sqrt{L}}\sum_{l}\tilde{\mathbf{z}}_{l}\right)\right).\tag{18}$$
229
+
230
+ We refer to this as the *latent aggregation*. We show empirically in the experimental section that this leads to a consistently high ratio of active units.
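+ In code, the parameterization in Eq. 18 amounts to a few lines (a sketch; `likelihood_net` would be whatever network maps the aggregated latents to the likelihood parameters). Latents at different scales would first have to be brought to a common spatial size; we gloss over that with a simple resize here, which is our simplification rather than the paper's exact procedure.
+
+ ```python
+ import math
+ import torch
+ import torch.nn.functional as F
+
+ def aggregate_latents(z_list, target_hw):
+     """Compute (1/sqrt(L)) * sum_l z_l after resizing every latent to target_hw."""
+     L = len(z_list)
+     resized = [F.interpolate(z, size=target_hw, mode='nearest') for z in z_list]
+     return torch.stack(resized).sum(0) / math.sqrt(L)
+
+ # p(x | z_{1:L}) = p(x | likelihood_net(aggregate_latents(z_list, x.shape[-2:])))
+ ```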
231
+
232
+ ## 5 Related Work
233
+
234
+ Latent prior in VAEs The original VAE formulation uses the standard Gaussian distribution as a prior over the latent variables. This can be an overly simplistic choice, as the prior maximizing the Evidence Lower BOund is given by the aggregated posterior (Hoffman & Johnson, 2016; Tomczak & Welling, 2018).
235
+
236
+ Furthermore, using a unimodal prior with multimodal real-world data can lead to a non-smooth encoder or meaningless distances in the latent space (Bozkurt et al., 2019). More flexible prior distributions proposed in the literature include the Gaussian mixture model (Jiang et al., 2016; Nalisnick et al., 2016; Tran et al., 2022), the autoregressive normalizing flow (Chen et al., 2017), the autoregressive model (Gulrajani et al., 2016; Sadeghi et al., 2019), the rejection sampling distribution with a learned acceptance function Bauer & Mnih
237
+ (2019), the diffusion-based prior (Vahdat et al., 2021; Wehenkel & Louppe, 2021). The VampPrior (Tomczak
238
+ & Welling, 2018) proposes to use an approximation of the aggregated posterior as a prior distribution. The approximation is constructed using learnable pseudoinputs to the encoder. This work can be seen as an efficient extension of the VampPrior to deep hierarchical VAE, which also utilizes a diffusion-based prior over the pseudoinputs.
239
+
240
+ Auxiliary Variables in VAEs Several works consider auxiliary variables u as a way to improve the flexibility of the variational posterior. Maaløe et al. (2016) use auxiliary variables with one-level VAE to improve the variational approximation while keeping the generative model unchanged. Salimans et al. (2015) use Markov transition kernel for the same expressivity purpose. The authors treat intermediate MCMC samples as an auxiliary random variable and derive evidence lower bound of the extended model. Ranganath et al.
241
+
242
+ (2016) introduce hierarchical variational models. They increase the flexibility of the variational approximation by imposing a prior on its parameters. In this setting, it is assumed that the latent variable $\mathbf{z}$ and the auxiliary variable $\mathbf{u}$ are not conditionally independent, and the variational posterior factorizes, for example, as follows:
+
+ $$q_{\phi}(\mathbf{u},\mathbf{z}|\mathbf{x})=q_{\phi}(\mathbf{u}|\mathbf{x})q_{\phi}(\mathbf{z}|\mathbf{u},\mathbf{x}).\tag{19}$$
+
+ In this work, in contrast, we use the auxiliary variable to increase the flexibility of the prior and use a conditional independence assumption in the variational posterior:
+
+ $$q_{\phi}(\mathbf{u},\mathbf{z}|\mathbf{x})=q_{\phi}(\mathbf{u}|\mathbf{x})q_{\phi}(\mathbf{z}|\mathbf{x}).\tag{20}$$
249
+ Khemakhem et al. (2020) consider the non-identifiability problem of VAEs. They propose to use auxiliary observation u and use it to condition the prior distribution. This additional observation is similar to the
250
+
251
253
+
254
+ pseudoinputs that we consider in our work. However, we define a way to construct u from the input and learn a prior distribution to sample it during inference, while Khemakhem et al. (2020) require u to be observed both during training and at the inference time.
255
+
256
+ Similarly to our work, Klushyn et al. (2019) consider a hierarchical prior $p_{\theta}(\mathbf{z}|\mathbf{u})p(\mathbf{u})$. However, they treat $\mathbf{u}$ rather as a second layer of latent variables and learn a variational posterior of the form $q_{\phi}(\mathbf{u},\mathbf{z}|\mathbf{x}) = q_{\phi}(\mathbf{u}|\mathbf{z})q_{\phi}(\mathbf{z}|\mathbf{x})$.
+
+ Latent Variables Aggregation There are different ways in which the conditional likelihood $p_{\theta}(\mathbf{x}|\mathbf{z}_{1:L})$
257
+ can be parameterized. In LadderVAE (Sønderby et al., 2016), where TopDown hierarchical VAE was originally proposed, the following formulation is used:
258
+
259
+ $$p_{\theta}(\mathbf{x}|\mathbf{z}_{1:L})=p_{\theta}(\mathbf{x}|\mathrm{NN}(\mathbf{z}_{1})).\tag{21}$$
262
+ That is, the conditional likelihood depends directly on the bottom latent variable z1 only.
263
+
264
+ Later, NVAE (Vahdat & Kautz, 2020) and VDVAE (Child, 2021) use a deterministic path in the TopDown architecture in the conditional likelihood, namely:
265
+
266
+ $$p_{\theta}(\mathbf{x}|\mathbf{z}_{1:L})=p_{\theta}(\mathbf{x}|\mathrm{NN}(\mathbf{h}_{1})).\tag{22}$$
269
+ Note that the deterministic features depend on all the latent variables. However, we propose to use a more explicit dependency on the latent variables in Eq. 18. Our idea bears some similarities with Skip-VAE (Dieng et al., 2019). Skip-VAE proposes to add a latent variable to each layer of the neural network parameterizing the decoder of a VAE with a single stochastic layer. In this work, instead, we add all the latent variables together to parameterize the conditional likelihood.
270
+
271
+ ## 6 Experiments
+
+ ## 6.1 Settings
272
+
273
+ We evaluate DVP-VAE on dynamically binarized MNIST (LeCun, 1998) and OMNIGLOT (Lake et al., 2015). Furthermore, we conduct experiments on natural images using the CIFAR10 (Krizhevsky, 2009) dataset. We provide the complete set of hyperparameters for training the DVP-VAE in Appendix C.1.
274
+
275
+ ## 6.2 Main Quantitative And Qualitative Results
276
+
277
+ We report all results in Table 1, where we compare the proposed approach with other hierarchical VAEs. We observe that DVP-VAE outperforms all the VAE models while using fewer parameters than other models.
278
+
279
+ For instance, on CIFAR10, our DVP-VAE requires 20M weights to beat Attentive VAE with about 6 times more weights. Furthermore, because of the smaller model size, we were able to obtain all the results using a single GPU. We show the unconditional samples in Figure 3 (see Appendix D for more samples). The top row of each image shows samples from the diffusion-based VampPrior (i.e., pseudoinputs), while the second row shows corresponding samples from the VAE. We observe that, as expected, pseudoinputs define the general appearance of an image, while a lot of details are added later by the TopDown decoder. This effect can be further observed in Figure 4 where we plot the reconstructions using different numbers of latent variables. In the first row, only a pseudoinput corresponding to the original image is used (i.e., u ∼ r(u|x))
280
+ while the remaining latent variables are sampled from the prior with low temperature. Each row below uses more latent variables from the variational posterior grouped by the scales. Namely, the second row uses the pseudoinput above and all the 4 × 4 latent variables from the variational posterior, then the third row uses additionally 8 × 8 latent variables and so on.
281
+
282
+ Table 1: The test performance: negative log-likelihood on MNIST and OMNIGLOT, and bits-per-dimension (BPD) on CIFAR10.
+
+ ‡ Results with data augmentation.
+
+ ∗ Results averaged over 4 random seeds.
+
+ | Model | L | MNIST − log p(x) ≤ ↓ | OMNIGLOT − log p(x) ≤ ↓ | CIFAR10 Size | CIFAR10 L | CIFAR10 BPD ≤ ↓ |
+ |---|---|---|---|---|---|---|
+ | DVP-VAE (ours) | 8 | 77.10∗ | 89.07∗ | 20M | 28 | 2.73 |
+ | Attentive VAE (Apostolopoulou et al., 2022) | 15 | 77.63 | 89.50 | 119M | 16 | 2.79 |
+ | CR-NVAE (Sinha & Dieng, 2021) | 15 | 76.93‡ | - | 131M | 30 | 2.51‡ |
+ | VDVAE (Child, 2021) | - | - | - | 39M | 45 | 2.87 |
+ | OU-VAE (Pervez & Gavves, 2021) | 5 | 81.10 | 96.08 | 10M | 3 | 3.39 |
+ | NVAE (Vahdat & Kautz, 2020) | 15 | 78.01 | - | - | 30 | 2.91 |
+ | BIVA (Maaløe et al., 2019) | 6 | 78.41 | 91.34 | 103M | 15 | 3.08 |
+ | VampPrior (Tomczak & Welling, 2018) | 2 | 78.45 | 89.76 | - | - | - |
+ | LVAE (Sønderby et al., 2016) | 5 | 81.74 | 102.11 | - | - | - |
+ | IAF-VAE (Kingma et al., 2016) | - | 79.10 | - | - | 12 | 3.11 |
+
+ ![9_image_0.png](9_image_0.png)
+
+ Figure 3: Unconditional samples from the Diffusion-based VampPrior (top rows) and corresponding samples from the DVP-VAE (bottom rows).
+
+ ![9_image_1.png](9_image_1.png)
+
+ Figure 4: *Generative reconstructions*. The top row uses a pseudoinput sampled from r(u|x) **only**.
+
+ ## 6.3 Ablation Studies
+
+ Training stability and convergence In all of our experiments, we did not observe many training instabilities. Unlike many contemporary VAEs, we did not use gradient skipping (Child, 2021), spectral normalization (Vahdat & Kautz, 2020), or softmax parameterization of variances (Hazami et al., 2022). We use the Adamax version of the Adam optimizer (Kingma & Ba, 2015), following Hazami et al. (2022), as it demonstrated much better convergence for the model with a mixture of discretized logistic conditional likelihood.
318
+
319
+ First, we observe consistent performance improvement as we increase the model size and the number of stochastic layers. In Table 2, we report test performance and the percentage of active units (see Section 6.3 for details) for models of different stochastic depths trained on the CIFAR10 dataset. We train each model for 500 epochs, which corresponds to less than 200k training iterations. Additionally, we report gradient norms and training and validation losses for all four models in Appendix B.
320
+
321
+ Table 2: Test performance for the model trained for 500 epochs (or approximately 200k training iterations) on CIFAR10.
+
+ | L | Size | BPD ≤ ↓ | AU ↑ |
+ |----|------|---------|------|
+ | 20 | 24M | 2.99 | 94% |
+ | 28 | 32M | 2.94 | 93% |
+ | 36 | 40M | 2.89 | 98% |
+ | 44 | 48M | 2.84 | 97% |
+
+ To demonstrate the advantage of the proposed architecture, we compare our model to the closest deep hierarchical VAE architecture: Very Deep VAE (Child, 2021). For this experiment, we chose hyperparameters closest to Table 4 in Child (2021) (CIFAR-10). That is, our model has 45 stochastic layers and a comparable number of trainable parameters. Furthermore, following Child (2021), we train this model with a batch size of 32, a gradient clipping threshold of 200, and an EMA rate of 0.9998. However, in DVP-VAE, we were able to eliminate gradient skipping and gradient smoothing. We report the difference in key hyperparameters and test performance in Table 3. We also add a comparison with Efficient-VDVAE (see Table 3 in Hazami et al. (2022)). We observe that DVP-VAE achieves comparable performance in far fewer training iterations than both VDVAE implementations.
+
+ Table 3: Training settings for the model trained on CIFAR-10 compared to two VDVAE implementations.
+
+ |  | VDVAE (Child, 2021) | Efficient-VDVAE (Hazami et al., 2022) | DVP-VAE (ours) |
+ |---|---|---|---|
+ | L | 45 | 47 | 45 |
+ | Size | 39M | 57M | 38M |
+ | Optimizer | AdamW | Adamax | Adamax |
+ | Learning rate | 2e-4 | 1e-3 | 1e-3 |
+ | Grad. smoothing | - | Yes | - |
+ | Grad. skip | 400 | 800 | - |
+ | Training iter. | 1.1M | 0.8M | 0.4M |
+ | Test BPD | 2.87 | 2.87 | 2.86 |
344
+
345
+ Latent Aggregation Increases Latent Space Utilization Next, we test the claim that latent variable aggregation discussed in Sec. 4.3 improves latent space utilization. We use Active Units (AU) metric (Burda et al., 2015), which can be calculated for a given threshold δ as follows:
346
+
347
+ $$\mathrm{AU}=\frac{\sum_{l=1}^{L}\sum_{i=1}^{M_{l}}\left[\mathrm{A}_{l,i}>\delta\right]}{\sum_{l=1}^{L}M_{l}},\tag{23}$$
+
+ where
+
+ $$\mathrm{A}_{l}=\mathrm{Var}_{q^{\mathrm{test}}(\mathbf{x})}\,\mathbb{E}_{q_{\phi}(\mathbf{z}_{l+1:L}|\mathbf{x})}\,\mathbb{E}_{q_{\phi}(\mathbf{z}_{l}|\mathbf{z}_{l+1:L},\mathbf{x})}\left[\mathbf{z}_{l}\right].\tag{24}$$
350
+
351
+ Here $M_{l}$ is the dimensionality of the stochastic layer $l$, $[B]$ is the Iverson bracket, which equals 1 if $B$ is true and 0 otherwise, and $\mathrm{Var}$ stands for the variance. Following Burda et al. (2015), we use the threshold $\delta = 0.01$. The higher the share of active units, the more efficient the model is in utilizing its latent space.
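+ A direct way to estimate the AU statistic of Eq. 23-24 from posterior means collected on the test set is sketched below, under the simplifying assumption that one posterior-mean value per datapoint suffices as a Monte Carlo estimate of the inner expectations.
+
+ ```python
+ import numpy as np
+
+ def active_units(posterior_means, threshold=0.01):
+     """posterior_means: list over layers l of arrays of shape (N_test, M_l),
+     each row holding E_q[z_l | x] for one test point. Returns the AU fraction."""
+     active, total = 0, 0
+     for means_l in posterior_means:
+         a_l = means_l.var(axis=0)          # variance over x of the posterior mean, per dim
+         active += int((a_l > threshold).sum())
+         total += means_l.shape[1]
+     return active / total
+ ```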
352
+
353
+ We report results in Table 4 and observe that the model with latent aggregation always attains more than 90% of active units. Furthermore, the latent aggregation considerably improves the utilization of the latent space if we compare it with exactly the same model but with the conditional likelihood parametrized using deterministic feature from the TopDown path (see Eq. 22).
354
+
355
+ Amortized VampPrior Improves BPD Further, we test how the proposed amortized VampPrior improves model performance as measured by the negative log-likelihood. We report results in Table 5 and observe that DVP-VAE always has a better NLL metric compared to the deep hierarchical VAE with the same architecture and number of stochastic layers. Due to additional diffusion-based prior over pseudoinputs, DVP-VAE has slightly more trainable parameters. However, because of the small spatial dimensionality of the pseudoinputs, we were able to keep the size of the two models comparable.
356
+
357
+ Pseudoinputs type, size and prior We conduct an extensive ablation study regarding pseudoinputs. First, we train VAE with two types of pseudoinputs: DCT and Downsampled images. Moreover, we vary the spatial dimensions of the pseudoinputs between 3 × 3 and 11 × 11. We expect that a smaller pseudoinputs size will be an easier task for the prior rˆ(u), but will constitute a poorer approximation of an optimal prior. The larger pseudoinput size, on the other hand, results in a better optimal prior approximation since more information about the datapoint x is preserved. However, it becomes harder for the prior to achieve good results since we keep the prior model size fixed.
358
+
359
+ In Figure 5 we observe that the DCT-based transformation performs consistently better across various sizes and datasets.
360
+
361
+ Trainable Pseudoinputs In the VampPrior, the optimal prior is approximated using learnable pseudoinputs. In this work, on the other hand, we propose to use fixed linear transformation instead. To further
362
+
363
+ | Latent Aggr. | Size | L | AU ↑ |
364
+ |----------------|--------|-----|--------|
365
+ | MNIST | | | |
366
+ | ✗ | 0.7M | 8 | 33.2% |
367
+ | ✓ | 0.7M | 8 | 91.5% |
368
+ | OMNIGLOT | | | |
369
+ | ✗ | 1.3M | 8 | 71.3% |
370
+ | ✓ | 1.3M | 8 | 93.4% |
371
+ | CIFAR10 | | | |
372
+ | ✓ | 19.5M | 28 | 98% |
373
+
374
+ Table 4: Active Units for the DCT-VAE
375
+ with and without latent aggregation.
376
+
377
+ ![11_image_0.png](11_image_0.png)
378
+
379
+ Figure 5: Ablation study of the pseudoinputs type (DCT and downsampled image), pseudoinputs prior (diffusion model and mixture of Gaussians), and pseudoinputs size (ranging from 3 × 3 to 11 × 11). Each configuration is trained with four different random seeds.
381
+
382
+ ![11_image_1.png](11_image_1.png)
383
+
384
+ (a) Learnable pseudoinputs. (b) DCT pseudoinputs.
+
+ Figure 6: Samples from the pseudoinputs prior $\tilde{\mathbf{u}} \sim \hat{r}(\mathbf{u})$ (top row) and corresponding samples from the model $\tilde{\mathbf{x}} \sim p_{\theta}(\mathbf{x}|\mathbf{z}_{1:L}, \mathbf{u}=\tilde{\mathbf{u}})$ (other rows). Columns correspond to models trained with different random seeds.
386
+ verify whether a fixed transformation like DCT is reasonable, we checked a learnable linear transformation.
387
+
388
+ We present in Figure 6 that the learnable linear transformation of the input exhibits unstable behavior in terms of the **quality** of learned pseudoinput. The top row of Figure 6(a) shows samples from the trained prior and the corresponding samples from the decoder. We observe that only one out of four models with learnable pseudoinputs was able to learn a visually meaningful representation of the data (seed 0), which also resulted in very high variance of the results (rows below). For other models (e.g., Seed 1 and Seed 2),
389
+ the same pseudoinput sample corresponds to completely different datapoints.
390
+
391
+ This lack of consistency motivates us to use a non-trainable transformation for obtaining pseudoinputs.
392
+
393
+ In Figure 6 (b), we show the expected behavior of sampling semantically meaningful pseudoinputs that is consistent across random seeds.
394
+
395
+ Table 5: Test NLL for models trained with (✓) and without (✗) pseudoinputs.
+
+ | Pseudoinputs | Size | NLL ↓ |
+ |--------------|------|--------------|
+ | MNIST | | |
+ | ✗ | 0.6M | 78.85 (0.24) |
+ | ✓ | 0.7M | 77.10 (0.05) |
+ | OMNIGLOT | | |
+ | ✗ | 1.1M | 89.52 (0.23) |
+ | ✓ | 1.3M | 89.07 (0.10) |
+
+ ## 7 Conclusion
+
406
+ In this work, we introduce DVP-VAE, a new class of deep hierarchical VAEs with the diffusion-based VampPrior. We propose a VampPrior approximation that can be used with hierarchical VAEs at little computational overhead. We show that the proposed approach achieves state-of-the-art performance in terms of the negative log-likelihood on three benchmark datasets with far fewer parameters and stochastic layers compared to the best-performing contemporary hierarchical VAEs.
407
+
408
+ ## References
409
+
410
+ Nasir Ahmed, T Natarajan, and Kamisetty R Rao. Discrete cosine transform. IEEE Transactions on Computers, 100(1):90–93, 1974.
411
+
412
+ Alex Krizhevsky. Learning multiple layers of features from tiny images. https://www.cs.toronto.edu/kriz/learning-features-2009-TR.pdf, 2009.
414
+ edu/kriz/learning-features-2009-TR. pdf, 2009.
415
+
416
+ Ifigeneia Apostolopoulou, Ian Char, Elan Rosenfeld, and Artur Dubrawski. Deep attentive variational inference. In *ICLR*, 2022.
417
+
418
+ Matthias Bauer and Andriy Mnih. Resampled priors for variational autoencoders. In *The 22nd International* Conference on Artificial Intelligence and Statistics, pp. 66–75. PMLR, 2019.
419
+
420
+ Alican Bozkurt, Babak Esmaeili, Jean-Baptiste Tristan, Dana H Brooks, Jennifer G Dy, and Jan-Willem van de Meent. Rate-regularization and generalization in vaes. *arXiv preprint arXiv:1911.04594*, 2019.
421
+
422
+ Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. *arXiv*, 2015. Xi Chen, Diederik P Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. Variational lossy autoencoder. *ICLR*, 2017.
423
+
424
+ Rewon Child. Very deep vaes generalize autoregressive models and can outperform them on images. In ICLR, 2021.
425
+
426
+ Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. *NeurIPS*, 2021. Adji B Dieng, Yoon Kim, Alexander M Rush, and David M Blei. Avoiding latent variable collapse with generative skip models. In *AISTATS*, 2019.
427
+
428
+ Evgenii Egorov, Anna Kuzina, and Evgeny Burnaev. Boovae: Boosting approach for continual learning of vae. *Advances in Neural Information Processing Systems*, 34:17889–17901, 2021.
429
+
430
+ Rafael Gómez-Bombarelli, Jennifer N Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D Hirzel, Ryan P Adams, and Alán Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules. *ACS central science*, 2018.
431
+
432
+ Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vazquez, and Aaron Courville. Pixelvae: A latent variable model for natural images. *arXiv preprint arXiv:1611.05013*,
433
+ 2016.
434
+
435
+ Louay Hazami, Rayhane Mama, and Ragavan Thurairatnam. Efficient vdvae: Less is more. *arXiv preprint* arXiv:2203.13751, 2022.
436
+
437
+ Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. *NeurIPS*, 2020. Matthew D Hoffman and Matthew J Johnson. Elbo surgery: yet another way to carve up the variational evidence lower bound. In *Workshop in Advances in Approximate Bayesian Inference, NIPS*, volume 1, 2016.
438
+
439
+ Emiel Hoogeboom, Vıctor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3d. In *International conference on machine learning*, pp. 8867–8887. PMLR, 2022.
440
+
441
+ Chin-Wei Huang, Jae Hyun Lim, and Aaron C Courville. A variational perspective on diffusion-based generative models and score matching. *NeurIPS*, 2021.
442
+
443
+ Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, and Hanning Zhou. Variational deep embedding:
444
+ An unsupervised and generative approach to clustering. *arXiv preprint arXiv:1611.05148*, 2016.
445
+
446
+ Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. An introduction to variational methods for graphical models. *Machine learning*, 37(2):183–233, 1999.
447
+
448
+ Ilyes Khemakhem, Diederik Kingma, Ricardo Monti, and Aapo Hyvarinen. Variational autoencoders and nonlinear ica: A unifying framework. In *International Conference on Artificial Intelligence and Statistics*,
449
+ pp. 2207–2217. PMLR, 2020.
450
+
451
+ Diederik P Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In *ICLR: International Conference on Learning Representations*, pp. 1–15, 2015.
453
+
454
+ Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In *ICLR*, 2014. Diederik P Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. In *NeurIPS*,
455
+ 2021.
456
+
457
+ Durk P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. *NeurIPS*, 2016.
458
+
459
+ Alexej Klushyn, Nutan Chen, Richard Kurle, Botond Cseke, and Patrick van der Smagt. Learning hierarchical priors in vaes. *Advances in neural information processing systems*, 32, 2019.
460
+
461
+ Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. *Science*, 350(6266):1332–1338, 2015.
462
+
463
+ Yann LeCun. The MNIST database of handwritten digits. *http://yann.lecun.com/exdb/mnist/*, 1998.
+
+ Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. In *International conference on machine learning*, pp. 1445–1453. PMLR, 2016.
464
+
465
+ Lars Maaløe, Marco Fraccaro, Valentin Liévin, and Ole Winther. Biva: A very deep hierarchy of latent variables for generative modeling. *NeurIPS*, 2019.
466
+
467
+ Eric Nalisnick, Lars Hertel, and Padhraic Smyth. Approximate inference for deep latent gaussian mixtures.
468
+
469
+ In *NIPS Workshop on Bayesian Deep Learning*, volume 2, pp. 131, 2016.
470
+
471
+ William B Pennebaker and Joan L Mitchell. *JPEG: Still image data compression standard*. Springer Science
472
+ & Business Media, 1992.
473
+
474
+ Adeel Pervez and Efstratios Gavves. Spectral smoothing unveils phase transitions in hierarchical variational autoencoders. *ICML*, 2021.
475
+
476
+ Rajesh Ranganath, Dustin Tran, and David Blei. Hierarchical variational models. In International conference on machine learning, pp. 324–333. PMLR, 2016.
477
+
478
+ Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In *ICML*, 2014.
479
+
480
+ Hossein Sadeghi, Evgeny Andriyash, Walter Vinci, Lorenzo Buffoni, and Mohammad H Amin. Pixelvae++:
481
+ Improved pixelvae with discrete prior. *arXiv*, 2019.
482
+
483
+ Tim Salimans, Diederik Kingma, and Max Welling. Markov chain monte carlo and variational inference:
484
+ Bridging the gap. In *International conference on machine learning*, pp. 1218–1226. PMLR, 2015.
485
+
486
+ Samarth Sinha and Adji Bousso Dieng. Consistency regularization for variational auto-encoders. *NeurIPS*,
487
+ 2021.
488
+
489
+ Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In *ICML*, 2015.
490
+
491
+ Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. *NeurIPS*, 2016.
492
+
493
+ Jakub Tomczak and Max Welling. Vae with a vampprior. In *AISTATS*, 2018.
494
+
495
+ Jakub M. Tomczak. *Deep Generative Modeling*. Springer Cham, 2022. Linh Tran, Maja Pantic, and Marc Peter Deisenroth. Cauchy–schwarz regularized autoencoder. *Journal of* Machine Learning Research, 23(115):1–37, 2022.
496
+
497
+ Belinda Tzen and Maxim Raginsky. Neural stochastic differential equations: Deep latent gaussian models in the diffusion limit. *arXiv*, 2019.
498
+
499
+ Arash Vahdat and Jan Kautz. Nvae: A deep hierarchical variational autoencoder. *NeurIPS*, 2020.
500
+
501
+ Arash Vahdat, Karsten Kreis, and Jan Kautz. Score-based generative modeling in latent space. *NeurIPS*,
502
+ 2021.
503
+
504
+ Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. *NeurIPS*, 2017.
505
+
506
+ Antoine Wehenkel and Gilles Louppe. Diffusion priors in variational autoencoders. In *INNFICML*, 2021.
507
+
508
+ ## A Diffusion Probabilistic Models
509
+
510
+ Diffusion Probabilistic Models or Diffusion-based Deep Generative Models (Ho et al., 2020; Sohl-Dickstein et al., 2015) constitute a class of generative models that can be viewed as a special case of the Hierarchical VAEs (Huang et al., 2021; Kingma et al., 2021; Tomczak, 2022; Tzen & Raginsky, 2019). Here, we follow the definition of the variational diffusion model (Kingma et al., 2021). We use the diffusion model as a prior over the pseudoinputs u.
511
+
512
+ ## Forward Diffusion Process
513
+
514
+ The *forward* diffusion *process* runs forward in time and gradually adds noise to the input u as follows:
515
+
516
+ $$q(\mathbf{y}_{t}|\mathbf{u})={\mathcal{N}}(\mathbf{y}_{t};\alpha_{t}\mathbf{u},(1-\alpha_{t}^{2})\mathbf{I}),\tag{25}$$
519
+ where $\mathbf{y}_{t}$ are auxiliary latent variables indexed by time $t \in [0, 1]$, and $\alpha_{t}$ is chosen in such a way that the signal-to-noise ratio decreases monotonically over time.
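+ As a concrete example of the forward process, the sketch below samples $\mathbf{y}_{t}$ given $\mathbf{u}$ under a linear log-SNR schedule; the endpoints mirror the `log SNR min/max` hyperparameters in Table 6, but the exact parameterization used in the paper follows Kingma et al. (2021), so treat this as an assumption-level illustration.
+
+ ```python
+ import torch
+
+ def alpha_from_logsnr(t, logsnr_max=7.0, logsnr_min=-6.0):
+     """Linear log-SNR schedule: SNR(t) = alpha_t^2 / (1 - alpha_t^2),
+     hence alpha_t^2 = sigmoid(logsnr(t)); t is in [0, 1]."""
+     logsnr = logsnr_max + t * (logsnr_min - logsnr_max)
+     return torch.sigmoid(torch.as_tensor(logsnr)).sqrt()
+
+ def sample_forward(u, t):
+     """Draw y_t ~ q(y_t | u) = N(alpha_t * u, (1 - alpha_t^2) I)."""
+     alpha_t = alpha_from_logsnr(t)
+     return alpha_t * u + (1.0 - alpha_t ** 2).sqrt() * torch.randn_like(u)
+ ```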
520
+
521
+ Since the conditionals in the forward diffusion can be seen as Gaussian linear models, we can analytically calculate the following distributions for *t > s*:
522
+
523
+ $$q(\mathbf{y}_{s}|\mathbf{y}_{t},\mathbf{u})=\mathcal{N}(\mathbf{y}_{s};\tilde{\mu}(\mathbf{y}_{t},\mathbf{u}),\tilde{\sigma}(t,s)\mathbf{I}),\tag{26}$$
+ $$\text{where}\quad\tilde{\mu}(\mathbf{y}_{t},\mathbf{u})=\frac{\alpha_{t}\left(1-\alpha_{s}^{2}\right)}{\alpha_{s}\left(1-\alpha_{t}^{2}\right)}\mathbf{y}_{t}+\frac{\alpha_{s}^{2}-\alpha_{t}^{2}}{\left(1-\alpha_{t}^{2}\right)\alpha_{s}}\mathbf{u},\tag{27}$$
+ $$\tilde{\sigma}(t,s)=\frac{\left(\alpha_{s}^{2}-\alpha_{t}^{2}\right)}{\alpha_{s}^{2}}\frac{\left(1-\alpha_{s}^{2}\right)}{\left(1-\alpha_{t}^{2}\right)}.\tag{28}$$
524
+
525
+ ## Backward Diffusion Process
526
+
527
+ Similarly, we define a generative model, also referred to as the *backward* (or reverse) *process*, as a Markov chain with Gaussian transitions starting with r(y1) = N (y1|0, I). We discretize time uniformly into T
528
+ timestamps of length 1/T:
529
+
530
+ $$r(\mathbf{y}_{0},\ldots,\mathbf{y}_{1})=r(\mathbf{y}_{1})\prod_{i=1}^{T}r_{\gamma}(\mathbf{y}_{(i-1)/T}|\mathbf{y}_{i/T}),\tag{29}$$
535
+
536
+ where $r_{\gamma}(\mathbf{y}_{t-1}|\mathbf{y}_{t})=\mathcal{N}(\mathbf{y}_{t-1};\mu_{\gamma}(\mathbf{y}_{t},t),\Sigma_{\gamma}(\mathbf{y}_{t},t))$.
537
+
538
+ ## The Likelihood Term
539
+
540
+ Common practice is to define the likelihood term as being proportional to the first step of the forward process: r (u|y0) ∝ q (y0|u). Since we assume that the pseudoinput random variable u is continuous, we get the Gaussian likelihood distribution:
541
+
542
+ $$r\left(\mathbf{u}|\mathbf{y}_{0}\right)={\mathcal{N}}\left(\mathbf{u}\,\middle|\,\mathbf{y}_{0}/\alpha_{0},\,\sigma_{0}^{2}/\alpha_{0}^{2}\,I\right).\tag{30}$$
544
+ Note that the same likelihood term was used for continuous atom positions in the equivariant diffusion model (Hoogeboom et al., 2022).
545
+
546
547
+
548
+ ## Training Objective
549
+
550
+ We can use (25) and (26) to define the variational lower bound as follows:
551
+
552
+ $$\log\hat{r}_{\gamma}(\mathbf{u})\geq L_{\mathrm{vlb}}(\mathbf{u},\gamma)=\underbrace{\mathbb{E}_{q(\mathbf{y}_{0}|\mathbf{u})}[\ln r(\mathbf{u}|\mathbf{y}_{0})]}_{-L_{0}}-\underbrace{D_{\mathrm{KL}}\left[q(\mathbf{y}_{1}|\mathbf{u})\|r(\mathbf{y}_{1})\right]}_{L_{1}}-\underbrace{\sum_{i=1}^{T}\mathbb{E}_{q(\mathbf{y}_{i/T}|\mathbf{u})}D_{\mathrm{KL}}\left[q(\mathbf{y}_{(i-1)/T}|\mathbf{y}_{i/T},\mathbf{u})\|r_{\gamma}(\mathbf{y}_{(i-1)/T}|\mathbf{y}_{i/T})\right]}_{L_{T}}.$$
553
+
554
+ Here we refer to L0 as the reconstruction loss, L1 as the prior loss, and LT as the diffusion loss with T steps.
555
+
556
+ ## B Training Stability: Depth
557
+
558
+ We report the ℓ2-norm of the gradient at each training iteration for models of different stochastic depth in Figure 7. We observe very high gradient norms for the first few training iterations but no spikes at later stages of training. This happens because we do not force the KL-term to be zero at the beginning of training, and it tends to be large at initialization for very deep VAEs. However, we did not observe our model diverging, since the gradient is clipped to reasonable values (200 in these experiments) and, after the first few gradient updates, the KL-term settles at reasonable values. Moreover, we plot training and validation losses in Figure 8.
559
+
560
+ ![17_image_0.png](17_image_0.png)
561
+
562
+ Figure 7: The gradient norm at each training iteration. Models were trained on CIFAR10 with different stochastic depths L.
563
+
564
+ ![17_image_1.png](17_image_1.png)
565
+
566
+ Figure 8: The ELBO per pixel on train (left) and validation (right) dataset. Models were trained on CIFAR10 with different stochastic depths L.
567
+
568
+ ## C Model Details
+
+ ## C.1 Hyperparameters
569
+
570
+ In Table 6, we report all hyperparameter values that were used to train the DVP-VAE.
571
+
572
+ The pseudoinput prior We use the diffusion generative model as a prior over pseudoinputs. As a backbone, we use the UNet implementation from (Dhariwal & Nichol, 2021), available on GitHub2 with the hyperparameters provided in Table 6.
573
+
574
+ |  | MNIST | OMNIGLOT | CIFAR10 |
+ |---|---|---|---|
+ | **Optimization** | | | |
+ | # Epochs | 300 | 500 | 3000 |
+ | Batch Size (per GPU) | 250 | 250 | 128 |
+ | # GPUs | 1 | 1 | 1 |
+ | Optimizer | Adamax | Adamax | Adamax |
+ | Scheduler | Cosine | Cosine | Cosine |
+ | Starting LR | 1e-2 | 1e-2 | 3e-3 |
+ | End LR | 1e-5 | 1e-4 | 1e-4 |
+ | LR warmup (epochs) | 2 | 2 | 5 |
+ | Weight Decay | 1e-6 | 1e-6 | 1e-6 |
+ | EMA rate | 0.999 | 0.999 | 0.999 |
+ | Grad. Clipping | 5 | 2 | 150 |
+ | log σ clipping | -10 | -10 | -10 |
+ | **Latent Sizes** | | | |
+ | L | 8 | 8 | 28 |
+ | Latents | 4 × 14², 4 × 7² | 4 × 14², 4 × 7² | 10 × 32², 8 × 16², 6 × 8², 4 × 4² |
+ | Latent Width (channels) | 1 | 1 | 3 |
+ | Context Size | 1 × 7 × 7 | 1 × 5 × 5 | 3 × 7 × 7 |
+ | **Architecture** | | | |
+ | Nenc blocks | 3 | 3 | 4 |
+ | ResBlock Cin | 32 | 80 | 128 |
+ | ResBlock Chid | 32 | 40 | 96 |
+ | Activation | SiLU | SiLU | SiLU |
+ | Likelihood | Bernoulli | Bernoulli | Discretized Logistic |
+ | # mixture comp | - | - | 10 |
+ | **Context Prior** | | | |
+ | # Diffusion Steps | 50 | 50 | 50 |
+ | # Scales in UNet | 1 | 1 | 1 |
+ | # ResBlocks per Scale | 2 | 2 | 3 |
+ | # Channels | 16 | 16 | 32 |
+ | β schedule | linear | linear | linear |
+ | log SNR min | -6 | -6 | -10 |
+ | log SNR max | 7 | 7 | 7 |
610
+ Table 6: Full list of hyperparameters.
611
+ 2https://github.com/openai/guided-diffusion
612
+
613
+ ## D Samples
614
+
615
+ In Figure 9, we present non-cherry-picked unconditional samples from our DVP-VAE and in Figure 10 we present non-cherry-picked unconditional samples from the pseudoinput prior rˆ(u).
616
+
617
+ ![19_image_1.png](19_image_1.png)
618
+
619
+ ![19_image_2.png](19_image_2.png)
620
+
621
+ Figure 10: Unconditional samples of pseudoinputs.
622
+ Figure 9: Unconditional samples.
623
+
624
+ ![19_image_0.png](19_image_0.png)
625
+
NUkEoZ7Toa/NUkEoZ7Toa_meta.json ADDED
@@ -0,0 +1,25 @@
1
+ {
2
+ "languages": null,
3
+ "filetype": "pdf",
4
+ "toc": [],
5
+ "pages": 20,
6
+ "ocr_stats": {
7
+ "ocr_pages": 0,
8
+ "ocr_failed": 0,
9
+ "ocr_success": 0,
10
+ "ocr_engine": "none"
11
+ },
12
+ "block_stats": {
13
+ "header_footer": 20,
14
+ "code": 0,
15
+ "table": 6,
16
+ "equations": {
17
+ "successful_ocr": 52,
18
+ "unsuccessful_ocr": 1,
19
+ "equations": 53
20
+ }
21
+ },
22
+ "postprocess_stats": {
23
+ "edit": {}
24
+ }
25
+ }