
Variational Learning ISTA

Fabio Valerio Massoli fmassoli@qti.qualcomm.com Qualcomm AI Research∗ Christos Louizos clouizos@qti.qualcomm.com Qualcomm AI Research∗ Arash Behboodi abehboodi@qti.qualcomm.com Qualcomm AI Research∗ Reviewed on OpenReview: https://openreview.net/forum?id=AQk0UsituG

Abstract

Compressed sensing combines the power of convex optimization techniques with a sparsity-inducing prior on the signal space to solve an underdetermined system of equations. For many problems, the sparsifying dictionary is not directly given, nor can its existence be assumed.

Besides, the sensing matrix can change across different scenarios. Addressing these issues requires solving a sparse representation learning problem, namely dictionary learning, taking into account the epistemic uncertainty of the learned dictionaries and, finally, jointly learning sparse representations and reconstructions under varying sensing matrix conditions. We address both concerns by proposing a variant of the LISTA architecture. First, we introduce Augmented Dictionary Learning ISTA (A-DLISTA), which incorporates an augmentation module to adapt parameters to the current measurement setup. Then, we propose to learn a distribution over dictionaries via a variational approach, dubbed Variational Learning ISTA (VLISTA). VLISTA exploits A-DLISTA as the likelihood model and approximates a posterior distribution over the dictionaries as part of an unfolded LISTA-based recovery algorithm. As a result, VLISTA provides a probabilistic way to jointly learn the dictionary distribution and the reconstruction algorithm with varying sensing matrices. We provide theoretical and experimental support for our architecture and show that our model learns calibrated uncertainties.

1 Introduction

By imposing a prior on the signal structure, compressed sensing solves underdetermined inverse problems. Canonical examples of signal structure and sensing medium are sparsity and linear inverse problems.

Compressed sensing aims at reconstructing an unknown signal of interest, s ∈ R m×1? No — s ∈ R n, from a set of linear measurements, y ∈ R m, acquired by means of a linear transformation, Φ ∈ R m×n, where m < n. Due to the underdetermined nature of the problem, s is typically assumed to be sparse in a given basis. Hence, s = Ψx, where Ψ ∈ R n×b is a matrix whose columns represent the sparsifying basis vectors, and x ∈ R b is the sparse representation of s. Therefore, given noiseless observations y = Φs of an unknown signal s = Ψx, we seek to solve the LASSO problem:

$$\operatorname*{argmin}_{\mathbf{x}}\|\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}\mathbf{x}\|_{2}^{2}+\rho\|\mathbf{x}\|_{1}\tag{1}$$

where ρ is a constant scalar controlling the sparsifying penalty. Iterative algorithms, such as the Iterative Soft-Thresholding Algorithm (ISTA) (Daubechies et al., 2004), represent a popular approach to solving

∗Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.


such problems. A number of studies have been conducted to improve compressed sensing solvers. A typical approach involves unfolding iterative algorithms as layers of neural networks and learning parameters end-to-end (Gregor & LeCun, 2010). Such ML algorithms are typically trained by minimizing the reconstruction objective:

$${\mathcal{L}}(\mathbf{x},{\hat{\mathbf{x}}}_{T})=\mathbb{E}_{\mathbf{x}\sim{\mathcal{D}}}\left[\|\mathbf{x}-{\hat{\mathbf{x}}}_{T}\|_{2}^{2}\right]\tag{2}$$

where the expected value is taken over data sampled from D, and the subscript "T" refers to the last step, or layer, of the unfolded model. Variable sensing matrices and unknown sparsifying dictionaries are some of the main challenges of data-driven approaches. By learning a dictionary and including it in optimization iterations, the work in Aberdam et al. (2021); Schnoor et al. (2022) aims to address these issues. However, data samples might not have exact sparse representations, so no ground truth dictionary is available. The issue can be more severe for heterogeneous datasets where the dictionary choice might vary from one sample to another. A real-world example would be the channel estimation problem in a Multi-Input Multi-Output (MIMO) mmWave wireless communication system (Rodríguez-Fernández et al., 2018). Such a problem can be cast as an inverse problem of the form y = ΦΨx and solved using compressive sensing techniques. The sensing matrix, Φ, represents the so-called beamforming matrix while the dictionary, Ψ, represents the sparsifying basis for the wireless channel itself. Typically, Φ changes from one set of measurements to the next and the channel model might require different basis vectors across time. Adaptive acquisition is another example of application: in MRI image reconstruction, the acquisition step can be adaptive. Here, the sensing matrix is sampled from a known distribution to reconstruct the signal. Therefore, given the adaptive nature of the process, each data sample is characterized by a different Φ (Bakker et al., 2020; Yin et al., 2021).

Our Contribution A principled approach to this problem would be to leverage a Bayesian framework and define a distribution over the dictionaries with proper uncertainty quantification. We follow two steps to accomplish this goal. First, we introduce Augmented Dictionary Learning ISTA (A-DLISTA), an augmented version of the Learning Iterative Soft-Thresholding Algorithm (LISTA)-like model, capable of adapting some of its parameters to the current measurement setup. We theoretically motivate its design and empirically prove its advantages compared to other non-adaptive LISTA-like models in a non-static measurement scenario, i.e., considering varying sensing matrices. Finally, to learn a distribution over dictionaries, we introduce Variational Learning ISTA (VLISTA), a variational formulation that leverages A-DLISTA as the likelihood model. VLISTA refines the dictionary iteratively after each iteration based on the outcome of the previous layer. Intuitively, our model can be understood as a form of a recurrent variational autoencoder, e.g., Chung et al. (2015), where at each iteration of the algorithm we have an approximate posterior distribution over the dictionaries conditioned on the outcome of the previous iteration. Moreover, VLISTA provides uncertainty estimation to detect Out-Of-Distribution (OOD) samples. We train A-DLISTA using the same objective as in Equation 2 while for VLISTA we maximize the ELBO (Equation 15). We refer the reader to Appendix D for the detailed derivation of the ELBO. Behrens et al. (2021) proposed an augmented version of LISTA, termed Neurally Augmented ALISTA (NALISTA). However, there are key differences with A-DLISTA. In contrast to NALISTA, our model adapts some of its parameters to the current sensing matrix and learned dictionary.

Hypothetically, NALISTA could handle varying sensing matrices. However, that comes at the price of solving, for each datum, an inner optimization step to evaluate the W matrix. Finally, while NALISTA uses an LSTM as its augmentation network, A-DLISTA employs a convolutional neural network (shared across all layers).

Such a difference reflects the type of dependencies between layers and input data that the networks try to model. We report in Appendix B and Appendix C detailed discussions about the theoretical motivation and architectural design for A-DLISTA.

Our work's main contributions can be summarized as follows:

  • We design an augmented version of a LISTA-like type of model, dubbed A-DLISTA, that can handle non-static measurement setups, i.e. per-sample sensing matrices, and adapt parameters to the current data instance.

  • We propose VLISTA that learns a distribution over sparsifying dictionaries. The model can be interpreted as a Bayesian LISTA model that leverages A-DLISTA as the likelihood model.

  • VLISTA adapts the dictionary to optimization dynamics and therefore can be interpreted as a hierarchical representation learning approach, where the dictionary atoms gradually permit more refined signal recovery.

  • The dictionary distributions can be used successfully for out-of-distribution sample detection.

The remaining part of the paper is organized as follows. In section 2 we briefly report related works relevant to the current research. In section 3 and section 4 we introduce some background notions and details of our model formulations, respectively. Datasets, baselines, and experimental results are described in section 5.

Finally, we draw our conclusion in section 6.

2 Related Works

In compressed sensing, recovery algorithms have been extensively analyzed theoretically and numerically (Foucart & Rauhut, 2013). One of the most prominent approaches is using iterative algorithms, such as ISTA (Daubechies et al., 2004), Approximate Message Passing (AMP) (Donoho et al., 2009), Orthogonal Matching Pursuit (OMP) (Pati et al., 1993; Davis et al., 1994), and the Iterative Hard-Thresholding Algorithm (IHTA) (Blumensath & Davies, 2009). These algorithms have associated hyperparameters, including the number of iterations and the soft threshold, which can be adjusted to better balance performance and complexity. By unfolding iterative algorithms as layers of neural networks, these parameters can be learned in an end-to-end fashion from a dataset; see, for instance, the variants of Zhang & Ghanem (2018); Metzler et al. (2017); Yang et al. (2016); Borgerding et al. (2017); Sprechmann et al. (2015). In previous studies by Zhou et al. (2009; 2012), a non-parametric Bayesian method for dictionary learning was presented. The authors focused on a fully Bayesian joint compressed sensing inversion and dictionary learning, where the dictionary atoms were drawn and fixed beforehand. Bayesian compressive sensing (BCS) (Ji et al., 2008) uses relevance vector machines (RVMs) (Tipping, 2001) and a hierarchical prior to model distributions of each entry. This line of work quantifies the uncertainty of recovered entries while assuming a fixed dictionary. Our current work differs by accounting for uncertainty in the unknown dictionary by defining a distribution over it. Learned ISTA was initially introduced by Gregor & LeCun (2010). Since then, many works have followed, including those by Behrens et al. (2021); Liu et al. (2019); Chen et al. (2021); Wu et al. (2020). These subsequent works provide guidelines for improving LISTA, for example, in convergence, parameter efficiency, step size and threshold adaptation, and overshooting.
However, they assume fixed and known sparsifying dictionaries and sensing matrices. Research by Aberdam et al. (2021); Behboodi et al. (2022); Schnoor et al. (2022) has explored ways to relax these assumptions, including developing models that can handle varying sensing matrices and learn dictionaries. The authors in Schnoor et al. (2022); Behboodi et al. (2022) provide an architecture that can both incorporate varying sensing matrices and learn dictionaries. However, their focus is on the theoretical analysis of the model. Furthermore, there are theoretical studies on the convergence and generalization of unfolded networks, see for example: Giryes et al. (2018); Pu et al. (2022); Aberdam et al. (2021); Chen et al. (2018); Behboodi et al. (2022); Schnoor et al. (2022). Our paper builds on these ideas by modelling a distribution over dictionaries and accounting for epistemic uncertainty. Previous studies have explored theoretical aspects of unfolded networks, such as convergence and generalization, and we contribute to this body of research by considering the impact of varying sensing matrices and dictionaries.

The framework of variational autoencoders (VAEs) enables the learning of a generative model through latent variables (Kingma & Welling, 2013; Rezende et al., 2014). When there are data-sample-specific dictionaries, our proposed model is reminiscent of extensions of VAEs to the recurrent setting (Chung et al., 2015; 2016), which assume a sequential structure in the data and impose temporal correlations between the latent variables. Additionally, there are connections and similarities to Markov state-space models, such as the ones described in Krishnan et al. (2017). By using global dictionaries in VLISTA, the model becomes a variational Bayesian recurrent neural network. Variational Bayesian neural networks were first introduced in Blundell et al. (2015), with independent priors and variational posteriors for each layer. This work has been extended to recurrent settings in Fortunato et al. (2019). The main difference between these works and our setting lies in the prior and variational posterior: at each step, ours are conditioned on previous steps instead of being fixed across steps.

3 Background

3.1 Sparse Linear Inverse Problems

We consider linear inverse problems of the form y = Φs, where we have access to a set of linear measurements y ∈ R m of an unknown signal s ∈ R n, acquired through the forward operator Φ ∈ R m×n. Typically, in the compressed sensing literature, Φ is called the sensing, or measurement, matrix, and it represents an underdetermined system of equations for m < n. The problem of reconstructing s from (y, Φ) is ill-posed due to the shape of the forward operator. To uniquely solve for s, the signal is assumed to admit a sparse representation, x ∈ R b, in a given basis, {e_i ∈ R n}_{i=0}^b. The e_i vectors are called atoms and are collected as the columns of a matrix Ψ ∈ R n×b termed the sparsifying dictionary. Therefore, the problem of estimating s, given a limited number of observations y through the operator Φ, is translated into a sparse recovery problem: x* = argmin_x ∥x∥_0 s.t. y = ΦΨx. Given that the l0 pseudo-norm requires solving an NP-hard problem, the l1 norm is used instead as a convex relaxation of the problem. A proximal gradient descent-based approach for solving the problem yields the ISTA algorithm (Daubechies et al., 2004; Beck & Teboulle, 2009):

$$\mathbf{x}_{t}=\eta_{\theta_{t}}\left(\mathbf{x}_{t-1}+\gamma_{t}(\mathbf{\Phi}\mathbf{\Psi})^{T}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}\mathbf{x}_{t-1})\right),\tag{3}$$

where t is the index of the current iteration, x_t (x_{t−1}) is the reconstructed sparse vector at the current (previous) layer, and θt and γt are the soft-threshold and step size hyperparameters, respectively. Specifically, θt characterizes the soft-threshold function given by ηθt(x) = sign(x)(|x| − θt)+. In the ISTA formulation, those two parameters are shared across all the iterations: γt, θt → γ, θ. In what follows, we use the terms "layers" and "iterations" interchangeably when describing ISTA and its variations.
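As a concrete reference, the ISTA update of Equation 3 can be sketched in a few lines of NumPy; the step size, threshold, and iteration count below are illustrative placeholders, not tuned values:

```python
import numpy as np

def soft_threshold(x, theta):
    # eta_theta(x) = sign(x) * (|x| - theta)_+
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(y, Phi, Psi, n_iter=200, gamma=0.5, theta=0.01):
    """Plain ISTA for y = Phi @ Psi @ x with a shared step size and threshold."""
    A = Phi @ Psi                 # effective operator acting on the sparse code
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the data-fit term, then the proximal (shrinkage) step
        x = soft_threshold(x + gamma * A.T @ (y - A @ x), theta)
    return x
```

LISTA and its variants, discussed next, replace these hand-set, shared quantities with learned, per-layer ones.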

3.2 LISTA

LISTA (Gregor & LeCun, 2010) is an unfolded version of the ISTA algorithm in which each iteration is parametrized by learnable matrices. Specifically, LISTA reinterprets Equation 3 as defining the layer of a feed-forward neural network implemented as Sθt(Vt x_{t−1} + Wt y), where Vt, Wt are learnt from a dataset. In that way, those weights implicitly contain information about Φ and Ψ, which are assumed to be fixed. Like LISTA, its variations, e.g., Analytic LISTA (ALISTA) (Liu et al., 2019), NALISTA (Behrens et al., 2021), and HyperLISTA (Chen et al., 2021), require similar constraints, such as a fixed dictionary and sensing matrix, to reach the best performance. However, there are situations where only one or neither of these conditions is met (see examples in section 1).
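A minimal sketch of the resulting unfolded forward pass; in LISTA the per-layer Vt, Wt, and θt are trained end-to-end from data, whereas here they are supplied as placeholder arrays:

```python
import numpy as np

def soft_threshold(x, theta):
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lista_forward(y, layers):
    """Run the measurements y through a stack of LISTA layers.

    Each layer is a dict holding the (learned) weights V, W and the threshold
    theta; the weights implicitly encode the fixed Phi and Psi."""
    x = np.zeros(layers[0]["V"].shape[1])
    for layer in layers:
        x = soft_threshold(layer["V"] @ x + layer["W"] @ y, layer["theta"])
    return x
```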

4 Method

4.1 Augmented Dictionary Learning ISTA (A-DLISTA)

To deal with situations where Ψ is unknown and Φ is changing across samples, one can unfold the ISTA algorithm and re-parametrize the dictionary as a learnable matrix. Such an algorithm is termed Dictionary Learning ISTA (DLISTA) (Pezeshki et al., 2022; Behboodi et al., 2022; Aberdam et al., 2021) and, similarly to Equation 3, each layer is formulated as:

$$\mathbf{x}_{t}=\eta_{\theta_{t}}\left(\mathbf{x}_{t-1}+\gamma_{t}(\mathbf{\Phi}\mathbf{\Psi}_{t})^{T}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right),\tag{4}$$

with one last linear layer mapping x to the reconstructed signal s. Differently from ISTA (Equation 3), DLISTA (Equation 4) learns a dictionary specific to each layer, indicated by the subscript "t". The model can be trained end-to-end to learn all θt, γt, Ψt.

The base model is similar to (Behboodi et al., 2022; Aberdam et al., 2021). However, it requires additional changes. Consider the t-th layer of DLISTA with the varying sensing matrix Φ^k and define the following parameters:

$$\tilde{\mu}(t,\mathbf{\Phi}^{k}):=\max_{1\leq i\neq j\leq N}\left|\left(\left(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t}\right)_{i}\right)^{\top}\left(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t}\right)_{j}\right|\tag{5}$$

$$\tilde{\mu}_{2}(t,\mathbf{\Phi}^{k}):=\max_{1\leq i,j\leq N}\left|\left(\left(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t}\right)_{i}\right)^{\top}\left(\mathbf{\Phi}^{k}(\mathbf{\Psi}_{t}-\mathbf{\Psi}_{o})\right)_{j}\right|\tag{6}$$

$$\delta(\gamma,t,\mathbf{\Phi}^{k}):=\max_{i}\left|1-\gamma\left\|\left(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t}\right)_{i}\right\|_{2}^{2}\right|\tag{7}$$

where µ˜ is the mutual coherence of ΦkΨt (Foucart & Rauhut, 2013, Chapter 5) and µ˜2 is closely connected to generalized mutual coherence (Liu et al., 2019). However, in contrast to the generalized mutual coherence, µ˜2 includes the diagonal inner product for i = j. Finally, δ(·) is reminiscent of the restricted isometry property (RIP) constant (Foucart & Rauhut, 2013), a key condition for many recovery guarantees in compressed sensing. When the columns of the matrix ΦkΨt are normalized, the choice of γ = 1 yields δ(γ, t, Φk) = 0.
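For illustration, the mutual coherence μ̃ of Equation 5 is the largest off-diagonal entry (in absolute value) of the Gram matrix of Φ^kΨt and can be computed directly:

```python
import numpy as np

def mutual_coherence(A):
    """Largest |<a_i, a_j>| over distinct columns i != j of A = Phi @ Psi_t
    (Equation 5 uses the raw columns; normalize beforehand if desired)."""
    G = A.T @ A                    # Gram matrix of the columns
    np.fill_diagonal(G, 0.0)       # exclude the i = j terms
    return np.max(np.abs(G))
```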

The following proposition provides conditions on each layer to improve the reconstruction error.

Proposition 4.1. Suppose that y^k = Φ^k Ψ_o x*, where x* is the ground truth sparse vector with support supp(x*) = S, and Ψ_o is the ground truth dictionary. For DLISTA iterations given as

$$\mathbf{x}_{t}=\eta_{\theta_{t}}\left(\mathbf{x}_{t-1}+\gamma_{t}(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t})^{T}(\mathbf{y}^{k}-\mathbf{\Phi}^{k}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right),\tag{8}$$

we have:

  1. If for all t, the pairs (θt, γt, Ψt) satisfy

$$\gamma_{t}\left(\tilde{\mu}\left\|\mathbf{x}_{*}-\mathbf{x}_{t-1}\right\|_{1}+\tilde{\mu}_{2}\left\|\mathbf{x}_{*}\right\|_{1}\right)\leq\theta_{t},\tag{9}$$

then there is no false positive in each iteration. In other words, for all t, we have supp(xt) ⊆ supp(x*).

  2. Assuming that the conditions of the last step hold, then we get the following bound on the error:

$$\|\mathbf{x}_{t}-\mathbf{x}_{*}\|_{1}\leq\left(\delta(\gamma_{t})+\gamma_{t}\tilde{\mu}(|S|-1)\right)\|\mathbf{x}_{t-1}-\mathbf{x}_{*}\|_{1}+\gamma_{t}\tilde{\mu}_{2}|S|\,\|\mathbf{x}_{*}\|_{1}+|S|\,\theta_{t}.$$
We provide the derivation of Proposition 4.1 together with additional theoretical results in Appendix B.


Proposition 4.1 provides insights about the choice of γt and θt, and also suggests that (δ(γt) + γtµ˜(|S| − 1)) needs to be smaller than one to reduce the error at each step.

Figure 1: Model architectures. Left: A-DLISTA architecture. Each blue block represents a single ISTA-like iteration parametrized by the dictionary Ψt and the threshold and step size {θ^i_t, γ^i_t}. The red blocks represent the augmentation network (with shared parameters across layers) that adapts {θ^i_t, γ^i_t} for layer t based on the dictionary Ψt and the current measurement setup Φ^i for the i-th data sample. Right: VLISTA (inference) architecture. The red and blue blocks correspond to the same operations as for A-DLISTA. The pink blocks represent the posterior model used to refine the dictionary based on input data {y^i, Φ^i} and the sparse vector reconstructed at layer t, xt.

Algorithm 1 Augmented Dictionary Learning ISTA (A-DLISTA) - Inference
Require: D = {(y^i, Φ^i)}_{i=0}^{N−1} (the sensing matrix changes across samples); augmentation model f_Θ
  x^i_0 ← 0
  for t = 1, . . . , T do
    (θ^i_t, γ^i_t) ← f_Θ(Φ^i, Ψt) ▷ Augmentation step
    g ← (Φ^i Ψt)^T (y^i − Φ^i Ψt x^i_{t−1})
    u ← x^i_{t−1} + γ^i_t g
    x^i_t ← η_{θ^i_t}(u)
  end for
  return x^i_T

Algorithm 2 Variational Learning ISTA (VLISTA) - Inference
Require: D = {(y^i, Φ^i)}_{i=0}^{N−1} (the sensing matrix changes across samples); augmentation model f_Θ; posterior model f_ϕ
  x^i_0 ← 0
  for t = 1, . . . , T do
    (μt, σ²t) ← f_ϕ(x^i_{t−1}, y^i, Φ^i) ▷ Posterior parameter estimation
    Ψt ∼ N(Ψt | μt, σ²t) ▷ Dictionary sampling
    (θ^i_t, γ^i_t) ← f_Θ(Φ^i, Ψt) ▷ Augmentation step
    g ← (Φ^i Ψt)^T (y^i − Φ^i Ψt x^i_{t−1})
    u ← x^i_{t−1} + γ^i_t g
    x^i_t ← η_{θ^i_t}(u)
  end for
  return x^i_T

Upon examining Proposition 4.1, it becomes evident that γt and θt play a key role in the convergence of the algorithm. However, there is a trade-off to consider when making these choices. For instance, suppose we increase θt and decrease γt. In that case, we may ensure good support selection, but it could also increase δ(γt). In situations where the sensing matrix remains fixed, the network can possibly learn optimal choices through end-to-end training.
However, when the sensing matrix Φ differs across various data samples (i.e., Φ → Φi), it is no longer guaranteed that there exists a unique choice of γt and θt for all Φi. Since these parameters can be determined when Φ and Ψt are fixed, we suggest utilizing an augmentation network to determine γt and θt from each pair of Φi and Ψt. For a more thorough theoretical analysis, please refer to Appendix B.

We show in Figure 1 (left plot) the resulting A-DLISTA model. At each layer, A-DLISTA performs two basic operations, namely, soft-threshold (blue blocks in Figure 1) and augmentation (red blocks in Figure 1).

The former represents an ISTA-like iteration parametrized by the set of weights {Ψt, θ^i_t, γ^i_t}, whilst the latter is implemented using a convolutional neural network. As shown in the figure, the augmentation network takes as input the sensing matrix for the given data sample, Φ^i, together with the dictionary learned at the layer for which the augmentation model will generate the θ^i_t and γ^i_t parameters: (θ^i_t, γ^i_t) = f_Θ(Φ^i, Ψt), where Θ are the augmentation model's trainable parameters. Through such an operation, A-DLISTA adapts the soft-threshold and step size of each layer to the current data sample. The inference algorithmic description of A-DLISTA is given in Algorithm 1. We report more details about the augmentation network in the supplementary materials (Appendix C).
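Algorithm 1 then amounts to the following sketch; here `augmentation` stands in for the shared network f_Θ, which in the paper is a convolutional neural network rather than the plain callable assumed below, and the per-layer dictionaries Ψt would be learned, not given:

```python
import numpy as np

def soft_threshold(x, theta):
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def a_dlista_forward(y, Phi, dictionaries, augmentation):
    """A-DLISTA inference for one sample (y, Phi) with a per-sample Phi.

    dictionaries: list of per-layer Psi_t matrices (learned in the paper).
    augmentation: callable (Phi, Psi_t) -> (theta_t, gamma_t) adapting the
                  threshold and step size to the current measurement setup."""
    x = np.zeros(dictionaries[0].shape[1])
    for Psi_t in dictionaries:
        theta_t, gamma_t = augmentation(Phi, Psi_t)     # augmentation step
        A = Phi @ Psi_t
        g = A.T @ (y - A @ x)                           # gradient direction
        x = soft_threshold(x + gamma_t * g, theta_t)
    return x
```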

4.2 Variational Learning ISTA

Although A-DLISTA possesses adaptivity to data samples, it still assumes the existence of a ground truth dictionary. We relax such a hypothesis by defining a probability distribution over Ψt and formulating a variational approach, titled VLISTA, to solve the dictionary learning and sparse recovery problems jointly. To forge our variational framework whilst retaining the helpful adaptivity property of A-DLISTA, we re-interpret the soft-thresholding layers of the latter as part of a likelihood model defining the output mean for the reconstructed signal. Given its recurrent-like structure (Chung et al., 2015), we equip VLISTA with a conditional trainable prior where the condition is given by the dictionary sampled at the previous iteration.

Therefore, the full model comprises three components, namely, the conditional prior pξ(·), the variational posterior qϕ(·), and the likelihood model, pΘ(·). All components are parametrized by neural networks whose outputs represent the parameters for the underlying probability distribution. In what follows, we describe more in detail the various building blocks of the VLISTA model.

4.2.1 Prior Model

The conditional prior, p_ξ(Ψt|Ψt−1), is modelled as a Gaussian distribution whose parameters are conditioned on the previously sampled dictionary. We implement p_ξ(·) as a neural network, f_ξ(·) = [f^μ_{ξ1} ∘ g_{ξ0}(·), f^{σ²}_{ξ2} ∘ g_{ξ0}(·)], with trainable parameters ξ = {ξ0, ξ1, ξ2}. The model's architecture comprises a shared convolutional block followed by two branches generating the Gaussian distribution's mean and standard deviation, respectively.

Therefore, at layer t, the prior conditional distribution is given by: p_ξ(Ψt|Ψt−1) = ∏_{i,j} N(Ψ_{t;i,j} | μ_{t;i,j} = f^μ_{ξ1}(g_{ξ0}(Ψt−1))_{i,j}; σ²_{t;i,j} = f^{σ²}_{ξ2}(g_{ξ0}(Ψt−1))_{i,j}), where the indices i, j run over the rows and columns of Ψt. To simplify our expressions, we will abuse notation and refer to distributions like the former one as:

$$p_{\xi}(\mathbf{\Psi}_{t}|\mathbf{\Psi}_{t-1})=\mathcal{N}(\mathbf{\Psi}_{t}|\boldsymbol{\mu}_{t};\boldsymbol{\sigma}^{2}_{t}),\quad\text{where}\quad\boldsymbol{\mu}_{t}=f_{\xi_{1}}^{\mu}(g_{\xi_{0}}(\mathbf{\Psi}_{t-1}));\quad\boldsymbol{\sigma}^{2}_{t}=f_{\xi_{2}}^{\sigma^{2}}(g_{\xi_{0}}(\mathbf{\Psi}_{t-1}))\tag{10}$$

We will use the same type of notation throughout the rest of the manuscript to simplify formulas. The prior design allows for enforcing a dependence of the dictionary at iteration t on the one sampled at the previous iteration, thus allowing us to refine Ψt as the iterations proceed. The only exception is the prior imposed over the dictionary at t = 1, where there is no previously sampled dictionary. To handle this exception, we assume a standard Gaussian distributed Ψ1. The joint prior distribution over the dictionaries for VLISTA is given by:

$$p_{\xi}(\mathbf{\Psi}_{1:T})=\mathcal{N}(\mathbf{\Psi}_{1}|\mathbf{0};\mathbf{1})\prod_{t=2}^{T}\mathcal{N}(\mathbf{\Psi}_{t}|\boldsymbol{\mu}_{t};\boldsymbol{\sigma}^{2}_{t})\tag{11}$$

where μt and σ²t are defined in Equation 10.
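Ancestral sampling from the joint prior in Equation 11 can be sketched as follows; `prior_net` is a stand-in for the conditional network f_ξ (a convolutional model in the paper) returning the mean and standard deviation for the next dictionary:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prior_chain(T, shape, prior_net):
    """Draw Psi_1 ~ N(0, 1), then Psi_t ~ N(mu_t, sigma_t^2) for t = 2..T,
    with (mu_t, sigma_t) produced from the previously sampled dictionary."""
    Psi = rng.standard_normal(shape)                    # Psi_1 ~ N(0, 1)
    chain = [Psi]
    for _ in range(1, T):
        mu, sigma = prior_net(Psi)
        Psi = mu + sigma * rng.standard_normal(shape)   # reparametrized draw
        chain.append(Psi)
    return chain
```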

4.2.2 Posterior Model

Similarly to the prior model, the variational posterior is also modelled as a Gaussian distribution parametrized by a neural network, f_ϕ(·) = [f^μ_{ϕ1} ∘ h_{ϕ0}(·), f^{σ²}_{ϕ2} ∘ h_{ϕ0}(·)], that outputs the mean and variance of the underlying probability distribution:

$$q_{\phi}(\mathbf{\Psi}_{t}|\mathbf{x}_{t-1},\mathbf{y}^{i},\mathbf{\Phi}^{i})=\mathcal{N}(\mathbf{\Psi}_{t}|\boldsymbol{\mu}_{t};\boldsymbol{\sigma}^{2}_{t}),\quad\text{where}\tag{12}$$
$$\boldsymbol{\mu}_{t}=f_{\phi_{1}}^{\mu}(h_{\phi_{0}}(\mathbf{x}_{t-1},\mathbf{y}^{i},\mathbf{\Phi}^{i}));\quad\boldsymbol{\sigma}^{2}_{t}=f_{\phi_{2}}^{\sigma^{2}}(h_{\phi_{0}}(\mathbf{x}_{t-1},\mathbf{y}^{i},\mathbf{\Phi}^{i}))$$

The posterior distribution for the dictionary, Ψt, at layer t is conditioned on the data, {y^i, Φ^i}, as well as on the reconstructed signal at the previous layer, x_{t−1}. Therefore, the joint posterior probability over the dictionaries is given by:

$$q_{\phi}(\mathbf{\Psi}_{1:T}|\mathbf{x}_{1:T},\mathbf{y}^{i},\mathbf{\Phi}^{i})=\prod_{t=1}^{T}q_{\phi}(\mathbf{\Psi}_{t}|\mathbf{x}_{t-1},\mathbf{y}^{i},\mathbf{\Phi}^{i})\tag{13}$$

When considering our selection of Gaussian distributions for our prior and posterior models, we prioritized computational and implementation convenience. However, it's important to note that our framework is not limited to this distribution family. As long as the distributions used are reparametrizable (Kingma & Welling, 2013), meaning that we can obtain gradients of random samples with respect to their parameters and we can evaluate and differentiate their density, VLISTA can support any flexible distribution family. This includes mixtures of Gaussians to incorporate heavier tails and distributions resulting from normalizing flows (Rezende & Mohamed, 2015).

4.2.3 Likelihood Model

The soft-thresholding block of A-DLISTA is at the heart of the reconstruction module. Similarly to the prior and posterior, the likelihood distribution is modelled as a Gaussian parametrized by the output of an A-DLISTA block. In particular, the network generates the mean vector for the Gaussian distribution while we treat the standard deviation as a tunable hyperparameter. By combining these elements, we can formulate the joint log-likelihood distribution as:


Figure 2: VLISTA graphical model. Dependencies on y i and Φi are factored out for simplicity.

The sampling is done only based on the posterior qϕ(Ψt|xt−1, y i, Φi). Dashed lines represent variational approximations.

$$\log p_{\Theta}(\mathbf{x}_{1:T}=\mathbf{x}_{gt}^{i}|\mathbf{\Psi}_{1:T},\mathbf{y}^{i},\mathbf{\Phi}^{i})=\sum_{t=1}^{T}\log\mathcal{N}(\mathbf{x}_{gt}^{i}|\boldsymbol{\mu}_{t},\boldsymbol{\sigma}^{2}_{t}),\quad\text{where}\tag{14}$$
$$\boldsymbol{\mu}_{t}=\text{A-DLISTA}(\mathbf{\Psi}_{t},\mathbf{x}_{t-1},\mathbf{y}^{i},\mathbf{\Phi}^{i};\Theta);\quad\boldsymbol{\sigma}^{2}_{t}=\delta$$

where δ is a hyperparameter of the network, x^i_gt represents the ground truth value for the underlying unknown sparse signal for the i-th data sample, Θ is the set of A-DLISTA's parameters, and p_Θ(x_{1:T} = x^i_gt|·) represents the likelihood for the ground truth x^i_gt, at each time step t ∈ [1, T], under the likelihood model given the current reconstruction. Note that in Equation 14 we use the same x^i_gt through the entire sequence t ∈ [1, T].
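Each summand in Equation 14 is a diagonal Gaussian log-density whose mean is the A-DLISTA output and whose variance is the fixed hyperparameter δ; a direct sketch:

```python
import numpy as np

def gaussian_loglik(x_gt, mu, sigma):
    """log N(x_gt | mu, sigma^2) summed over all entries; in VLISTA mu is the
    layer's A-DLISTA reconstruction and sigma^2 = delta is a hyperparameter."""
    return np.sum(-0.5 * np.log(2.0 * np.pi * sigma**2)
                  - 0.5 * ((x_gt - mu) / sigma) ** 2)
```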

We report in Figure 1 (right plot) the inference architecture for VLISTA. It is crucial to note that during inference VLISTA does not require the prior model which is used for training only. Additionally, its graphical model and algorithmic description are shown in Figure 2 and Algorithm 2, respectively. We report further details concerning the architecture of the models and the objective function in the supplementary material (Appendix C and Appendix D).

4.2.4 Training Objective

Our approach involves training all VLISTA components in an end-to-end fashion. To accomplish that we maximize the Evidence Lower Bound (ELBO):

$$\begin{aligned}\text{ELBO}&=\sum_{t=1}^{T}\mathbb{E}_{\mathbf{\Psi}_{1:t}\sim q_{\phi}(\mathbf{\Psi}_{1:t}|\mathbf{x}_{0:t-1},\mathcal{D}^{i})}\left[\log p_{\Theta}(\mathbf{x}_{t}=\mathbf{x}_{gt}^{i}|\mathbf{\Psi}_{1:t},\mathcal{D}^{i})\right]\\&\quad-\sum_{t=2}^{T}\mathbb{E}_{\mathbf{\Psi}_{1:t-1}\sim q_{\phi}(\mathbf{\Psi}_{1:t-1}|\mathbf{x}_{t-1},\mathcal{D}^{i})}\left[D_{\mathrm{KL}}\left(q_{\phi}(\mathbf{\Psi}_{t}|\mathbf{x}_{t-1},\mathcal{D}^{i})\,\|\,p_{\xi}(\mathbf{\Psi}_{t}|\mathbf{\Psi}_{t-1})\right)\right]\\&\quad-D_{\mathrm{KL}}\left(q_{\phi}(\mathbf{\Psi}_{1}|\mathbf{x}_{0},\mathcal{D}^{i})\,\|\,p(\mathbf{\Psi}_{1})\right)\end{aligned}\tag{15}$$

As we can see from Equation 15, the ELBO comprises three terms. The first term is the sum of expected log-likelihoods of the target signal at each time step. The second term is the sum of KL divergences between the approximate posterior and the prior at each time step. The third term is the KL divergence between the approximate posterior at the initial time step and a prior. In our implementation, we set the number of layers to T and initialize the input signal to zero.

To evaluate the likelihood contribution in Equation 15, we marginalize over dictionaries sampled from the posterior q_ϕ(Ψ_{1:t}|x_{0:t−1}, D^i). In contrast, the last two terms in the equation represent the KL divergence contribution between the prior and posterior distributions. It is worth noting that the prior in the last term is not conditioned on the previously sampled dictionary, given that p_ξ(Ψ1) → p(Ψ1) = N(Ψ1|0; 1) (refer to Equation 10 and Equation 11). We refer the reader to Appendix D for the derivation of the ELBO.
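Since both q_ϕ and p_ξ are diagonal Gaussians, the KL terms in Equation 15 have a closed form; a sketch, with all parameters passed as arrays of matching shape:

```python
import numpy as np

def kl_diag_gaussians(mu_q, sig_q, mu_p, sig_p):
    """KL( N(mu_q, sig_q^2) || N(mu_p, sig_p^2) ), summed over all entries."""
    return np.sum(np.log(sig_p / sig_q)
                  + (sig_q**2 + (mu_q - mu_p)**2) / (2.0 * sig_p**2)
                  - 0.5)
```

Setting mu_p = 0 and sig_p = 1 recovers the last term of Equation 15, the divergence against the standard Gaussian prior p(Ψ1).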

5 Experiments

5.1 Datasets And Baselines

We evaluate our models' performance by comparing them against classical and ML-based baselines on three datasets: MNIST, CIFAR10, and a synthetic dataset. Concerning the synthetic dataset, we follow a similar approach as in Chen et al. (2018); Liu & Chen (2019); Behrens et al. (2021). However, in contrast to the mentioned works, we generate a different Φ matrix for each datum by sampling i.i.d. entries from a standard Gaussian distribution. We generate the ground truth sparse signals by sampling the entries from a standard Gaussian and setting each entry to be non-zero with a probability of 0.1. We generate 5K samples and use 3K for training, 1K for model selection, and 1K for testing. Concerning MNIST and CIFAR10, we train the models using the full images, without applying any crop. For CIFAR10, we gray-scale and normalize the images. We generate the corresponding observations, y^i, by multiplying each sensing matrix with the ground truth image: y^i = Φ^i s^i. We compare the A-DLISTA and VLISTA models against classical and ML baselines.
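The per-sample generation just described can be sketched as follows (our reading of the setup; the dimensions are placeholders):

```python
import numpy as np

def make_synthetic_sample(m, n, p_nonzero=0.1, rng=None):
    """One synthetic datum: a per-sample Gaussian sensing matrix Phi and a
    ground-truth x with standard-normal entries kept with probability 0.1."""
    rng = np.random.default_rng() if rng is None else rng
    Phi = rng.standard_normal((m, n))
    x = rng.standard_normal(n) * (rng.random(n) < p_nonzero)
    y = Phi @ x
    return y, Phi, x
```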

Our classical baselines use the ISTA algorithm, and we pre-compute the dictionary by either considering the canonical or the wavelet basis or using the SPCA algorithm. Our ML baselines use different unfolded learning versions of ISTA, such as LISTA. To demonstrate the benefits of adaptivity, we perform an ablation study on A-DLISTA by removing its augmentation network and making the parameters θt, γt learnable only through backpropagation. We refer to the non-augmented version of A-DLISTA as DLISTA. Therefore, for DLISTA, θt and γt cannot be adapted to the specific input sensing matrix. Moreover, we consider BCS (Ji et al., 2008) as a specific Bayesian baseline for VLISTA. Finally, we conduct Out-Of-Distribution (OOD) detection experiments. We fixed the number of layers to three for all ML models to compare their performance. The classical baselines do not possess learnable parameters. Therefore, we performed an extensive grid search to find the best hyperparameters for them. More details concerning the training procedure and ablation studies can be found in Appendix D and Appendix F.

5.2 Synthetic Dataset

Regarding the synthetic dataset, we evaluate model performance by computing the median of the c.d.f. of the reconstruction NMSE (Figure 3). A-DLISTA's adaptivity appears to offer an advantage over the other models. However, concerning VLISTA, we observe a drop in performance. Such behaviour is consistent across experiments and can be attributed to a few factors. One possible reason is the noise introduced during training by the random sampling procedure used to generate the dictionary. Additionally, the amortization gap that affects all models based on amortized variational inference (Cremer et al., 2018) can contribute to this effect. Despite this, VLISTA still performs comparably to BCS.

Lastly, we note that ALISTA and NALISTA do not perform as well as the other models. This is likely due to the optimization procedure these two models require to evaluate the weight matrix W. The computation of W requires a fixed sensing matrix, a condition not satisfied in the current setup. Regarding non-static measurements, we averaged across multiple Φi, thus obtaining a non-optimal W matrix. To support our hypothesis, we report in Appendix F results for a static measurement scenario in which ALISTA and NALISTA achieve very high performance.

Figure 3: Median of the reconstruction NMSE in dB (the lower the better) for different numbers of measurements.
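For reference, the evaluation metric for the synthetic dataset can be sketched as a hedged NumPy implementation of the per-sample NMSE in dB and its median over a test set (function names are illustrative):

```python
import numpy as np

def nmse_db(x_hat, x_true):
    # Per-sample NMSE in dB: 10 * log10( ||x_hat - x||^2 / ||x||^2 ).
    num = np.sum((x_hat - x_true) ** 2, axis=-1)
    den = np.sum(x_true ** 2, axis=-1)
    return 10.0 * np.log10(num / den)

def median_nmse_db(x_hat, x_true):
    # Summary statistic used for the synthetic dataset: median over the test set.
    return np.median(nmse_db(x_hat, x_true))
```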

5.3 Image Reconstruction - MNIST & CIFAR10

When evaluating the different models on the MNIST and CIFAR10 datasets, we use the Structural Similarity Index Measure (SSIM) to measure their performance. As for the synthetic dataset, we experienced strong instabilities when training ALISTA and NALISTA due to the non-static measurement setup; therefore, we do not provide results for these models. It is important to note that the poor performance of ALISTA and NALISTA results from our specific experimental setup, which differs from the static case considered in the formulation of these models. We refer to Appendix F for results in a static measurements scenario. Looking at the results in Table 1 and Table 2, we can draw conclusions similar to those for the synthetic dataset. Additionally, we report results from three classical baselines (subsection 5.1). Among non-Bayesian models, A-DLISTA shows the best results. Furthermore, comparing A-DLISTA with its non-augmented version, DLISTA, one can notice the benefits of using an augmentation network to make the model adaptive. Concerning Bayesian approaches, VLISTA outperforms BCS. However, it is important to note that BCS does not have trainable parameters, unlike VLISTA; therefore, the higher performance of VLISTA comes at the price of an expensive training procedure. Similar to the synthetic dataset, VLISTA exhibits a drop in performance compared to A-DLISTA on MNIST and CIFAR10.

Table 1: MNIST SSIM (the higher the better) for different numbers of measurements. The first three rows correspond to "classical" baselines. We highlight in bold the best performance among Bayes and Non-Bayes models.

All values are SSIM ↑ (×10⁻¹).

| | Model | 1 | 10 | 100 | 300 | 500 |
|---|---|---|---|---|---|---|
| Non-Bayes | Canonical | 0.39±0.12 | 0.56±0.04 | 2.20±0.04 | 3.75±0.05 | 4.94±0.06 |
| | Wavelet | 0.40±0.09 | 0.56±0.06 | 2.30±0.06 | 3.90±0.05 | 5.05±0.01 |
| | SPCA | 0.45±0.11 | 0.65±0.06 | 2.72±0.06 | 3.52±0.08 | 4.98±0.08 |
| | LISTA | **0.96±0.01** | 1.11±0.01 | 3.70±0.01 | 5.36±0.01 | 6.31±0.01 |
| | DLISTA | **0.96±0.01** | 1.09±0.01 | 4.01±0.02 | 5.57±0.01 | 6.26±0.01 |
| | A-DLISTA (our) | **0.96±0.01** | **1.17±0.01** | **4.79±0.01** | **6.15±0.01** | **6.70±0.01** |
| Bayes | BCS | 0.05±0.01 | 0.60±0.01 | 1.10±0.01 | 4.48±0.02 | **6.23±0.02** |
| | VLISTA (our) | **0.80±0.03** | **0.94±0.02** | **3.29±0.01** | **4.73±0.01** | 6.02±0.01 |

5.4 Out Of Distribution Detection

This section focuses on a crucial distinction between non-Bayesian models and VLISTA for solving linear inverse problems. Unlike any non-Bayesian approach to compressed sensing, VLISTA allows quantifying uncertainty on the reconstructed signals. This means that it can detect out-of-distribution samples without requiring ground truth data at inference time. In contrast to other Bayesian techniques that design specific priors to meet the sparsity constraints after marginalization (Ji et al., 2008; Zhou et al., 2014), VLISTA completely overcomes such an issue, as the thresholding operations are not affected by the marginalization over dictionaries. To show that VLISTA can detect OOD samples, we employ the MNIST dataset. First, we split the dataset into two distinct subsets: the In-Distribution (ID) set, comprising images of three randomly chosen digits, and the OOD set, comprising images of the remaining digits. We then partition the ID set into training, validation, and test sets for VLISTA. Once the model was trained, it was tasked with reconstructing images from the ID test and OOD sets. To assess the model's ability to detect OOD samples, we utilized a two-sample t-test, leveraging the per-pixel variance of the reconstructed ID, $\{\sigma_{pp}^{\mathrm{ID};i}\}_{i=0}^{P-1}$, and OOD, $\{\sigma_{pp}^{\mathrm{OOD};i}\}_{i=0}^{P-1}$, images (with P the number of pixels). To compute the per-pixel variance, we reconstruct each image 100 times, sampling a different dictionary for each trial. We then construct the empirical c.d.f. of the per-pixel variance for each image.

Table 2: CIFAR10 SSIM (the higher the better) for different numbers of measurements. The first three rows correspond to "classical" baselines. We highlight in bold the best performance among Bayes and Non-Bayes models. All values are SSIM ↑ (×10⁻¹).

| | Model | 1 | 10 | 100 | 300 | 500 |
|---|---|---|---|---|---|---|
| Non-Bayes | Canonical | 0.17±0.10 | 0.21±0.02 | 0.33±0.02 | 0.47±0.02 | 0.58±0.03 |
| | Wavelet | 0.23±0.22 | 0.42±0.02 | 1.44±0.06 | 2.52±0.09 | 3.43±0.08 |
| | SPCA | 0.31±0.19 | 0.43±0.02 | 1.53±0.04 | 2.66±0.08 | 3.58±0.07 |
| | LISTA | **1.34±0.02** | 1.67±0.02 | 3.10±0.01 | 4.20±0.01 | 4.71±0.01 |
| | DLISTA | 1.16±0.02 | **1.96±0.02** | 4.50±0.01 | 5.15±0.01 | 5.42±0.01 |
| | A-DLISTA (our) | **1.34±0.02** | 1.77±0.02 | **4.74±0.01** | **5.26±0.01** | **5.83±0.01** |
| Bayes | BCS | 0.04±0.01 | 0.48±0.01 | 0.59±0.01 | 1.29±0.01 | 1.91±0.01 |
| | VLISTA (our) | **0.86±0.03** | **1.25±0.03** | **3.59±0.02** | **4.01±0.01** | **4.36±0.01** |

By using the mean of the c.d.f. as a summary statistic, we can apply the two-sample t-test to detect OOD samples. We report the results in Figure 4. As a reference p-value for rejecting the null hypothesis that the two variance distributions are the same, we consider a significance level of 0.05 (green solid line).

We conducted multiple tests at different noise levels to assess the robustness of OOD detection to measurement noise. For this task, we used BCS as a baseline. However, due to the different nature of the BCS framework, we utilized a slightly different evaluation procedure to determine its p-values: we employed the same ID and OOD splits as for VLISTA but considered the c.d.f. of the reconstruction error evaluated by the model. The remainder of the process was identical to that of VLISTA.
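The detection procedure above can be sketched as follows, assuming SciPy's Welch two-sample t-test as the test statistic. The per-image variance summaries below are synthetic stand-ins for the c.d.f. means computed from VLISTA reconstructions, used only to illustrate the mechanics.

```python
import numpy as np
from scipy import stats

def ood_pvalue(var_id, var_ood):
    """Two-sample (Welch) t-test on the summary statistics (mean of the
    per-pixel variance c.d.f.) of ID vs. OOD reconstructions; a small
    p-value rejects the null that the two variance distributions coincide."""
    return stats.ttest_ind(var_id, var_ood, equal_var=False).pvalue

# Hypothetical per-image summaries: OOD reconstructions vary more across
# sampled dictionaries than ID ones.
rng = np.random.default_rng(0)
var_id = rng.normal(loc=0.10, scale=0.01, size=200)
var_ood = rng.normal(loc=0.20, scale=0.01, size=200)
p = ood_pvalue(var_id, var_ood)  # well below the 0.05 reference level
```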

6 Conclusion

Our study introduces a novel approach called VLISTA, which combines dictionary learning and sparse recovery into a single variational framework. Traditional compressed sensing methods rely on a known ground-truth dictionary to reconstruct signals. Moreover, state-of-the-art LISTA-type models typically assume a fixed measurement setup. In our work, we relax both assumptions. First, we propose a soft-thresholding algorithm, termed A-DLISTA, that can handle different sensing matrices. We theoretically justify the use of an augmentation network to adapt the threshold and step size of each layer based on the current input and the learned dictionary. Finally, we formulate a probabilistic assumption about the existence of a ground-truth dictionary and use it to build the VLISTA framework. Our empirical results show that A-DLISTA improves upon the performance of classical and ML baselines in a non-static measurement scenario. Although VLISTA does not outperform A-DLISTA, it allows for uncertainty evaluation on the reconstructed signals, a valuable feature for detecting out-of-distribution data. In contrast, none of the non-Bayesian models can perform such a task. Unlike other Bayesian approaches, VLISTA does not require specific priors to preserve sparsity after marginalization. Instead, the averaging operation applies to the sparsifying dictionary, not to the sparse signal itself.

Figure 4: p-value for OOD rejection as a function of the noise level. The green line represents a reference p-value equal to 0.05.

Impact Statement

This work proposes two new models to jointly solve the dictionary learning and sparse recovery problems, especially in scenarios characterized by a varying sensing matrix. We believe the potential societal consequences of our work to be chiefly positive, since it might contribute to a wider adoption of LISTA-type models in applications requiring fast solutions to underdetermined inverse problems, especially those with varying forward operators. Nonetheless, it is crucial to exercise caution and thoroughly understand the behavior of A-DLISTA and VLISTA, as with any other LISTA model, in order to obtain reliable predictions.

References

Aviad Aberdam, Alona Golts, and Michael Elad. Ada-LISTA: Learned Solvers Adaptive to Varying Models. IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1–1, 2021.

Tim Bakker, Herke van Hoof, and Max Welling. Experimental design for mri by greedy policy search. Advances in Neural Information Processing Systems, 33:18954–18966, 2020.

Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM journal on imaging sciences, 2(1):183–202, 2009.

Arash Behboodi, Holger Rauhut, and Ekkehard Schnoor. Compressive Sensing and Neural Networks from a Statistical Learning Perspective. In Gitta Kutyniok, Holger Rauhut, and Robert J. Kunsch (eds.), Compressed Sensing in Information Processing, Applied and Numerical Harmonic Analysis, pp. 247–277. Springer International Publishing, Cham, 2022.

Freya Behrens, Jonathan Sauder, and Peter Jung. Neurally Augmented ALISTA. In International Conference on Learning Representations, 2021.

Thomas Blumensath and Mike E. Davies. Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27(3):265–274, November 2009.

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In International conference on machine learning, pp. 1613–1622. PMLR, 2015.

Mark Borgerding, Philip Schniter, and Sundeep Rangan. AMP-inspired deep networks for sparse linear inverse problems. IEEE Transactions on Signal Processing, 65(16):4293–4308, 2017. Publisher: IEEE.

Xiaohan Chen, Jialin Liu, Zhangyang Wang, and Wotao Yin. Theoretical linear convergence of unfolded ISTA and its practical weights and thresholds. Advances in Neural Information Processing Systems, 31, 2018.

Xiaohan Chen, Jialin Liu, Zhangyang Wang, and Wotao Yin. Hyperparameter tuning is all you need for lista. Advances in Neural Information Processing Systems, 34, 2021.

Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. Advances in neural information processing systems, 28, 2015.

Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.

Chris Cremer, Xuechen Li, and David Duvenaud. Inference suboptimality in variational autoencoders. In International Conference on Machine Learning, pp. 1078–1086. PMLR, 2018.

Ingrid Daubechies, Michel Defrise, and Christine De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences, 57(11):1413–1457, 2004.

Geoffrey M. Davis, Stephane G. Mallat, and Zhifeng Zhang. Adaptive time-frequency decompositions. Optical Engineering, 33(7):2183–2191, July 1994.

David L. Donoho, Arian Maleki, and Andrea Montanari. Message-passing algorithms for compressed sensing. Proceedings of the National Academy of Sciences, 106(45):18914–18919, November 2009.

Meire Fortunato, Charles Blundell, and Oriol Vinyals. Bayesian Recurrent Neural Networks. arXiv:1704.02798 [cs, stat], May 2019. URL http://arxiv.org/abs/1704.02798. arXiv: 1704.02798.

Simon Foucart and Holger Rauhut. A Mathematical Introduction to Compressive Sensing. Applied and Numerical Harmonic Analysis. Springer New York, New York, NY, 2013.

Raja Giryes, Yonina C. Eldar, Alex M. Bronstein, and Guillermo Sapiro. Tradeoffs between convergence speed and reconstruction accuracy in inverse problems. IEEE Transactions on Signal Processing, 66(7): 1676–1690, 2018. Publisher: IEEE.

Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In 27th International Conference on Machine Learning, ICML 2010, 2010.

Shihao Ji, Ya Xue, and Lawrence Carin. Bayesian Compressive Sensing. IEEE Transactions on Signal Processing, 56(6):2346–2356, June 2008. Conference Name: IEEE Transactions on Signal Processing.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

Rahul Krishnan, Uri Shalit, and David Sontag. Structured inference networks for nonlinear state space models. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1), Feb. 2017.

Jialin Liu and Xiaohan Chen. Alista: Analytic weights are as good as learned weights in lista. In International Conference on Learning Representations (ICLR), 2019.

Jialin Liu, Xiaohan Chen, Zhangyang Wang, and Wotao Yin. ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA. In International Conference on Learning Representations, 2019.

Chris Metzler, Ali Mousavi, and Richard Baraniuk. Learned D-AMP: Principled Neural Network based Compressive Image Recovery. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 1770–1781. Curran Associates, Inc., 2017.

Y.C. Pati, R. Rezaiifar, and P.S. Krishnaprasad. Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. In Proceedings of 27th Asilomar Conference on Signals, Systems and Computers, pp. 40–44 vol.1, November 1993. doi: 10.1109/ACSSC.1993.342465.

Hamed Pezeshki, Fabio Valerio Massoli, Arash Behboodi, Taesang Yoo, Arumugam Kannan, Mahmoud Taherzadeh Boroujeni, Qiaoyu Li, Tao Luo, and Joseph B Soriaga. Beyond codebook-based analog beamforming at mmwave: Compressed sensing and machine learning methods. In GLOBECOM 2022-2022 IEEE Global Communications Conference, pp. 776–781. IEEE, 2022.

Wei Pu, Yonina C. Eldar, and Miguel R. D. Rodrigues. Optimization Guarantees for ISTA and ADMM Based Unfolded Networks. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8687–8691, May 2022.

Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In International conference on machine learning, pp. 1530–1538. PMLR, 2015.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International conference on machine learning, pp. 1278–1286. PMLR, 2014.

Javier Rodríguez-Fernández, Nuria González-Prelcic, Kiran Venugopal, and Robert W Heath. Frequencydomain compressive channel estimation for frequency-selective hybrid millimeter wave mimo systems. IEEE Transactions on Wireless Communications, 17(5):2946–2960, 2018.

Ekkehard Schnoor, Arash Behboodi, and Holger Rauhut. Generalization Error Bounds for Iterative Recovery Algorithms Unfolded as Neural Networks. arXiv:2112.04364, January 2022.

P. Sprechmann, A. M. Bronstein, and G. Sapiro. Learning Efficient Sparse and Low Rank Models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9):1821–1833, September 2015. ISSN 0162-8828.

Michael E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of machine learning research, 1(Jun):211–244, 2001.

Kailun Wu, Yiwen Guo, Ziang Li, and Changshui Zhang. Sparse coding with gated learned ista. In International Conference on Learning Representations, 2020.

Yan Yang, Jian Sun, Huibin Li, and Zongben Xu. Deep ADMM-Net for Compressive Sensing MRI. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016.

Tianwei Yin, Zihui Wu, He Sun, Adrian V Dalca, Yisong Yue, and Katherine L Bouman. End-to-end sequential sampling and reconstruction for mri. arXiv preprint arXiv:2105.06460, 2021.

Jian Zhang and Bernard Ghanem. ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1828–1837, 2018.

Mingyuan Zhou, Haojun Chen, Lu Ren, Guillermo Sapiro, Lawrence Carin, and John Paisley. Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations. In Advances in Neural Information Processing Systems, volume 22. Curran Associates, Inc., 2009.

Mingyuan Zhou, Haojun Chen, John Paisley, Lu Ren, Lingbo Li, Zhengming Xing, David Dunson, Guillermo Sapiro, and Lawrence Carin. Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images. IEEE Transactions on Image Processing, 21(1):130–144, January 2012.

Zhou Zhou, Kaihui Liu, and Jun Fang. Bayesian compressive sensing using normal product priors. IEEE Signal Processing Letters, 22(5):583–587, 2014.

A Appendix

B Theoretical Analysis

In this section, we present the theoretical motivations for the A-DLISTA design choices. The design is motivated by the convergence analysis of the LISTA method. We start by recalling a result from Chen et al. (2018), upon which our analysis relies. The authors of Chen et al. (2018) consider the inverse problem y = Ax∗, with x∗ the ground-truth sparse vector, and use a model whose layers are given by:

$$\mathbf{x}_{t}=\eta_{\theta_{t}}\left(\mathbf{x}_{t-1}+\mathbf{W}_{t}^{\mathsf{T}}(\mathbf{y}-\mathbf{A}\mathbf{x}_{t-1})\right),\tag{16}$$

where (Wt, θt) are learnable parameters.

The following result from Chen et al. (2018, Theorem 2) is adapted to noiseless setting.

Theorem B.1. *Suppose that the iterations of LISTA are given by Equation 16, and assume* ∥x∗∥∞ ≤ B *and* ∥x∗∥0 ≤ s. *There exists a sequence of parameters* {Wt, θt} *such that* ∥xt − x∗∥2 ≤ sB exp(−ct) *for all* t = 1, 2, . . . , *for a constant* c > 0 *that depends only on the sensing matrix A and the sparsity level s.*

It is important to note that the above convergence result only assures the existence of parameters that are good for convergence but does not guarantee that training will necessarily find them. The latter result is, in general, difficult to obtain.

The proof in Chen et al. (2018) has two main steps:

  1. No false positive: the thresholds are chosen such that the entries outside the support of x∗ remain zero. The choice of threshold depends, among other things, on the coherence of Wt and A. We provide more details below.

  2. Error bounds for x∗: assuming proper choice of thresholds, the authors derive bounds on the recovery error.

We focus on adapting these steps to our setup. Note that to assure there is no false positive, it is common in classical ISTA literature to start from large thresholds, so the soft thresholding function aggressively maps many entries to zero, and then gradually reduce the threshold value as the iterations progress.
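The schedule described above can be illustrated with a plain ISTA sketch in NumPy: a large initial threshold aggressively suppresses false positives, and the threshold then decays geometrically toward a floor as the iterations progress. The specific schedule and constants here are illustrative, not those used in the paper.

```python
import numpy as np

def soft_threshold(v, theta):
    # eta_theta(v): elementwise soft-thresholding operator.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista(y, A, n_iter=200, theta0=0.5, decay=0.98, theta_min=1e-4):
    """Plain ISTA with a decreasing threshold schedule: start aggressive
    (large theta, many entries mapped to zero) and relax toward a floor.
    Schedule and constants are illustrative."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # step size 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    theta = theta0
    for _ in range(n_iter):
        x = soft_threshold(x + gamma * A.T @ (y - A @ x), gamma * theta)
        theta = max(theta * decay, theta_min)
    return x
```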

B.1 Analysis With Known Ground-Truth Dictionary

Let's consider the extension of Theorem B.1 to our setup:

$$\mathbf{x}_{t}=\eta_{\theta_{t}}\left(\mathbf{x}_{t-1}+\gamma_{t}(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t})^{\mathsf{T}}(\mathbf{y}^{k}-\mathbf{\Phi}^{k}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right).\tag{17}$$

Note that in our case, the weight Wt is replaced with γt(ΦkΨt), with learnable Ψt and γt. Besides, the matrix A is replaced by ΦkΨt, and the forward model is given by y k = ΦkΨox∗. The sensing matrix Φk can change across samples, hence the dependence on the sample index k.

If the learned dictionary Ψt is equal to Ψo, the layers of our model are equal to classical iterative softthresholding algorithms with learnable step-size γt and threshold θt.

There are many convergence results in the literature; see, for example, Daubechies et al. (2004). We can use the convergence analysis of iterative soft-thresholding algorithms based on the mutual coherence, similar to Chen et al. (2018); Behrens et al. (2021). As a reminder, the mutual coherence of a matrix M is defined as:

$$\mu(M):=\operatorname*{max}_{1\leq i\neq j\leq N}\left|M_{i}^{\top}M_{j}\right|,\tag{18}$$

where $M_i$ is the i-th column of M.

The convergence result requires that the mutual coherence µ(ΦkΨo) be sufficiently small, for example of order 1/(2s) with s the sparsity, and that the matrix ΦkΨo be column normalized, i.e., $\|(\mathbf{\Phi}^{k}\mathbf{\Psi}_{o})_{i}\|_{2}=1$. Then the step size can be chosen equal to one, i.e., γt = 1. The thresholds θt are chosen to avoid false positives using a schedule similar to the one mentioned above, that is, starting with a large threshold θ0 and then gradually decreasing it to a certain limit. We do not repeat the derivations; interested readers can refer to Daubechies et al. (2004); Behrens et al. (2021); Chen et al. (2018) and references therein.
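For concreteness, the mutual coherence of Equation 18 can be computed as follows; `normalize_columns` is an illustrative helper for the column-normalization assumption.

```python
import numpy as np

def normalize_columns(M):
    # Illustrative helper for the column-normalization assumption ||M_i||_2 = 1.
    return M / np.linalg.norm(M, axis=0, keepdims=True)

def mutual_coherence(M):
    # mu(M) = max_{i != j} |M_i^T M_j|  (Eq. 18), with M_i the i-th column.
    G = M.T @ M
    off_diagonal = G - np.diag(np.diag(G))
    return np.max(np.abs(off_diagonal))
```

An orthonormal matrix has coherence zero, while highly correlated columns push the coherence toward one, which is exactly the regime where the recovery guarantees degrade.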

Remark B.2. When the dictionary Ψo is known, we can adapt the algorithm to a varying sensing matrix Φk by first normalizing the columns of ΦkΨo. What is important to note is that the threshold choice is a function of the mutual coherence of the sensing matrix. So with each new sensing matrix, the thresholds should be adapted following the mutual coherence value. This observation partially justifies the choice of thresholds as a function of the dictionary and the sensing matrix, hence the augmentation network.

B.2 Analysis With Unknown Dictionary

We now move to the scenario where the dictionary is itself learned, and not known in advance.

Consider the layer t of DLISTA with the sensing matrix Φk, and define the following parameters:

$$\tilde{\mu}(t,\mathbf{\Phi}^{k}):=\max_{1\leq i\neq j\leq N}\left|((\mathbf{\Phi}^{k}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t})_{j}\right|\tag{19}$$

$$\tilde{\mu}_{2}(t,\mathbf{\Phi}^{k}):=\max_{1\leq i,j\leq N}\left|((\mathbf{\Phi}^{k}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}^{k}(\mathbf{\Psi}_{t}-\mathbf{\Psi}_{o}))_{j}\right|\tag{20}$$

$$\delta(\gamma,t,\mathbf{\Phi}^{k}):=\max_{i}\left|1-\gamma\left\|(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t})_{i}\right\|_{2}^{2}\right|\tag{21}$$

Some comments are in order:

  • The term µ˜ is the mutual coherence of the matrix ΦkΨt.

  • The term µ˜2 is closely connected to generalized mutual coherence, however, it differs in that unlike generalized mutual coherence, it includes the diagonal inner product for i = j. It captures the effect of mismatch with ground-truth dictionary.

  • Finally, the term δ(·) is reminiscent of the restricted isometry property (RIP) constant (Foucart & Rauhut, 2013), a key condition for many recovery guarantees in compressed sensing. When the columns of the matrix ΦkΨt are normalized, the choice of γ = 1 yields δ(γ, t, Φk) = 0.

For the rest of the paper, for simplicity, we keep only the dependence on γ in the notation and drop the dependence of µ̃, µ̃2 and δ on t, Φk and Ψt.
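The three quantities of Equations 19-21 can be computed directly from Φ, Ψt and Ψo. The sketch below is illustrative (not part of the model) and checks the remark that normalized columns with γ = 1 give δ = 0.

```python
import numpy as np

def layer_constants(Phi, Psi_t, Psi_o, gamma):
    """Compute mu_tilde (Eq. 19), mu_tilde_2 (Eq. 20) and delta (Eq. 21)
    for one layer; illustrative helper, not part of the model."""
    D_t = Phi @ Psi_t
    D_o = Phi @ Psi_o
    G = D_t.T @ D_t
    mu = np.max(np.abs(G - np.diag(np.diag(G))))                     # Eq. (19)
    mu2 = np.max(np.abs(D_t.T @ (D_t - D_o)))                        # Eq. (20): D_t - D_o = Phi (Psi_t - Psi_o)
    delta = np.max(np.abs(1.0 - gamma * np.sum(D_t ** 2, axis=0)))   # Eq. (21)
    return mu, mu2, delta
```

Note that µ̃2 vanishes exactly when Ψt = Ψo, mirroring the role of the dictionary-mismatch term in the analysis.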

Proposition B.3. Suppose that y k = ΦkΨox∗ with support supp(x∗) = S. For DLISTA iterations given as

$$\mathbf{x}_{t}=\eta_{\theta_{t}}\left(\mathbf{x}_{t-1}+\gamma_{t}(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t})^{\mathsf{T}}(\mathbf{y}^{k}-\mathbf{\Phi}^{k}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right),\tag{22}$$

we have:

  1. If for all t, the tuples (θt, γt, Ψt) satisfy

$$\gamma_{t}\left(\tilde{\mu}\left\|\mathbf{x}_{*}-\mathbf{x}_{t-1}\right\|_{1}+\tilde{\mu}_{2}\left\|\mathbf{x}_{*}\right\|_{1}\right)\leq\theta_{t},\tag{23}$$

  then there is no false positive in any iteration. In other words, for all t, we have supp(xt) ⊆ supp(x∗).

  2. Assuming that the conditions of the previous step hold, we get the following bound on the error:

$$\left\|\mathbf{x}_{t}-\mathbf{x}_{*}\right\|_{1}\leq\left(\delta(\gamma_{t})+\gamma_{t}\tilde{\mu}(|S|-1)\right)\left\|\mathbf{x}_{t-1}-\mathbf{x}_{*}\right\|_{1}+\gamma_{t}\tilde{\mu}_{2}|S|\left\|\mathbf{x}_{*}\right\|_{1}+|S|\theta_{t}.$$

B.2.1 Guidelines From Proposition.

We remark on some of the guidelines we can get from the above result.

  • Thresholds. Similar to the discussion in the previous sections, there exist thresholds such that there is no false positive at each layer. The choice of θt is a function of γt and, through the coherence terms, of Φk and Ψt. Since Φk changes for each sample k, we learn a neural network that yields this parameter as a function of Φk and Ψt.

  • Step size. The step size γt can be chosen to control the error decay. Ideally, we would like the term (δ(γt) + γtµ̃(|S| − 1)) to be strictly smaller than one. In particular, γt directly impacts δ(γt), which is also a function of Φk and Ψt. We can therefore consider γt a function of Φk and Ψt, which hints at the augmentation neural network we introduced to produce γt as a function of those parameters.
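The step-size guideline admits a small worked example: since δ(γ) = max_i |1 − γ‖(ΦΨt)i‖²|, its minimizer over γ > 0 is γ* = 2/(c_min + c_max), where c_min and c_max are the smallest and largest squared column norms of ΦΨt (the two extreme terms are balanced at this γ). The sketch below is illustrative and is not the augmentation network itself; it only shows how the optimal step size depends on Φ and Ψt, which is precisely why A-DLISTA predicts γt from them.

```python
import numpy as np

def column_energies(Phi, Psi):
    # Squared column norms c_i = ||(Phi Psi)_i||_2^2.
    return np.sum((Phi @ Psi) ** 2, axis=0)

def delta_of(Phi, Psi, gamma):
    # delta(gamma) = max_i |1 - gamma * c_i|  (cf. Eq. 21).
    return np.max(np.abs(1.0 - gamma * column_energies(Phi, Psi)))

def best_step_size(Phi, Psi):
    # Minimizer of delta over gamma > 0: gamma* = 2 / (c_min + c_max),
    # obtained by balancing |1 - gamma c_min| = |1 - gamma c_max|.
    c = column_energies(Phi, Psi)
    return 2.0 / (c.min() + c.max())
```

For column-normalized ΦΨt, all c_i = 1 and γ* = 1 with δ(γ*) = 0, recovering the remark above.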

Remarks on Convergence. One might wonder whether convergence is possible given the bound on the error. We sketch a scenario where this can happen. First, note that once we have chosen γt, and given Φk and Ψt, we can select θt using condition (23). Also, if the network gradually learns the ground-truth dictionary at later stages, the term µ̃2 vanishes. We need to choose γt carefully such that the term (δ(γt) + γtµ̃(|S| − 1)) is smaller than one. Similar to the ISTA analysis, we would need to assume bounds on the mutual coherence µ̃ and the column norms of ΦkΨo. Under the standard assumptions sketched above, the error gradually decreases per iteration, and we can reuse the convergence results of ISTA. We emphasize that this is a heuristic argument: there is no guarantee that training yields a model whose parameters follow these guidelines, although we show experimentally that the proposed methods provide the promised improvements.

B.3 Proof Of Proposition B.3

In what follows, we provide the derivations for Proposition B.3.

Convergence proofs of ISTA-type models generally involve two steps. First, one investigates how the support is found and locked in; second, how the error shrinks at each step. We focus on these two steps, which matter most for our architecture design. Our analysis is similar in nature to Chen et al. (2018); Aberdam et al. (2021); however, it differs from Aberdam et al. (2021) in considering unknown dictionaries, and from Chen et al. (2018) in both the considered architecture and the varying sensing matrix. In what follows, we consider the noiseless setting. However, the results can be extended to noisy setups by adding terms containing the noise norm, similar to Chen et al. (2018). We make the following assumptions:

  1. There is a ground-truth (unknown) dictionary Ψo such that s∗ = Ψox∗.

  2. As a consequence, y k = ΦkΨox∗.

  3. We assume that x∗ is sparse with its support contained in S. In other words: xi,∗ = 0 for i /∈ S.

To simplify the notation, we drop the index k, which indicates the varying sensing matrix, from Φk and y k, and use Φ and y for the rest. We break the proof into two lemmas, each proving one part of Proposition B.3.

B.3.1 Proof - Step 1: No False Positive Condition

The following lemma assures that we do not have false positives in support recovery after each iteration of our model. In other words, the model only updates the entries in the support and keeps the entries outside the support at zero.

Lemma B.4. *Suppose that the support of* x∗ *is given as* supp(x∗) = S. *Consider iterations given by*

$$\mathbf{x}_{t}=\eta_{\theta_{t}}\left(\mathbf{x}_{t-1}+\gamma_{t}(\mathbf{\Phi}\mathbf{\Psi}_{t})^{\top}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right)$$

*with* x0 = 0. *If for all* t = 1, 2, . . . *we have*

$$\gamma_{t}\left(\tilde{\mu}\left\|\mathbf{x}_{*}-\mathbf{x}_{t-1}\right\|_{1}+\tilde{\mu}_{2}\left\|\mathbf{x}_{*}\right\|_{1}\right)\leq\theta_{t},\tag{24}$$

*then there will be no false positive, i.e.,* xt,i = 0 *for all* i ∉ S *and all* t.

Proof. We prove this by induction. Since x0 = 0, the induction base is trivial. Suppose that the support of xt−1 is already included in that of x∗, namely supp(xt−1) ⊆ supp(x∗) = S. For i ∈ S c we have

$$x_{t,i}=\eta_{\theta_{t}}\left(\gamma_{t}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right).\tag{25}$$

To avoid false positives, we need to guarantee that for i ∉ S:

$$\eta_{\theta_{t}}\left(\gamma_{t}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right)=0\iff\left|\gamma_{t}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right|\leq\theta_{t},\tag{26}$$

which means that the soft-thresholding function has zero output for these entries. First note that, since y = Φ(Ψox∗),

$$\left|((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}\mathbf{\Phi}(\mathbf{\Psi}_{o}\mathbf{x}_{*}-\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right|\leq\left|\sum_{j\in S}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}\mathbf{\Psi}_{t})_{j}(x_{*,j}-x_{t-1,j})\right|+\left|((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}\mathbf{\Phi}(\mathbf{\Psi}_{o}-\mathbf{\Psi}_{t})\mathbf{x}_{*}\right|.\tag{27}$$

We can bound the first term by:

$$\left|\sum_{j\in S}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}\mathbf{\Psi}_{t})_{j}(x_{*,j}-x_{t-1,j})\right|\leq\sum_{j\in S}\left|((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}\mathbf{\Psi}_{t})_{j}\right|\left|x_{*,j}-x_{t-1,j}\right|\leq\tilde{\mu}\left\|\mathbf{x}_{*}-\mathbf{x}_{t-1}\right\|_{1},\tag{28}$$

where we use the definition of mutual coherence for the upper bound. The last term is bounded by

$$\left|((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}\mathbf{\Phi}(\mathbf{\Psi}_{o}-\mathbf{\Psi}_{t})\mathbf{x}_{*}\right|=\left|\sum_{j\in S}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}(\mathbf{\Psi}_{o}-\mathbf{\Psi}_{t}))_{j}\,x_{*,j}\right|\leq\sum_{j\in S}\left|((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}(\mathbf{\Psi}_{o}-\mathbf{\Psi}_{t}))_{j}\right|\left|x_{*,j}\right|\leq\tilde{\mu}_{2}\left\|\mathbf{x}_{*}\right\|_{1}.\tag{29}$$

Therefore, we get

$$\left|\gamma_{t}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right|\leq\gamma_{t}\left(\tilde{\mu}\left\|\mathbf{x}_{*}-\mathbf{x}_{t-1}\right\|_{1}+\tilde{\mu}_{2}\left\|\mathbf{x}_{*}\right\|_{1}\right).\tag{30}$$

The following choice thus guarantees that there is no false positive:

$$\gamma_{t}\left(\tilde{\mu}\left\|\mathbf{x}_{*}-\mathbf{x}_{t-1}\right\|_{1}+\tilde{\mu}_{2}\left\|\mathbf{x}_{*}\right\|_{1}\right)\leq\theta_{t}.\tag{31}$$

$\square$

B.3.2 Proof - Step 2: Controlling The Recovery Error

The previous lemma provided the conditions such that there is no false positive. We now see under which conditions the model reduces the error inside the support S.

Lemma B.5. Suppose that the threshold parameter θt has been chosen such that there is no false positive after each iteration. We have:

$$\left\|\mathbf{x}_{t}-\mathbf{x}_{*}\right\|_{1}\leq\left(\delta(\gamma_{t})+\gamma_{t}\tilde{\mu}(|S|-1)\right)\left\|\mathbf{x}_{t-1}-\mathbf{x}_{*}\right\|_{1}+\gamma_{t}\tilde{\mu}_{2}|S|\left\|\mathbf{x}_{*}\right\|_{1}+|S|\theta_{t}.$$
Proof. For i ∈ S, we have:

$$|x_{t,i}-x_{*,i}|\leq\left|x_{t-1,i}+\gamma_{t}((\Phi\Psi_{t})_{i})^{\top}(\mathbf{y}-\Phi\Psi_{t}\mathbf{x}_{t-1})-x_{*,i}\right|+\theta_{t}.\tag{32}$$

At iteration $t$, for $i\in S$, we can separate the dictionary mismatch from the rest of the error as follows:

$$x_{t-1,i}+\gamma_{t}((\Phi\Psi_{t})_{i})^{\top}(\mathbf{y}-\Phi\Psi_{t}\mathbf{x}_{t-1})=x_{t-1,i}+\gamma_{t}\left(\sum_{j\in S}((\Phi\Psi_{t})_{i})^{\top}(\Phi\Psi_{t})_{j}(x_{*,j}-x_{t-1,j})+((\Phi\Psi_{t})_{i})^{\top}\Phi(\Psi_{o}\mathbf{x}_{*}-\Psi_{t}\mathbf{x}_{*})\right).$$
We can decompose the first part further as:

$$x_{t-1,i}+\gamma_{t}\sum_{j\in S}((\Phi\Psi_{t})_{i})^{\top}(\Phi\Psi_{t})_{j}(x_{*,j}-x_{t-1,j})=\left(1-\gamma_{t}((\Phi\Psi_{t})_{i})^{\top}(\Phi\Psi_{t})_{i}\right)x_{t-1,i}+\gamma_{t}((\Phi\Psi_{t})_{i})^{\top}(\Phi\Psi_{t})_{i}\,x_{*,i}+\gamma_{t}\sum_{j\in S,\,j\neq i}((\Phi\Psi_{t})_{i})^{\top}(\Phi\Psi_{t})_{j}(x_{*,j}-x_{t-1,j}).$$

Using the triangle inequality for the previous decomposition, we get:

$$\left|x_{t-1,i}+\gamma_{t}((\Phi\Psi_{t})_{i})^{\top}(\mathbf{y}-\Phi\Psi_{t}\mathbf{x}_{t-1})-x_{*,i}\right|\leq\left|1-\gamma_{t}((\Phi\Psi_{t})_{i})^{\top}(\Phi\Psi_{t})_{i}\right|\left|x_{t-1,i}-x_{*,i}\right|+\gamma_{t}\sum_{j\in S,\,j\neq i}\left|((\Phi\Psi_{t})_{i})^{\top}(\Phi\Psi_{t})_{j}\right|\left|x_{*,j}-x_{t-1,j}\right|+\gamma_{t}\left|((\Phi\Psi_{t})_{i})^{\top}\Phi(\Psi_{o}\mathbf{x}_{*}-\Psi_{t}\mathbf{x}_{*})\right|$$

$$\leq\delta(\gamma_{t})\left|x_{t-1,i}-x_{*,i}\right|+\gamma_{t}\sum_{j\in S,\,j\neq i}\tilde{\mu}\left|x_{*,j}-x_{t-1,j}\right|+\gamma_{t}\tilde{\mu}_{2}\left\|\mathbf{x}_{*}\right\|_{1}.$$

It suffices to sum the errors over $i\in S$ and combine the previous inequalities to get:

$$\left\|\mathbf{x}_{S,t}-\mathbf{x}_{*}\right\|_{1}=\sum_{i\in S}\left|x_{t,i}-x_{*,i}\right|\leq\left(\delta(\gamma_{t})+\gamma_{t}\tilde{\mu}(|S|-1)\right)\left\|\mathbf{x}_{S,t-1}-\mathbf{x}_{*}\right\|_{1}+\gamma_{t}\tilde{\mu}_{2}|S|\left\|\mathbf{x}_{*}\right\|_{1}+|S|\theta_{t}.$$

Since we assumed there is no false positive, we get the final result:

$$\left\|\mathbf{x}_{t}-\mathbf{x}_{*}\right\|_{1}=\sum_{i\in S}\left|x_{t,i}-x_{*,i}\right|\leq\left(\delta(\gamma_{t})+\gamma_{t}\tilde{\mu}(|S|-1)\right)\left\|\mathbf{x}_{t-1}-\mathbf{x}_{*}\right\|_{1}+\gamma_{t}\tilde{\mu}_{2}|S|\left\|\mathbf{x}_{*}\right\|_{1}+|S|\theta_{t}.\qquad\square$$

C Implementation Details

In this section we report details concerning the architecture of A-DLISTA and VLISTA.

C.1 A-DLISTA (Augmentation Network)

As previously stated in the main paper (subsection 4.1), A-DLISTA consists of two architectures: the DLISTA model (blue blocks in Figure 1) representing the unfolded version of the ISTA algorithm with parametrized Ψ, and the augmentation (or adaptation) network (red blocks in Figure 1). At a given reconstruction layer t, the augmentation model takes the measurement matrix Φi and the dictionary Ψt as input and generates the parameters {γt, θt} for the current iteration. The architecture for the augmentation network is illustrated in Figure 5, which shows a feature extraction section and two output branches, one for each generated parameter. To ensure that the estimated {γt, θt} parameters are positive, each branch is equipped with a softplus function. As noted in the main paper, the weights of the augmentation model are shared across all A-DLISTA layers.
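The role of the generated parameters can be sketched in a few lines of NumPy. Note that the `augmentation` function below is a hypothetical stand-in (a simple step-size heuristic), not the convolutional network of Figure 5; it only illustrates where the generated {γt, θt} enter the soft-thresholding iteration:

```python
import numpy as np

def soft_threshold(v, theta):
    # eta_theta(v) = sign(v) * max(|v| - theta, 0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def augmentation(phi, psi_t):
    """Hypothetical stand-in for the augmentation network: maps (Phi_i, Psi_t)
    to positive scalars (gamma_t, theta_t). The actual model is a small conv
    network with two softplus-activated heads (Figure 5)."""
    a = phi @ psi_t
    gamma_t = 1.0 / np.linalg.norm(a, 2) ** 2  # valid step size for A = Phi Psi_t
    theta_t = 0.1 * gamma_t                    # threshold; learned in the real model
    return gamma_t, theta_t

def adlista_layer(x_prev, y, phi, psi_t):
    """One unfolded ISTA step with a layer-wise dictionary Psi_t and
    adaptively generated step size and threshold."""
    gamma_t, theta_t = augmentation(phi, psi_t)
    a = phi @ psi_t
    v = x_prev + gamma_t * a.T @ (y - a @ x_prev)  # gradient step on the data fit
    return soft_threshold(v, theta_t)              # sparsity-inducing proximal step
```

Stacking T such layers, each with its own learned Ψt, gives the reconstruction path of A-DLISTA; in the paper, the shared augmentation network, rather than the heuristic above, produces {γt, θt}.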

19_image_0.png

Figure 5: Augmentation model's architecture for A-DLISTA.

20_image_0.png

20_image_1.png

20_image_2.png

Figure 6: Left: prior network architecture. Right: posterior network architecture. For the posterior model, we show the output shape from each of the three input heads in the figure. Such a structure is necessary since the posterior model accepts three quantities as input: the observations, the sensing matrix, and the reconstruction from the previous layer. Different shapes characterize these quantities. The letter "B" indicates the batch size.

C.2 VLISTA

As described in subsection 4.2 of the main paper, VLISTA comprises three components: the likelihood, prior, and posterior models.

C.2.1 VLISTA - Likelihood Model

The likelihood model (subsection 4.2) represents a Gaussian distribution with a mean value parametrized using the A-DLISTA model. There is, however, a fundamental difference between the likelihood model and the A-DLISTA architecture presented in subsection 4.1. Unlike the latter, the likelihood model of VLISTA does not learn the dictionary via backpropagation. Instead, it uses the dictionary sampled from the posterior distribution.

C.2.2 VLISTA - Posterior & Prior Models

We report in Figure 6 the prior (left image) and the posterior (right image) architectures. We implement both models using an encoder-decoder scheme based on convolutional layers. The prior network comprises two convolutional layers followed by two separate branches that generate the mean and variance of the Gaussian distribution (subsection 4.2). We use the dictionary sampled at the previous iteration as input for the prior. In contrast to the prior, the posterior network accepts three different quantities as input: the sensing matrix, the observations, and the reconstructed sparse vector from the previous iteration. To process the three inputs together, the posterior comprises three separate input heads followed by an aggregation step. Subsequently, two branches generate the mean and the standard deviation of the Gaussian distribution over the dictionary (subsection 4.2).
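Both networks output the mean and (log-)variance of a Gaussian over the dictionary, from which VLISTA draws samples via the reparameterization trick. A minimal NumPy sketch, assuming a diagonal covariance:

```python
import numpy as np

def sample_dictionary(mu, log_var, rng):
    """Reparameterized draw Psi ~ N(mu, diag(exp(log_var))) from the mean and
    log-variance heads of the prior or posterior network. Writing the sample
    as mu + sigma * eps keeps it differentiable w.r.t. the network outputs."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps
```

With a nearly deterministic head (very negative log-variance) the draw collapses to the mean.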

We offer the reader a unified overview of our variational model in Figure 7.

21_image_0.png

Figure 7: VLISTA iterations' schematic view.

D Training Details

This section provides details regarding the training of the A-DLISTA and VLISTA models. For A-DLISTA, we train the reconstruction and augmentation models jointly using the Adam optimizer. We set the initial learning rate to 1.e−2 and 1.e−3 for the reconstruction and augmentation networks, respectively, and drop its value by a factor of 10 every time the loss stops improving. Additionally, we set the weight decay to 5.e−4 and the batch size to 128. We use the Mean Squared Error (MSE) as the objective function for all datasets. Similarly, we train all the components of VLISTA using the Adam optimizer. We set the learning rate to 1.e−3 and drop its value by a factor of 10 every time the loss stops improving. Regarding the objective function, we maximize the ELBO and set the weight for the KL divergence to 1.e−3. We report the ELBO derivation in Equation 33.

$$\log p(\mathbf{x}_{1:T}=\mathbf{x}^{i}_{gt}\mid\mathbf{y}^{i},\Phi_{i})=\log\int p(\mathbf{x}_{1:T}=\mathbf{x}^{i}_{gt}\mid\Psi_{1:T},\mathbf{y}^{i},\Phi_{i})\,p(\Psi_{1:T})\,d\Psi_{1:T}\tag{33}$$

$$=\log\int p(\mathbf{x}_{1:T}=\mathbf{x}^{i}_{gt}\mid\Psi_{1:T},\mathbf{y}^{i},\Phi_{i})\,p(\Psi_{1:T})\,\frac{q(\Psi_{1:T}\mid\mathbf{x}_{1:T},\mathbf{y}^{i},\Phi_{i})}{q(\Psi_{1:T}\mid\mathbf{x}_{1:T},\mathbf{y}^{i},\Phi_{i})}\,d\Psi_{1:T}$$

$$\geq\int q(\Psi_{1:T}\mid\mathbf{x}_{1:T},\mathbf{y}^{i},\Phi_{i})\,\log\frac{p(\mathbf{x}_{1:T}=\mathbf{x}^{i}_{gt}\mid\Psi_{1:T},\mathbf{y}^{i},\Phi_{i})\,p(\Psi_{1:T})}{q(\Psi_{1:T}\mid\mathbf{x}_{1:T},\mathbf{y}^{i},\Phi_{i})}\,d\Psi_{1:T}$$

$$=\int q(\Psi_{1:T}\mid\mathbf{x}_{1:T},\mathbf{y}^{i},\Phi_{i})\,\log p(\mathbf{x}_{1:T}=\mathbf{x}^{i}_{gt}\mid\Psi_{1:T},\mathbf{y}^{i},\Phi_{i})\,d\Psi_{1:T}+\int q(\Psi_{1:T}\mid\mathbf{x}_{1:T},\mathbf{y}^{i},\Phi_{i})\,\log\frac{p(\Psi_{1:T})}{q(\Psi_{1:T}\mid\mathbf{x}_{1:T},\mathbf{y}^{i},\Phi_{i})}\,d\Psi_{1:T}$$

$$=\sum_{t=1}^{T}\mathbb{E}_{\Psi_{1:t}\sim q(\Psi_{1:t}\mid\mathbf{x}_{0:t-1},\mathbf{y}^{i},\Phi_{i})}\left[\log p(\mathbf{x}_{t}=\mathbf{x}^{i}_{gt}\mid\Psi_{1:t},\mathbf{y}^{i},\Phi_{i})\right]-\sum_{t=2}^{T}\mathbb{E}_{\Psi_{1:t-1}\sim q(\Psi_{1:t-1}\mid\mathbf{x}_{t-1},\mathbf{y}^{i},\Phi_{i})}\left[D_{KL}\left(q(\Psi_{t}\mid\mathbf{x}_{t-1},\mathbf{y}^{i},\Phi_{i})\,\|\,p(\Psi_{t}\mid\Psi_{t-1})\right)\right]-D_{KL}\left(q(\Psi_{1}\mid\mathbf{x}_{0})\,\|\,p(\Psi_{1})\right)$$

Note that in Equation 33, we consider the same ground truth, x i gt, for each iteration t ∈ [1, T].
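The bound above translates into a per-sample training loss as follows. This is a minimal sketch assuming diagonal Gaussian prior and posterior over the dictionary; the function names are illustrative:

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL( N(mu_q, e^{logvar_q}) || N(mu_p, e^{logvar_p}) ) for
    diagonal Gaussians, summed over all dictionary entries."""
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

def neg_elbo(recon_log_liks, kl_terms, kl_weight=1e-3):
    """Training loss: the negative of the bound in Equation 33, with every KL
    term down-weighted by 1.e-3 as described in the training setup above."""
    return -(sum(recon_log_liks) - kl_weight * sum(kl_terms))
```

The `recon_log_liks` list holds the per-layer Gaussian log-likelihood terms, while `kl_terms` holds the per-layer posterior-to-prior KL divergences.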

E Computational Complexity

This section provides a complexity analysis of the models utilized in our research. Table 3 displays the number of trainable parameters and average inference time for each model, while Table 4 showcases the MACs count.

To better understand the quantities appearing in Table 4, we have summarized their meaning in Table 5.

The average inference time was estimated by testing over 1000 batches containing 32 data points using a GeForce RTX 2080 Ti.

Trainable Parameters and Average Inference Time. To compute the values in Table 3, we considered the architectures used in the main corpus of the paper, e.g., the same number of layers. From Table 3, it is worth noting that although ISTA appears to have the longest inference time, that can be attributed to the cost of computing the spectral norm of the matrix A = ΦΨ. Such an operation can consume up to 98% of the total inference time. Interestingly, neither NALISTA nor A-DLISTA requires the computation of the spectral norm, as they dynamically generate the step size; LISTA does not require it at all. NALISTA and A-DLISTA have comparable inference times due to the similarity of their operations. LISTA is the fastest model, whilst VLISTA has a higher average inference time given the use of the posterior model and the sampling procedure. Interestingly, LISTA and A-DLISTA have a comparable number of trainable parameters, while NALISTA has significantly fewer. However, it is essential to emphasize that the number of trainable parameters depends on the problem setup, such as the number of measurements and atoms.
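For reference, the spectral-norm computation that dominates ISTA's inference time can be sketched via power iteration. This is illustrative NumPy code, not the timed implementation:

```python
import numpy as np

def ista_step_size(phi, psi, n_iter=200, seed=0):
    """Classical ISTA needs a step size gamma <= 1 / ||Phi Psi||_2^2, so the
    spectral norm of A = Phi Psi must be recomputed whenever Phi changes."""
    a = phi @ psi
    v = np.random.default_rng(seed).normal(size=a.shape[1])
    for _ in range(n_iter):
        v = a.T @ (a @ v)            # one power-iteration step on A^T A
        v /= np.linalg.norm(v)
    sigma_sq = v @ (a.T @ (a @ v))   # Rayleigh quotient ~ largest eigenvalue
    return 1.0 / sigma_sq
```

Models that generate the step size dynamically (NALISTA, A-DLISTA) skip this loop entirely, which explains the gap in Table 3.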

We use the same experimental setup described in the main paper, which includes 500 measurements, 1024 atoms, and three layers for each ML model. As outlined in the main paper, the likelihood model for VLISTA is similar in architecture to A-DLISTA, as reflected in the MACs count shown in Table 4. However, the likelihood model of VLISTA has a different number of trainable parameters compared to A-DLISTA. Such a difference is due to VLISTA sampling its dictionary from the posterior rather than training it as A-DLISTA does.

Despite this difference, the time required for the likelihood model (shown in Table 3) is comparable to that of A-DLISTA. It's important to note that the inference time for the likelihood is reported "per iteration", so we must multiply it by the number of layers A-DLISTA uses to make a fair comparison.

MACs count. Our attention now turns to the MACs count for the A-DLISTA augmentation network. As shown in Table 4, the count is upper bounded by $HWK^2 + BP$. Note that the height and width of the input are halved after each convolutional layer, while the input and output channels are always one, and the kernel size equals three for each layer (see details in Figure 5).

| Model | Parameters (M) | Avg. Inference Time (ms), meas. = 10 | Avg. Inference Time (ms), meas. = 500 |
|---|---|---|---|
| ISTA | 0.00 | 54.0±0.6 (norm.: 41.5±0.6) | (1.55±0.02)e3 (norm.: (1.53±0.02)e3) |
| NALISTA¹ | 3.33e−1 | 5.8±0.2 | 7.0±0.3 |
| LISTA | 3.15 | 1.1±0.1 | 1.5±0.3 |
| A-DLISTA² | 3.15 (Aug. NN: 3.11e−3) | 8.2±0.3 | 9.1±0.5 |
| VLISTA† | 3.13 | 19.7±0.4 | 21.3±0.4 |
| VLISTA - Prior Model | 1.10 | | |
| VLISTA - Posterior Model | 2.08 | 3.6±0.2‡ | 4.1±0.2 |
| VLISTA - Likelihood Model | 1.05 | 2.7±0.2 | 3.05±0.2‡ |

¹ LSTM hidden size equal to 256; ² each layer learns its own dictionary; † full model at inference; ‡ reported per iteration.
Table 3: Number of trainable parameters (Millions) and average inference time (milliseconds) for different models. Concerning the inference time, we report the average value with its error considering the 10 and 500 measurements setups.

To obtain the upper bound for the MACs count, we set $H = \max_i(H_i) = H_{input}/2$ and $W = \max_i(W_i) = W_{input}/2$, where $H_i$ and $W_i$ are the height and width at the output of the $i$-th convolutional layer, respectively. With that in mind, we can upper bound the MACs count for the convolutional part of the network by $HWK^2$. The convolutional backbone is followed by two linear layers (see details in Figure 5). The first linear layer takes a vector of size $B = H_{input}/16 \times W_{input}/16$ as input and outputs a vector of length $P = 25$. Finally, this vector is fed into two heads, each generating a scalar. Therefore, the overall upper bound for the MACs count for the augmentation network is $O(HWK^2 + BP + P) = O(HWK^2 + P(B + 1)) = O(HWK^2 + BP)$, with the factor $+1$ dropped.
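As a sanity check, the bound can be instantiated numerically. The helper below is illustrative and the input size used is hypothetical:

```python
def aug_net_macs_bound(h_in, w_in, k=3, p=25):
    """Numeric instance of the O(HW K^2 + BP) bound derived above, with
    H = h_in / 2 and W = w_in / 2 (spatial dims halve per conv layer) and
    B = (h_in / 16) * (w_in / 16) the flattened size entering the linear layer."""
    h, w = h_in // 2, w_in // 2
    b = (h_in // 16) * (w_in // 16)
    return h * w * k ** 2 + b * p
```

For a hypothetical 32x32 input this gives 16\*16\*9 + 4\*25 = 2404 multiply-accumulates for the dominant terms.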

Similar reasoning applies to the prior and posterior models of VLISTA, where we estimate the MACs count by multiplying the MACs for the most expensive layers by the total number of layers of the same type.

F Additional Results

In this section, we report additional experimental results. In subsection F.1, we report results for a fixed measurement setup, i.e., Φi → Φ, while in subsection F.2 we show reconstructed images for different classical baselines.

Table 4: MACs count. Concerning VLISTA, we report the MACs count for each of its component: the Prior, the Posterior, and the Likelihood models.

| Model | MACs |
|---|---|
| ISTA | $O(DM(2L+N)+D^{2}M)$ |
| NALISTA | $O(DM(2L+N)+[4h(d+h)+h^{2}]^{\dagger})$ |
| LISTA | $O(LD(N+D)+LMN)$ |
| A-DLISTA | $O(LDMN+[HWK^{2}+BP]^{\dagger})$ |
| VLISTA - Prior Model | $O(3LH_{pr}W_{pr}C^{i}_{pr}(K^{2}_{pr}C^{o}_{pr}+2T^{2}_{pr}))$ |
| VLISTA - Posterior Model | $O(L(D+M+DC^{i}_{po}C^{o}_{po}+L^{i}_{po}S+10H_{po}W_{po}T^{2}_{po}))$ |
| VLISTA - Likelihood Model | $O(LDMN+[HWK^{2}+BP]^{\dagger})$ |

† Contribution from the augmentation network.
| Symbol | Description |
|---|---|
| $M$ | Number of measurements |
| $N$ | Dimensionality of the dictionary's atoms |
| $D$ | Number of atoms |
| $L$ | Number of layers |
| $h$; $d$ | Hidden and input size for the LSTM |
| $H$; $W$ | Height and width of the A-DLISTA augmentation network's input |
| $K$ | Kernel size for the conv. layers of the A-DLISTA augmentation network |
| $B$; $P$ | Input and output size of the linear layer of the A-DLISTA augmentation network |
| $C^{i}_{po}$; $C^{o}_{po}$ | Input and output channels for the "Φ-input" head of the posterior model |
| $L^{i}_{po}$; $S$ | Posterior model bottleneck input and output sizes |
| $H_{po}$; $W_{po}$ | Height and width of the posterior model's transposed convolutions input |
| $T_{po}$ | Kernel size of the posterior model's transposed convolutions |
| $H_{pr}$; $W_{pr}$ | Input and output sizes of convolutional (and transposed conv.) layers of the prior model |
| $C^{i}_{pr}$; $C^{o}_{pr}$ | Input and output channels of convolutional (and transposed conv.) layers of the prior model |
| $K_{pr}$; $T_{pr}$ | Kernel size for convolutions and transposed convolutions of the prior model |

Table 5: Description of quantities appearing in Table 4.

F.1 Fixed Sensing Matrix

We provide in Table 6 and Table 7 results for a fixed measurement scenario, i.e., using a single sensing matrix Φ. Comparing these results to Table 1 and Table 2, we notice the following. To begin with, LISTA and A-DLISTA perform better than in the setup with a varying sensing matrix (see section 5). We should expect such behaviour, given that fixing the Φ matrix simplifies the problem.

Additionally, as mentioned in the main paper, ALISTA and NALISTA exhibit high performance (superior to the other models when 300 and 500 measurements are considered). Such a result is expected, given that these two models were designed to solve inverse problems in a fixed measurement scenario. Furthermore, the results in Table 6 and Table 7 support our hypothesis that the convergence issues we observe in the varying sensing matrix setup are likely related to the "inner" optimization that ALISTA and NALISTA require to evaluate the "W" matrix.

Table 6: MNIST SSIM (the higher the better) for a different number of measurements with fixed sensing matrix, i.e., Φi → Φ. We highlight in bold the best performance. Note that whenever there is agreement within the error for the best performances, we highlight all of them.

SSIM ↑ (all values ×e−1); columns indicate the number of measurements.

| Model | 1 | 10 | 100 | 300 | 500 |
|---|---|---|---|---|---|
| LISTA | **1.34±0.02** | 3.12±0.02 | **5.98±0.01** | 6.74±0.01 | 6.96±0.01 |
| ALISTA | 0.84±0.01 | 0.94±0.01 | 1.70±0.01 | 5.71±0.01 | 6.65±0.01 |
| NALISTA | 0.91±0.01 | 1.12±0.01 | 2.46±0.01 | **7.03±0.01** | **8.22±0.02** |
| A-DLISTA (our) | 1.21±0.02 | **3.58±0.01** | 5.66±0.01 | 6.47±0.01 | 6.84±0.01 |
SSIM ↑ (all values ×e−1); columns indicate the number of measurements.

| Model | 1 | 10 | 100 | 300 | 500 |
|---|---|---|---|---|---|
| LISTA | 2.52±0.01 | **3.19±0.01** | **4.48±0.01** | **6.29±0.01** | 6.74±0.01 |
| ALISTA | 0.21±0.03 | 0.54±0.02 | 0.88±0.01 | 3.54±0.01 | 5.52±0.01 |
| NALISTA | 1.32±0.02 | 1.32±0.02 | 1.06±0.02 | 4.59±0.01 | **6.88±0.01** |
| A-DLISTA (our) | **2.91±0.02** | 3.07±0.01 | 4.26±0.01 | 5.89±0.01 | 6.56±0.01 |

Table 7: CIFAR10 SSIM (the higher the better) for a different number of measurements with fixed sensing matrix, i.e., Φi → Φ. Note that whenever there is agreement within the error for the best performances, we highlight all of them.

F.2 Classical Baselines

We report additional results concerning classical dictionary learning methods tested on the MNIST and CIFAR10 datasets. It is worth noting that classical baselines can reconstruct images with high quality if one assumes neither computational nor time constraints (although this would be an unrealistic scenario for real-world applications). Therefore, while tuning hyperparameters, we consider numbers of iterations up to several thousand.

Figure 8 to Figure 13 showcase examples of reconstructed images for different baselines.

25_image_0.png

25_image_1.png

Figure 8: Example of reconstructed MNIST images using the canonical basis. Top row: reconstructed images. Bottom row: ground truth images. To reconstruct images we use 500 measurements and the number of layers optimized to get the best reconstruction possible.

Figure 9: Example of reconstructed MNIST images using the wavelet basis. Top row: reconstructed images.

Bottom row: ground truth images. To reconstruct images we use 500 measurements and the number of layers optimized to get the best reconstruction possible.

26_image_0.png

26_image_1.png

Figure 10: Example of reconstructed MNIST images using SPCA. Top row: reconstructed images. Bottom row: ground truth images. To reconstruct images we use 500 measurements and the number of layers optimized to get the best reconstruction possible.

Figure 11: Example of reconstructed CIFAR10 images using the canonical basis. Top row: reconstructed images. Bottom row: ground truth images. To reconstruct images we use 500 measurements and the number of layers optimized to get the best reconstruction possible.

27_image_0.png

27_image_1.png

Figure 12: Example of reconstructed CIFAR10 images using the wavelet basis. Top row: reconstructed images. Bottom row: ground truth images. To reconstruct images we use 500 measurements and the number of layers optimized to get the best reconstruction possible.

Figure 13: Example of reconstructed CIFAR10 images using SPCA. Top row: reconstructed images. Bottom row: ground truth images. To reconstruct images we use 500 measurements and the number of layers optimized to get the best reconstruction possible.