RedTachyon committed
Commit ddf2eb5 • Parent(s): b161cf5

Upload folder using huggingface_hub
- AQk0UsituG/10_image_0.png +3 -0
- AQk0UsituG/19_image_0.png +3 -0
- AQk0UsituG/20_image_0.png +3 -0
- AQk0UsituG/20_image_1.png +3 -0
- AQk0UsituG/20_image_2.png +3 -0
- AQk0UsituG/21_image_0.png +3 -0
- AQk0UsituG/25_image_0.png +3 -0
- AQk0UsituG/25_image_1.png +3 -0
- AQk0UsituG/26_image_0.png +3 -0
- AQk0UsituG/26_image_1.png +3 -0
- AQk0UsituG/27_image_0.png +3 -0
- AQk0UsituG/27_image_1.png +3 -0
- AQk0UsituG/4_image_0.png +3 -0
- AQk0UsituG/7_image_0.png +3 -0
- AQk0UsituG/AQk0UsituG.md +875 -0
- AQk0UsituG/AQk0UsituG_meta.json +25 -0
AQk0UsituG/AQk0UsituG.md
ADDED
@@ -0,0 +1,875 @@
# Variational Learning ISTA

Fabio Valerio Massoli *fmassoli@qti.qualcomm.com* Qualcomm AI Research∗

Christos Louizos *clouizos@qti.qualcomm.com* Qualcomm AI Research∗

Arash Behboodi *abehboodi@qti.qualcomm.com* Qualcomm AI Research∗

∗Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.

Reviewed on OpenReview: *https://openreview.net/forum?id=AQk0UsituG*

## Abstract

Compressed sensing combines the power of convex optimization techniques with a sparsity-inducing prior on the signal space to solve an underdetermined system of equations. For many problems, the sparsifying dictionary is not directly given, nor can its existence be assumed. Besides, the sensing matrix can change across different scenarios. Addressing these issues requires solving a sparse representation learning problem, namely dictionary learning, taking into account the epistemic uncertainty of the learned dictionaries and, finally, jointly learning sparse representations and reconstructions under varying sensing matrix conditions. We address both concerns by proposing a variant of the LISTA architecture. First, we introduce Augmented Dictionary Learning ISTA (A-DLISTA), which incorporates an augmentation module to adapt parameters to the current measurement setup. Then, we propose to learn a distribution over dictionaries via a variational approach, dubbed Variational Learning ISTA (VLISTA). VLISTA exploits A-DLISTA as the likelihood model and approximates a posterior distribution over the dictionaries as part of an unfolded LISTA-based recovery algorithm. As a result, VLISTA provides a probabilistic way to jointly learn the dictionary distribution and the reconstruction algorithm with varying sensing matrices. We provide theoretical and experimental support for our architecture and show that our model learns calibrated uncertainties.
## 1 Introduction

By imposing a prior on the signal structure, compressed sensing solves underdetermined inverse problems. Canonical examples of signal structure and sensing medium are sparsity and linear inverse problems. Compressed sensing aims at reconstructing an unknown signal of interest, s ∈ R^n, from a set of linear measurements, y ∈ R^m, acquired by means of a linear transformation, Φ ∈ R^{m×n}, where *m < n*. Due to the underdetermined nature of the problem, s is typically assumed to be sparse in a given basis. Hence, s = Ψx, where Ψ ∈ R^{n×b} is a matrix whose columns represent the sparsifying basis vectors, and x ∈ R^b is the sparse representation of s. Therefore, given noiseless observations y = Φs of an unknown signal s = Ψx, we seek to solve the LASSO problem:

$$\operatorname*{argmin}_{\mathbf{x}}\|\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}\mathbf{x}\|_{2}^{2}+\rho\|\mathbf{x}\|_{1}\tag{1}$$

where ρ is a constant scalar controlling the sparsifying penalty. Iterative algorithms, such as the Iterative Soft-Thresholding Algorithm (ISTA) (Daubechies et al., 2004), represent a popular approach to solving such problems. A number of studies have been conducted to improve compressed sensing solvers. A typical approach involves unfolding iterative algorithms as layers of neural networks and learning parameters end-to-end (Gregor & LeCun, 2010). Such ML algorithms are typically trained by minimizing the reconstruction objective:

$${\mathcal{L}}(\mathbf{x},{\hat{\mathbf{x}}}_{T})=\mathbb{E}_{\mathbf{x}\sim{\mathcal{D}}}\left[\|\mathbf{x}-{\hat{\mathbf{x}}}_{T}\|_{2}^{2}\right]\tag{2}$$

where the expected value is taken over data sampled from D, and the subscript "T" refers to the last step, or layer, of the unfolded model. Variable sensing matrices and unknown sparsifying dictionaries are some of the main challenges of data-driven approaches. By learning a dictionary and including it in optimization iterations, the work in Aberdam et al. (2021); Schnoor et al. (2022) aims to address these issues. However, data samples might not have exact sparse representations, so no ground truth dictionary is available. The issue can be more severe for heterogeneous datasets where the dictionary choice might vary from one sample to another. A real-world example would be the channel estimation problem in a Multi-Input Multi-Output (MIMO) mmWave wireless communication system (Rodríguez-Fernández et al., 2018). Such a problem can be cast as an inverse problem of the form y = ΦΨx and solved using compressive sensing techniques. The sensing matrix, Φ, represents the so-called *beamforming matrix* while the dictionary, Ψ, represents the sparsifying basis for the wireless channel itself. Typically, Φ changes from one set of measurements to the next, and the channel model might require different basis vectors across time. Adaptive acquisition is another example application: in MRI image reconstruction, the acquisition step can be adaptive. Here, the sensing matrix is sampled from a known distribution to reconstruct the signal. Therefore, given the adaptive nature of the process, each data sample is characterized by a different Φ (Bakker et al., 2020; Yin et al., 2021).

**Our Contribution** A principled approach to this problem would be to leverage a Bayesian framework and define a distribution over the dictionaries with proper uncertainty quantification. We follow two steps to accomplish this goal. First, we introduce Augmented Dictionary Learning ISTA (A-DLISTA), an augmented version of a Learning Iterative Soft-Thresholding Algorithm (LISTA)-like model, capable of adapting some of its parameters to the current measurement setup. We theoretically motivate its design and empirically prove its advantages compared to other non-adaptive LISTA-like models in a non-static measurement scenario, i.e., considering varying sensing matrices. Finally, to learn a distribution over dictionaries, we introduce Variational Learning ISTA (VLISTA), a variational formulation that leverages A-DLISTA as the likelihood model. VLISTA refines the dictionary iteratively after each iteration based on the outcome of the previous layer. Intuitively, our model can be understood as a form of a recurrent variational autoencoder, e.g., Chung et al. (2015), where at each iteration of the algorithm we have an approximate posterior distribution over the dictionaries conditioned on the outcome of the previous iteration. Moreover, VLISTA provides uncertainty estimation to detect Out-Of-Distribution (OOD) samples. We train A-DLISTA using the same objective as in Equation 2, while for VLISTA we maximize the ELBO (Equation 15). We refer the reader to Appendix D for the detailed derivation of the ELBO. Behrens et al. (2021) proposed an augmented version of LISTA, termed Neurally Augmented ALISTA (NALISTA). However, there are key differences with A-DLISTA. In contrast to NALISTA, our model adapts some of its parameters to the current sensing matrix and learned dictionary. Hypothetically, NALISTA could handle varying sensing matrices. However, that comes at the price of solving, for each datum, the inner optimization step to evaluate the W matrix. Finally, while NALISTA uses an LSTM as augmentation network, A-DLISTA employs a convolutional neural network (shared across all layers). Such a difference reflects the type of dependencies between layers and input data that the networks try to model. We report in Appendix B and Appendix C detailed discussions about the theoretical motivation and architectural design for A-DLISTA.

Our work's main contributions can be summarized as follows:

- We design an augmented version of a LISTA-like type of model, dubbed A-DLISTA, that can handle non-static measurement setups, i.e. per-sample sensing matrices, and adapt parameters to the current data instance.

- We propose VLISTA, which learns a distribution over sparsifying dictionaries. The model can be interpreted as a Bayesian LISTA model that leverages A-DLISTA as the likelihood model.

- VLISTA adapts the dictionary to optimization dynamics and therefore can be interpreted as a hierarchical representation learning approach, where the dictionary atoms gradually permit more refined signal recovery.

- The dictionary distributions can be used successfully for out-of-distribution sample detection.

The remaining part of the paper is organized as follows. In section 2 we briefly report related works relevant to the current research. In section 3 and section 4 we introduce some background notions and the details of our model formulations, respectively. Datasets, baselines, and experimental results are described in section 5. Finally, we draw our conclusion in section 6.
## 2 Related Works

In compressed sensing, recovery algorithms have been extensively analyzed theoretically and numerically (Foucart & Rauhut, 2013). One of the most prominent approaches is using iterative algorithms, such as ISTA (Daubechies et al., 2004), Approximate Message Passing (AMP) (Donoho et al., 2009), Orthogonal Matching Pursuit (OMP) (Pati et al., 1993; Davis et al., 1994), and the Iterative Hard-Thresholding Algorithm (IHTA) (Blumensath & Davies, 2009). These algorithms have associated hyperparameters, including the number of iterations and the soft threshold, which can be adjusted to better balance performance and complexity. By unfolding iterative algorithms as layers of neural networks, these parameters can be learned in an end-to-end fashion from a dataset; see, for instance, the variants in Zhang & Ghanem (2018); Metzler et al. (2017); Yang et al. (2016); Borgerding et al. (2017); Sprechmann et al. (2015). In previous studies by Zhou et al. (2009; 2012), a non-parametric Bayesian method for dictionary learning was presented. The authors focused on a fully Bayesian joint compressed sensing inversion and dictionary learning, where the dictionary atoms were drawn and fixed beforehand. Bayesian compressive sensing (BCS) (Ji et al., 2008) uses relevance vector machines (RVMs) (Tipping, 2001) and a hierarchical prior to model distributions of each entry. This line of work quantifies the uncertainty of recovered entries while assuming a fixed dictionary. Our current work differs by accounting for uncertainty in the unknown dictionary by defining a distribution over it. Learning ISTA was initially introduced by Gregor & LeCun (2010). Since then, many works have followed, including those by Behrens et al. (2021); Liu et al. (2019); Chen et al. (2021); Wu et al. (2020). These subsequent works provide guidelines for improving LISTA, for example, in convergence, parameter efficiency, step size and threshold adaptation, and overshooting. However, they assume fixed and known sparsifying dictionaries and sensing matrices. Research by Aberdam et al. (2021); Behboodi et al. (2022); Schnoor et al. (2022) has explored ways to relax these assumptions, including developing models that can handle varying sensing matrices and learn dictionaries. The authors in Schnoor et al. (2022); Behboodi et al. (2022) provide an architecture that can both incorporate varying sensing matrices and learn dictionaries. However, their focus is on the theoretical analysis of the model. Furthermore, there are theoretical studies on the convergence and generalization of unfolded networks, see for example: Giryes et al. (2018); Pu et al. (2022); Aberdam et al. (2021); Chen et al. (2018); Behboodi et al. (2022); Schnoor et al. (2022). Our paper builds on these ideas by modelling a distribution over dictionaries and accounting for epistemic uncertainty. Previous studies have explored theoretical aspects of unfolded networks, such as convergence and generalization, and we contribute to this body of research by considering the impact of varying sensing matrices and dictionaries.

The framework of variational autoencoders (VAEs) enables the learning of a generative model through latent variables (Kingma & Welling, 2013; Rezende et al., 2014). With data-sample-specific dictionaries, our proposed model is reminiscent of extensions of VAEs to the recurrent setting (Chung et al., 2015; 2016), which assume a sequential structure in the data and impose temporal correlations between the latent variables. Additionally, there are connections and similarities to Markov state-space models, such as the ones described in Krishnan et al. (2017). By using global dictionaries in VLISTA, the model becomes a variational Bayesian Recurrent Neural Network. Variational Bayesian neural networks were first introduced in Blundell et al. (2015), with independent priors and variational posteriors for each layer. This work has been extended to recurrent settings in Fortunato et al. (2019). The main difference between these works and our setting is the prior and variational posterior: at each step, the prior and variational posterior are conditioned on previous steps instead of being fixed across steps.
## 3 Background

## 3.1 Sparse Linear Inverse Problems

We consider linear inverse problems of the form y = Φs, where we have access to a set of linear measurements y ∈ R^m of an unknown signal s ∈ R^n, acquired through the forward operator Φ ∈ R^{m×n}. Typically, in the compressed sensing literature, Φ is called the sensing, or measurement, matrix, and it represents an underdetermined system of equations for *m < n*. The problem of reconstructing s from (y, Φ) is ill-posed due to the shape of the forward operator. To uniquely solve for s, the signal is assumed to admit a sparse representation, x ∈ R^b, in a given basis, {e_i ∈ R^n}_{i=0}^{b}. The e_i vectors are called *atoms* and are collected as the columns of a matrix Ψ ∈ R^{n×b} termed the *sparsifying dictionary*. Therefore, the problem of estimating s, given a limited number of observations y through the operator Φ, is translated to a sparse recovery problem: x_∗ = argmin_x ∥x∥_0 s.t. y = ΦΨx. Given that the ℓ0 pseudo-norm requires solving an NP-hard problem, the ℓ1 norm is used instead as a convex relaxation of the problem. A proximal gradient descent-based approach for solving the problem yields the ISTA algorithm (Daubechies et al., 2004; Beck & Teboulle, 2009):

$$\mathbf{x}_{t}=\eta_{\theta_{t}}\left(\mathbf{x}_{t-1}+\gamma_{t}(\mathbf{\Phi}\mathbf{\Psi})^{T}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}\mathbf{x}_{t-1})\right),\tag{3}$$

where t is the index of the current iteration, x_t (x_{t−1}) is the reconstructed sparse vector at the current (previous) layer, and θ_t and γ_t are the *soft-threshold* and *step size* hyperparameters, respectively. Specifically, θ_t characterizes the *soft-threshold function* given by η_{θ_t}(x) = sign(x)(|x| − θ_t)_+. In the ISTA formulation, those two parameters are shared across all the iterations: γ_t, θ_t → *γ, θ*. In what follows, we use the terms "layers" and "iterations" interchangeably when describing ISTA and its variations.
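For concreteness, a minimal NumPy sketch of the ISTA iteration in Equation 3 is given below. The dictionary `Psi`, sensing matrix `Phi`, step size `gamma`, and threshold `theta` are assumed to be given; in the classical algorithm they are fixed a priori rather than learned.

```python
import numpy as np

def soft_threshold(x, theta):
    # Soft-thresholding operator: eta_theta(x) = sign(x) * max(|x| - theta, 0)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(y, Phi, Psi, gamma=0.1, theta=0.01, n_iters=100):
    # ISTA for min_x ||y - Phi Psi x||_2^2 + rho ||x||_1 (Equation 3)
    A = Phi @ Psi                      # effective measurement operator (m x b)
    x = np.zeros(A.shape[1])           # x_0 = 0
    for _ in range(n_iters):
        grad = A.T @ (y - A @ x)       # gradient step direction
        x = soft_threshold(x + gamma * grad, theta)
    return x
```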
## 3.2 LISTA

LISTA (Gregor & LeCun, 2010) is an unfolded version of the ISTA algorithm in which each iteration is parametrized by learnable matrices. Specifically, LISTA reinterprets Equation 3 as defining the layer of a feed-forward neural network implemented as S_{θ_t}(V_t x_{t−1} + W_t y), where V_t, W_t are learnt from a dataset. In that way, those weights implicitly contain information about Φ and Ψ, which are assumed to be fixed. As with LISTA, its variations, e.g., Analytic LISTA (ALISTA) (Liu et al., 2019), NALISTA (Behrens et al., 2021) and HyperLISTA (Chen et al., 2021), require similar constraints, such as a fixed dictionary and sensing matrix, to reach the best performance. However, there are situations where one or none of the conditions are met (see examples in section 1).
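As an illustration, a single LISTA layer can be sketched in PyTorch as follows. The layer sizes and initialization are placeholders, and the learnable matrices V_t and W_t implicitly absorb ΦΨ as described above.

```python
import torch
import torch.nn as nn

class LISTALayer(nn.Module):
    # One unfolded iteration: x_t = S_theta(V x_{t-1} + W y), with V, W, theta learned.
    def __init__(self, m, b):
        super().__init__()
        self.V = nn.Linear(b, b, bias=False)   # plays the role of I - gamma (Phi Psi)^T (Phi Psi)
        self.W = nn.Linear(m, b, bias=False)   # plays the role of gamma (Phi Psi)^T
        self.theta = nn.Parameter(torch.tensor(0.01))

    def forward(self, x_prev, y):
        u = self.V(x_prev) + self.W(y)
        # soft-thresholding with the learned threshold
        return torch.sign(u) * torch.clamp(u.abs() - self.theta, min=0.0)
```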
## 4 Method

## 4.1 Augmented Dictionary Learning ISTA (A-DLISTA)

To deal with situations where Ψ is unknown and Φ is changing across samples, one can unfold the ISTA algorithm and re-parametrize the dictionary as a learnable matrix. Such an algorithm is termed Dictionary Learning ISTA (DLISTA) (Pezeshki et al., 2022; Behboodi et al., 2022; Aberdam et al., 2021) and, similarly to Equation 3, each layer is formulated as:

$$\mathbf{x}_{t}=\eta_{\theta_{t}}\left(\mathbf{x}_{t-1}+\gamma_{t}(\mathbf{\Phi}\mathbf{\Psi}_{t})^{T}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right),\tag{4}$$

with one last linear layer mapping x to the reconstructed signal s. Differently from ISTA (Equation 3), DLISTA (Equation 4) learns a dictionary specific to each layer, indicated by the subscript "t". The model can be trained end-to-end to learn all θ_t, γ_t, Ψ_t.

The base model is similar to (Behboodi et al., 2022; Aberdam et al., 2021). However, it requires additional changes. Consider the t-th layer of DLISTA with the varying sensing matrix Φ^k and define the following parameters:

$$\tilde{\mu}(t,\mathbf{\Phi}^{k}):=\max_{1\leq i\neq j\leq N}\left|\left(\left(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t}\right)_{i}\right)^{\top}\left(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t}\right)_{j}\right|\tag{5}$$

$$\tilde{\mu}_{2}(t,\mathbf{\Phi}^{k}):=\max_{1\leq i,j\leq N}\left|\left(\left(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t}\right)_{i}\right)^{\top}\left(\mathbf{\Phi}^{k}(\mathbf{\Psi}_{t}-\mathbf{\Psi}_{o})\right)_{j}\right|\tag{6}$$

$$\delta(\gamma,t,\mathbf{\Phi}^{k}):=\max_{i}\left|1-\gamma\left\|\left(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t}\right)_{i}\right\|^{2}\right|\tag{7}$$

where µ̃ is the *mutual coherence* of Φ^kΨ_t (Foucart & Rauhut, 2013, Chapter 5) and µ̃_2 is closely connected to the generalized mutual coherence (Liu et al., 2019). However, in contrast to the generalized mutual coherence, µ̃_2 includes the diagonal inner product for i = j. Finally, δ(·) is reminiscent of the restricted isometry property (RIP) constant (Foucart & Rauhut, 2013), a key condition for many recovery guarantees in compressed sensing. When the columns of the matrix Φ^kΨ_t are normalized, the choice of γ = 1 yields δ(γ, t, Φ^k) = 0. The following proposition provides conditions on each layer to improve the reconstruction error.

Proposition 4.1. *Suppose that* y^k = Φ^kΨ_o x_∗, where x_∗ *is the ground truth sparse vector with support* supp(x_∗) = S, and Ψ_o *is the ground truth dictionary. For DLISTA iterations given as*

$$\mathbf{x}_{t}=\eta_{\theta_{t}}\left(\mathbf{x}_{t-1}+\gamma_{t}(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t})^{T}(\mathbf{y}^{k}-\mathbf{\Phi}^{k}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right),\tag{8}$$

we have:

1. If for all t, the pairs (θ_t, γ_t, Ψ_t) *satisfy*

$$\gamma_{t}\left(\tilde{\mu}\left\|\mathbf{x}_{*}-\mathbf{x}_{t-1}\right\|_{1}+\tilde{\mu}_{2}\left\|\mathbf{x}_{*}\right\|_{1}\right)\leq\theta_{t},\tag{9}$$

then there is no false positive in each iteration. In other words, for all t, we have supp(x_t) ⊆ supp(x_∗).

2. *Assuming that the conditions of the last step hold, then we get the following bound on the error:*

$$\|\mathbf{x}_{t}-\mathbf{x}_{*}\|_{1}\leq\left(\delta(\gamma_{t})+\gamma_{t}\tilde{\mu}(|S|-1)\right)\|\mathbf{x}_{t-1}-\mathbf{x}_{*}\|_{1}+\gamma_{t}\tilde{\mu}_{2}|S|\,\|\mathbf{x}_{*}\|_{1}+|S|\theta_{t}.$$

We provide the derivation of Proposition 4.1 together with additional theoretical results in Appendix B.

![4_image_0.png](4_image_0.png)

Figure 1: Model architectures. **Left:** A-DLISTA architecture. Each blue block represents a single ISTA-like iteration parametrized by the dictionary Ψ_t and the threshold and step size {θ^i_t, γ^i_t}. The red blocks represent the augmentation network (with shared parameters across layers) that adapts {θ^i_t, γ^i_t} for layer t based on the dictionary Ψ_t and the current measurement setup Φ^i for the i-th data sample. **Right:** VLISTA (inference) architecture. The red and blue blocks correspond to the same operations as for A-DLISTA. The pink blocks represent the posterior model used to refine the dictionary based on input data {y^i, Φ^i} and the sparse vector reconstructed at layer t, x_t.

Proposition 4.1 provides insights about the choice of γ_t and θ_t, and also suggests that (δ(γ_t) + γ_t µ̃(|S| − 1)) needs to be smaller than one to reduce the error at each step. Upon examining Proposition 4.1, it becomes evident that γ_t and θ_t play a key role in the convergence of the algorithm. However, there is a trade-off to consider when making these choices. For instance, suppose we set θ_t large and decrease γ_t. In that case, we may ensure good support selection, but it could also increase δ(γ_t). In situations where the sensing matrix remains fixed, the network can possibly learn optimal choices through end-to-end training. However, when the sensing matrix Φ differs across various data samples (i.e., Φ → Φ^i), it is no longer guaranteed that there exists a unique choice of γ_t and θ_t for all Φ^i. Since these parameters can be determined when Φ and Ψ_t are fixed, we suggest utilizing an augmentation network to determine γ_t and θ_t from each pair of Φ^i and Ψ_t. For a more thorough theoretical analysis, please refer to Appendix B.

**Algorithm 1** Augmented Dictionary Learning ISTA (A-DLISTA) - Inference Algorithm

Require: D = {(y^i, Φ^i)}_{i=0}^{N−1} (the sensing matrix changes across samples); augmentation model f_Θ

1. x^i_0 ← 0
2. for t = 1, ..., T do
3. (θ^i_t, γ^i_t) ← f_Θ(Φ^i, Ψ_t) ▷ Augmentation step
4. g ← (Φ^iΨ_t)^T (y^i − Φ^iΨ_t x^i_{t−1})
5. u ← x^i_{t−1} + γ^i_t g
6. x^i_t = η_{θ^i_t}(u)
7. end for
8. return x^i_T

**Algorithm 2** Variational Learning ISTA (VLISTA) - Inference Algorithm

Require: D = {(y^i, Φ^i)}_{i=0}^{N−1} (the sensing matrix changes across samples); augmentation model f_Θ; posterior model f_ϕ

1. x^i_0 ← 0
2. for t = 1, ..., T do
3. (µ_t, σ²_t) ← f_ϕ(x^i_{t−1}, y^i, Φ^i) ▷ Posterior parameter estimation
4. Ψ_t ∼ N(Ψ_t | µ_t, σ²_t) ▷ Dictionary sampling
5. (θ^i_t, γ^i_t) ← f_Θ(Φ^i, Ψ_t) ▷ Augmentation step
6. g ← (Φ^iΨ_t)^T (y^i − Φ^iΨ_t x^i_{t−1})
7. u ← x^i_{t−1} + γ^i_t g
8. x^i_t = η_{θ^i_t}(u)
9. end for
10. return x^i_T

We show in Figure 1 (left plot) the resulting A-DLISTA model. At each layer, A-DLISTA performs two basic operations, namely, soft-thresholding (blue blocks in Figure 1) and augmentation (red blocks in Figure 1). The former represents an ISTA-like iteration parametrized by the set of weights {Ψ_t, θ^i_t, γ^i_t}, whilst the latter is implemented using a convolutional neural network. As shown in the figure, the augmentation network takes as input the sensing matrix for the given data sample, Φ^i, together with the dictionary learned at the layer for which the augmentation model will generate the θ^i_t and γ^i_t parameters: (θ^i_t, γ^i_t) = f_Θ(Φ^i, Ψ_t), where Θ are the augmentation model's trainable parameters. Through such an operation, A-DLISTA adapts the soft-threshold and step size of each layer to the current data sample. The inference algorithmic description of A-DLISTA is given in Algorithm 1 above. We report more details about the augmentation network in the supplementary materials (Appendix C).
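To make the layer structure concrete, below is a minimal PyTorch sketch of one A-DLISTA layer following Algorithm 1. The augmentation network here is a deliberately simplified stand-in (the paper uses a shared convolutional network, detailed in Appendix C), and all shapes, layer sizes, and initializations are illustrative assumptions rather than the actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Augmentation(nn.Module):
    # Simplified stand-in for the shared augmentation network f_Theta:
    # maps (Phi^i, Psi_t) to a per-sample (theta, gamma) pair.
    def __init__(self, m, n, b, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(m * n + n * b, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, Phi, Psi):
        # Phi: (batch, m, n), Psi: (n, b)
        z = torch.cat([Phi.flatten(1), Psi.flatten().expand(Phi.shape[0], -1)], dim=1)
        theta, gamma = F.softplus(self.net(z)).unbind(dim=1)   # positive threshold and step size
        return theta.unsqueeze(1), gamma.unsqueeze(1)

class ADLISTALayer(nn.Module):
    def __init__(self, n, b, augmentation):
        super().__init__()
        self.Psi = nn.Parameter(torch.randn(n, b) / n ** 0.5)  # learnable per-layer dictionary
        self.aug = augmentation                                 # shared across layers

    def forward(self, x_prev, y, Phi):
        theta, gamma = self.aug(Phi, self.Psi)                  # augmentation step
        A = Phi @ self.Psi                                      # per-sample effective operator
        residual = y - torch.einsum('bmk,bk->bm', A, x_prev)
        g = torch.einsum('bmk,bm->bk', A, residual)             # (Phi Psi_t)^T (y - Phi Psi_t x)
        u = x_prev + gamma * g
        return torch.sign(u) * torch.clamp(u.abs() - theta, min=0.0)
```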
## 4.2 Variational Learning ISTA

Although A-DLISTA possesses adaptivity to data samples, it still assumes the existence of a ground truth dictionary. We relax such a hypothesis by defining a probability distribution over Ψ_t and formulating a variational approach, titled VLISTA, to solve the dictionary learning and sparse recovery problems jointly. To forge our variational framework whilst retaining the helpful adaptivity property of A-DLISTA, we re-interpret the soft-thresholding layers of the latter as part of a likelihood model defining the output mean for the reconstructed signal. Given its recurrent-like structure (Chung et al., 2015), we equip VLISTA with a conditional trainable prior where the condition is given by the dictionary sampled at the previous iteration. Therefore, the full model comprises three components, namely, the conditional prior p_ξ(·), the variational posterior q_ϕ(·), and the likelihood model p_Θ(·). All components are parametrized by neural networks whose outputs represent the parameters of the underlying probability distribution. In what follows, we describe in more detail the various building blocks of the VLISTA model.
## 4.2.1 Prior Model

The conditional prior, p_ξ(Ψ_t|Ψ_{t−1}), is modelled as a Gaussian distribution whose parameters are conditioned on the previously sampled dictionary. We implement p_ξ(·) as a neural network, f_ξ(·) = [f^µ_{ξ_1} ∘ g_{ξ_0}(·), f^{σ²}_{ξ_2} ∘ g_{ξ_0}(·)], with trainable parameters ξ = {ξ_0, ξ_1, ξ_2}. The model's architecture comprises a shared convolutional block followed by two branches generating the Gaussian distribution's mean and standard deviation, respectively. Therefore, at layer t, the prior conditional distribution is given by p_ξ(Ψ_t|Ψ_{t−1}) = ∏_{i,j} N(Ψ_{t;i,j} | µ_{t;i,j} = f^µ_{ξ_1}(g_{ξ_0}(Ψ_{t−1}))_{i,j}; σ_{t;i,j} = f^{σ²}_{ξ_2}(g_{ξ_0}(Ψ_{t−1}))_{i,j}), where the indices *i, j* run over the rows and columns of Ψ_t. To simplify our expressions, we will abuse notation and refer to distributions like the former one as:

$$p_{\xi}(\mathbf{\Psi}_{t}|\mathbf{\Psi}_{t-1})={\cal N}(\mathbf{\Psi}_{t}|\mathbf{\mu}_{t};\mathbf{\sigma}^{2}_{t}),\quad\text{where}\quad\mathbf{\mu}_{t}=f_{\xi_{1}}^{\mu}(g_{\xi_{0}}(\mathbf{\Psi}_{t-1})),\quad\mathbf{\sigma}^{2}_{t}=f_{\xi_{2}}^{\sigma^{2}}(g_{\xi_{0}}(\mathbf{\Psi}_{t-1}))\tag{10}$$

We will use the same type of notation throughout the rest of the manuscript to simplify formulas. The prior design allows for enforcing a dependence of the dictionary at iteration t on the one sampled at the previous iteration, thus allowing us to refine Ψ_t as the iterations proceed. The only exception is the prior imposed over the dictionary at t = 1, where there is no previously sampled dictionary. To handle this exception, we assume a standard Gaussian distributed Ψ_1. The joint prior distribution over the dictionaries for VLISTA is given by:

$$p_{\xi}(\mathbf{\Psi}_{1:T})={\cal N}(\mathbf{\Psi}_{1}|\mathbf{0};\mathbf{1})\prod_{t=2}^{T}{\cal N}(\mathbf{\Psi}_{t}|\mathbf{\mu}_{t};\mathbf{\sigma}^{2}_{t})\tag{11}$$

where µ_t and σ²_t are defined in Equation 10.
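A minimal sketch of such a conditional prior network is shown below; treating the dictionary as a single-channel image, a shared convolutional trunk feeds two heads that output the mean and (positive) variance of the Gaussian over Ψ_t. All layer sizes and the exact architecture are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionalPrior(nn.Module):
    # p_xi(Psi_t | Psi_{t-1}) = N(mu_t, sigma2_t), with mu_t and sigma2_t produced
    # by a shared convolutional block followed by two separate heads.
    def __init__(self, hidden_channels=16):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(1, hidden_channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.mu_head = nn.Conv2d(hidden_channels, 1, kernel_size=3, padding=1)
        self.log_var_head = nn.Conv2d(hidden_channels, 1, kernel_size=3, padding=1)

    def forward(self, Psi_prev):
        # Psi_prev: (n, b) dictionary sampled at the previous layer
        h = self.shared(Psi_prev.unsqueeze(0).unsqueeze(0))         # (1, 1, n, b) input
        mu = self.mu_head(h).squeeze(0).squeeze(0)                  # mean, same shape as Psi
        sigma2 = self.log_var_head(h).exp().squeeze(0).squeeze(0)   # positive variance
        return mu, sigma2
```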
## 4.2.2 Posterior Model

Similarly to the prior model, the variational posterior too is modelled as a Gaussian distribution parametrized by a neural network f_ϕ(·) = [f^µ_{ϕ_1} ∘ h_{ϕ_0}(·), f^{σ²}_{ϕ_2} ∘ h_{ϕ_0}(·)] that outputs the mean and variance of the underlying probability distribution:

$$q_{\phi}(\mathbf{\Psi}_{t}|\mathbf{x}_{t-1},\mathbf{y}^{i},\mathbf{\Phi}^{i})={\cal N}(\mathbf{\Psi}_{t}|\mathbf{\mu}_{t};\mathbf{\sigma}^{2}_{t}),\quad\text{where}\quad\mathbf{\mu}_{t}=f_{\phi_{1}}^{\mu}(h_{\phi_{0}}(\mathbf{x}_{t-1},\mathbf{y}^{i},\mathbf{\Phi}^{i})),\quad\mathbf{\sigma}^{2}_{t}=f_{\phi_{2}}^{\sigma^{2}}(h_{\phi_{0}}(\mathbf{x}_{t-1},\mathbf{y}^{i},\mathbf{\Phi}^{i}))\tag{12}$$

The posterior distribution for the dictionary, Ψ_t, at layer t is conditioned on the data, {y^i, Φ^i}, as well as on the reconstructed signal at the previous layer, x_{t−1}. Therefore, the joint posterior probability over the dictionaries is given by:

$$q_{\phi}(\mathbf{\Psi}_{1:T}|\mathbf{x}_{1:T},\mathbf{y}^{i},\mathbf{\Phi}^{i})=\prod_{t=1}^{T}q_{\phi}(\mathbf{\Psi}_{t}|\mathbf{x}_{t-1},\mathbf{y}^{i},\mathbf{\Phi}^{i})\tag{13}$$

When considering our selection of Gaussian distributions for the prior and posterior models, we prioritized computational and implementation convenience. However, it is important to note that our framework is not limited to this distribution family. As long as the distributions used are reparametrizable (Kingma & Welling, 2013), meaning that we can obtain gradients of random samples with respect to their parameters and we can evaluate and differentiate their density, VLISTA can support any flexible distribution family. This includes mixtures of Gaussians to incorporate heavier tails and distributions resulting from normalizing flows (Rezende & Mohamed, 2015).
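For example, with a Gaussian posterior the dictionary sample used in Algorithm 2 can be drawn with the reparameterization trick, which keeps the sampling step differentiable with respect to the posterior parameters; here `mu_t` and `sigma2_t` are assumed to come from the posterior network f_ϕ.

```python
import torch

def sample_dictionary(mu_t, sigma2_t):
    # Reparameterized sample Psi_t = mu_t + sigma_t * eps, with eps ~ N(0, I),
    # so gradients flow back into the posterior parameters (mu_t, sigma2_t).
    eps = torch.randn_like(mu_t)
    return mu_t + sigma2_t.sqrt() * eps
```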
## 4.2.3 Likelihood Model

The soft-thresholding block of A-DLISTA is at the heart of the reconstruction module. Similarly to the prior and posterior, the likelihood distribution is modelled as a Gaussian parametrized by the output of an A-DLISTA block. In particular, the network generates the mean vector for the Gaussian distribution while we treat the standard deviation as a tunable hyperparameter. By combining these elements, we can formulate the joint log-likelihood distribution as:

$$\log p_{\Theta}(\mathbf{x}_{1:T}=\mathbf{x}_{gt}^{i}|\mathbf{\Psi}_{1:T},\mathbf{y}^{i},\mathbf{\Phi}^{i})=\sum_{t=1}^{T}\log\mathcal{N}(\mathbf{x}_{gt}^{i}|\mathbf{\mu}_{t},\mathbf{\sigma}^{2}_{t}),\quad\text{where}\quad\mathbf{\mu}_{t}=\text{A-DLISTA}(\mathbf{\Psi}_{t},\mathbf{x}_{t-1},\mathbf{y}^{i},\mathbf{\Phi}^{i};\Theta),\quad\mathbf{\sigma}^{2}_{t}=\delta\tag{14}$$

where δ is a hyperparameter of the network, x^i_{gt} represents the ground truth value for the underlying unknown sparse signal for the i-th data sample, Θ is the set of A-DLISTA's parameters, and p_Θ(x_{1:T} = x^i_{gt}|·) represents the likelihood of the ground truth x^i_{gt}, at each time step t ∈ [1, T], under the likelihood model given the current reconstruction. Note that in Equation 14 we use the same x^i_{gt} through the entire sequence t ∈ [1, T].

![7_image_0.png](7_image_0.png)

Figure 2: VLISTA graphical model. Dependencies on y^i and Φ^i are factored out for simplicity. The sampling is done only based on the posterior q_ϕ(Ψ_t|x_{t−1}, y^i, Φ^i). Dashed lines represent variational approximations.

We report in Figure 1 (right plot) the inference architecture for VLISTA. It is crucial to note that during inference VLISTA does not require the prior model, which is used for training only. Additionally, its graphical model and algorithmic description are shown in Figure 2 and Algorithm 2, respectively. We report further details concerning the architecture of the models and the objective function in the supplementary material (Appendix C and Appendix D).
## 4.2.4 Training Objective

Our approach involves training all VLISTA components in an end-to-end fashion. To accomplish that, we maximize the Evidence Lower Bound (ELBO):

$$\begin{aligned}\text{ELBO}=&\sum_{t=1}^{T}\mathbb{E}_{\mathbf{\Psi}_{1:t}\sim q_{\phi}(\mathbf{\Psi}_{1:t}|\mathbf{x}_{0:t-1},\mathcal{D}^{i})}\left[\log p_{\Theta}(\mathbf{x}_{t}=\mathbf{x}_{gt}^{i}|\mathbf{\Psi}_{1:t},\mathcal{D}^{i})\right]\\&-\sum_{t=2}^{T}\mathbb{E}_{\mathbf{\Psi}_{1:t-1}\sim q_{\phi}(\mathbf{\Psi}_{1:t-1}|\mathbf{x}_{t-1},\mathcal{D}^{i})}\left[D_{KL}\left(q_{\phi}(\mathbf{\Psi}_{t}|\mathbf{x}_{t-1},\mathcal{D}^{i})\,\|\,p_{\xi}(\mathbf{\Psi}_{t}|\mathbf{\Psi}_{t-1})\right)\right]\\&-D_{KL}\left(q_{\phi}(\mathbf{\Psi}_{1}|\mathbf{x}_{0},\mathcal{D}^{i})\,\|\,p(\mathbf{\Psi}_{1})\right)\end{aligned}\tag{15}$$

As we can see from Equation 15, the ELBO comprises three terms. The first term is the sum of expected log-likelihoods of the target signal at each time step. The second term is the sum of KL divergences between the approximate posterior and the prior at each time step. The third term is the KL divergence between the approximate posterior at the initial time step and a prior. In our implementation, we set the number of layers to "T" and initialize the input signal to zero.

To evaluate the likelihood contribution in Equation 15, we marginalize over dictionaries sampled from the posterior q_ϕ(Ψ_{1:t}|x_{0:t−1}, D^i). In contrast, the last two terms in the equation represent the KL divergence contribution between the prior and posterior distributions. It is worth noting that the prior in the last term is not conditioned on the previously sampled dictionary, given that p_ξ(Ψ_1) → p(Ψ_1) = N(Ψ_1|0; 1) (refer to Equation 10 and Equation 11). We refer the reader to Appendix D for the derivation of the ELBO.
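The sketch below shows how the three ELBO terms could be accumulated over the T unfolded layers during training. Here `adlista_layer`, `posterior`, and `prior` stand for the likelihood, q_ϕ, and p_ξ networks above (with `adlista_layer` taking the sampled dictionary as an extra argument), the expectations are approximated with a single reparameterized sample, and the Gaussian log-likelihood and KL use the closed forms for diagonal Gaussians. It is a schematic single-sample estimate under these assumptions, not the actual training loop.

```python
import torch

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over entries.
    return 0.5 * (var_q / var_p + (mu_q - mu_p) ** 2 / var_p - 1.0 + var_p.log() - var_q.log()).sum()

def negative_elbo(y, Phi, x_gt, adlista_layer, posterior, prior, T, delta=1.0):
    x = torch.zeros_like(x_gt)                                 # x_0 = 0
    Psi_prev, kl, log_lik = None, 0.0, 0.0
    for t in range(T):
        mu_q, var_q = posterior(x, y, Phi)                     # q_phi(Psi_t | x_{t-1}, y, Phi)
        Psi_t = mu_q + var_q.sqrt() * torch.randn_like(mu_q)   # reparameterized sample
        if Psi_prev is None:
            mu_p, var_p = torch.zeros_like(mu_q), torch.ones_like(var_q)  # p(Psi_1) = N(0, I)
        else:
            mu_p, var_p = prior(Psi_prev)                      # p_xi(Psi_t | Psi_{t-1})
        kl = kl + gaussian_kl(mu_q, var_q, mu_p, var_p)
        x = adlista_layer(x, y, Phi, Psi_t)                    # likelihood mean at step t
        log_lik = log_lik - ((x_gt - x) ** 2).sum() / (2 * delta)  # Gaussian log-lik up to a constant
        Psi_prev = Psi_t
    return -(log_lik - kl)                                     # minimize the negative ELBO
```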
## 5 Experiments

## 5.1 Datasets And Baselines

We evaluate our models' performance by comparing them against classical and ML-based baselines on three datasets: MNIST, CIFAR10, and a synthetic dataset. Concerning the synthetic dataset, we follow a similar approach as in Chen et al. (2018); Liu & Chen (2019); Behrens et al. (2021). However, in contrast to the mentioned works, we generate a different Φ matrix for each datum by sampling i.i.d. entries from a standard Gaussian distribution. We generate the ground truth sparse signals by sampling the entries from a standard Gaussian and setting each entry to be non-zero with a probability of 0.1. We generate 5K samples and use 3K for training, 1K for model selection, and 1K for testing. Concerning MNIST and CIFAR10, we train the models using the full images, without applying any crop. For CIFAR10, we gray-scale and normalize the images. We generate the corresponding observations, y^i, by multiplying each sensing matrix with the ground truth image: y^i = Φ^i s^i. We compare the A-DLISTA and VLISTA models against classical and ML baselines. Our classical baselines use the ISTA algorithm, and we pre-compute the dictionary by either considering the canonical or the wavelet basis or using the SPCA algorithm. Our ML baselines use different unfolded learning versions of ISTA, such as LISTA. To demonstrate the benefits of adaptivity, we perform an ablation study on A-DLISTA by removing its augmentation network and making the parameters θ_t, γ_t learnable only through backpropagation. We refer to the non-augmented version of A-DLISTA as DLISTA. Therefore, for DLISTA, θ_t and γ_t cannot be adapted to the specific input sensing matrix. Moreover, we consider BCS (Ji et al., 2008) as a specific Bayesian baseline for VLISTA. Finally, we conduct Out-Of-Distribution (OOD) detection experiments. We fixed the number of layers to three for all ML models to compare their performance. The classical baselines do not possess learnable parameters. Therefore, we performed an extensive grid search to find the best hyperparameters for them. More details concerning the training procedure and ablation studies can be found in Appendix D and Appendix F.
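A sketch of the per-sample synthetic data generation described above (per-datum i.i.d. Gaussian sensing matrices, Gaussian sparse codes with 0.1 non-zero probability) follows; the signal dimensions `m` and `n` are not specified in this section, so the defaults below are placeholders.

```python
import numpy as np

def make_synthetic_dataset(num_samples=5000, m=50, n=100, p_nonzero=0.1, seed=0):
    # Each sample gets its own i.i.d. Gaussian sensing matrix Phi^i and a sparse
    # Gaussian ground-truth vector x^i with ~10% non-zero entries; y^i = Phi^i x^i.
    rng = np.random.default_rng(seed)
    data = []
    for _ in range(num_samples):
        Phi = rng.standard_normal((m, n))
        x = rng.standard_normal(n) * (rng.random(n) < p_nonzero)
        data.append((Phi @ x, Phi, x))
    return data  # later split, e.g., 3K train / 1K validation / 1K test
```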
## 5.2 Synthetic Dataset

Regarding the synthetic dataset, we evaluate model performance by computing the median of the c.d.f. for the reconstruction NMSE (Figure 3). A-DLISTA's adaptivity appears to offer an advantage over other models. However, concerning VLISTA, we observe a drop in performance. Such a behaviour is consistent across experiments and can be attributed to a few factors. One possible reason for the drop in performance is the noise introduced during training due to the random sampling procedure used to generate the dictionary. Additionally, the amortization gap that affects all models based on amortized variational inference (Cremer et al., 2018) can also contribute to this effect. Despite this, VLISTA still performs comparably to BCS.

Lastly, we note that ALISTA and NALISTA do not perform as well as other models. This is likely due to the optimization procedure these two models require to evaluate the weight matrix W. The computation of the W matrix requires a fixed sensing matrix, a condition not satisfied in the current setup. Regarding non-static measurements, we averaged across multiple Φ^i, thus obtaining a non-optimal W matrix. To support our hypothesis, we report in Appendix F results considering a static measurement scenario for which ALISTA and NALISTA report very high performance.

Figure 3: NMSE median. The y-axis is in dB (the lower the better) for a different number of measurements (x-axis).
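As a point of reference, the NMSE-in-dB statistic summarized in Figure 3 can be computed as sketched below (median over the test set); this is an assumed formulation of the metric, namely 10·log10 of the normalized squared reconstruction error.

```python
import numpy as np

def nmse_db(x_true, x_hat):
    # Normalized mean squared error in dB for a single sample.
    return 10.0 * np.log10(np.sum((x_true - x_hat) ** 2) / np.sum(x_true ** 2))

def median_nmse_db(pairs):
    # pairs: iterable of (ground truth, reconstruction); lower is better.
    return float(np.median([nmse_db(x, xh) for x, xh in pairs]))
```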
## 5.3 Image Reconstruction - MNIST & CIFAR10

When evaluating different models on the MNIST and CIFAR10 datasets, we use the Structural Similarity Index Measure (SSIM) to measure their performance. As for the synthetic dataset, we experience strong instabilities in ALISTA and NALISTA training due to the non-static measurement setup. Therefore, we do not provide results for these models. It is important to note that the poor performance of ALISTA and NALISTA is a result of our specific experiment setup, which differs from the static case considered in the formulation of these models. We refer to Appendix F for results using a static measurements scenario. By looking at the results in Table 1 and Table 2, we can draw similar conclusions as for the synthetic dataset. Additionally, we report results from three classical baselines (subsection 5.1). Among non-Bayesian models, A-DLISTA shows the best results. Furthermore, by comparing A-DLISTA with its non-augmented version, DLISTA, one can notice the benefits of using an augmentation network to make the model adaptive. Concerning Bayesian approaches, VLISTA outperforms BCS. However, it is important to note that BCS does not have trainable parameters, unlike VLISTA. Therefore, the higher performance of VLISTA comes at the price of an expensive training procedure. Similar to the synthetic dataset, VLISTA exhibits a drop in performance compared to A-DLISTA for MNIST and CIFAR10.

Table 1: MNIST SSIM ↑ (the higher the better; values ×e−1) for different numbers of measurements (columns). The first three rows correspond to "classical" baselines. We highlight in bold the best performance for Bayes and Non-Bayes models.

| | Model | 1 | 10 | 100 | 300 | 500 |
|---|---|---|---|---|---|---|
| Non-Bayes | Canonical | 0.39±0.12 | 0.56±0.04 | 2.20±0.04 | 3.75±0.05 | 4.94±0.06 |
| | Wavelet | 0.40±0.09 | 0.56±0.06 | 2.30±0.06 | 3.90±0.05 | 5.05±0.01 |
| | SPCA | 0.45±0.11 | 0.65±0.06 | 2.72±0.06 | 3.52±0.08 | 4.98±0.08 |
| | LISTA | 0.96±0.01 | 1.11±0.01 | 3.70±0.01 | 5.36±0.01 | 6.31±0.01 |
| | DLISTA | 0.96±0.01 | 1.09±0.01 | 4.01±0.02 | 5.57±0.01 | 6.26±0.01 |
| | A-DLISTA (our) | 0.96±0.01 | 1.17±0.01 | 4.79±0.01 | 6.15±0.01 | 6.70±0.01 |
| Bayes | BCS | 0.05±0.01 | 0.60±0.01 | 1.10±0.01 | 4.48±0.02 | 6.23±0.02 |
| | VLISTA (our) | 0.80±0.03 | 0.94±0.02 | 3.29±0.01 | 4.73±0.01 | 6.02±0.01 |

Table 2: CIFAR10 SSIM ↑ (the higher the better; values ×e−1) for different numbers of measurements (columns). The first three rows correspond to "classical" baselines. We highlight in bold the best performance for Bayes and Non-Bayes models.

| | Model | 1 | 10 | 100 | 300 | 500 |
|---|---|---|---|---|---|---|
| Non-Bayes | Canonical | 0.17±0.10 | 0.21±0.02 | 0.33±0.02 | 0.47±0.02 | 0.58±0.03 |
| | Wavelet | 0.23±0.22 | 0.42±0.02 | 1.44±0.06 | 2.52±0.09 | 3.43±0.08 |
| | SPCA | 0.31±0.19 | 0.43±0.02 | 1.53±0.04 | 2.66±0.08 | 3.58±0.07 |
| | LISTA | 1.34±0.02 | 1.67±0.02 | 3.10±0.01 | 4.20±0.01 | 4.71±0.01 |
| | DLISTA | 1.16±0.02 | 1.96±0.02 | 4.50±0.01 | 5.15±0.01 | 5.42±0.01 |
| | A-DLISTA (our) | 1.34±0.02 | 1.77±0.02 | 4.74±0.01 | 5.26±0.01 | 5.83±0.01 |
| Bayes | BCS | 0.04±0.01 | 0.48±0.01 | 0.59±0.01 | 1.29±0.01 | 1.91±0.01 |
| | VLISTA (our) | 0.86±0.03 | 1.25±0.03 | 3.59±0.02 | 4.01±0.01 | 4.36±0.01 |
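The SSIM numbers in Table 1 and Table 2 can be reproduced in spirit with an off-the-shelf implementation such as `skimage.metrics.structural_similarity`; the snippet below only illustrates the evaluation loop and assumes grayscale images.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def mean_ssim(ground_truth_images, reconstructed_images):
    # Average SSIM between ground-truth and reconstructed grayscale images.
    scores = [
        ssim(gt.astype(np.float64), rec.astype(np.float64), data_range=gt.max() - gt.min())
        for gt, rec in zip(ground_truth_images, reconstructed_images)
    ]
    return float(np.mean(scores))
```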
## 5.4 Out Of Distribution Detection

This section focuses on a crucial distinction between non-Bayesian models and VLISTA for solving inverse linear problems. Unlike any non-Bayesian approach to compressed sensing, VLISTA allows quantifying uncertainties on the reconstructed signals. This means that it can detect out-of-distribution samples without requiring ground truth data during inference. In contrast to other Bayesian techniques that design specific priors to meet the sparsity constraints after marginalization (Ji et al., 2008; Zhou et al., 2014), VLISTA completely overcomes such an issue, as the thresholding operations are not affected by the marginalization over dictionaries. To prove that VLISTA can detect OOD samples, we employ the MNIST dataset. First, we split the dataset into two distinct subsets - the In-Distribution (ID) set and the OOD set. The ID set comprises images from three randomly chosen digits, while the OOD set includes images of the remaining digits. Then, we partitioned the ID set into training, validation, and test sets for VLISTA. Once the model was trained, it was tasked with reconstructing images from the ID test and OOD sets. To assess the model's ability to detect OODs, we utilized a *two-sample t-test*. We accomplished that by leveraging the per-pixel variance of the reconstructed ID, {var^{ID_test;i}_{σ_pp}}_{i=0}^{P−1}, and OOD, {var^{OOD;i}_{σ_pp}}_{i=0}^{P−1}, images (with P being the number of pixels). To compute the per-pixel variance, we reconstruct each image 100 times by sampling a different dictionary for each trial. We then construct the empirical c.d.f. of the per-pixel variance for each image. By using the mean of the c.d.f. as a summary statistic, we can apply the *two-sample t-test* to detect OOD samples. We report the results in Figure 4. As a reference p-value for rejecting the null hypothesis about the two variance distributions being the same, we consider a significance level equal to 0.05 (green solid line). We conducted multiple tests at different noise levels to assess the robustness of OOD detection to measurement noise. For the current task, we used BCS as a baseline. However, due to the different nature of the BCS framework, we utilized a slightly different evaluation procedure to determine its p-values. We employed the same ID and OOD splits as VLISTA but considered the c.d.f. of the reconstruction error the model evaluates. The remainder of the process was identical to that of VLISTA.

![10_image_0.png](10_image_0.png)

Figure 4: p-value for OOD rejection as a function of the noise level. The green line represents a reference p-value equal to 0.05.
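The OOD test described above can be sketched as follows: each measurement is reconstructed multiple times under different dictionary samples, the per-pixel variance is summarized by its mean, and a two-sample t-test compares the ID and OOD summary statistics. Here `reconstruct` stands for a VLISTA forward pass with a freshly sampled dictionary and is an assumed interface.

```python
import numpy as np
from scipy.stats import ttest_ind

def per_pixel_variance_stat(y, Phi, reconstruct, n_draws=100):
    # Reconstruct the same measurement n_draws times, each with a newly sampled
    # dictionary, and summarize the per-pixel variance by its mean.
    recs = np.stack([reconstruct(y, Phi) for _ in range(n_draws)])  # (n_draws, P)
    return np.var(recs, axis=0).mean()

def ood_p_value(id_samples, ood_samples, reconstruct):
    id_stats = [per_pixel_variance_stat(y, Phi, reconstruct) for y, Phi in id_samples]
    ood_stats = [per_pixel_variance_stat(y, Phi, reconstruct) for y, Phi in ood_samples]
    # Null hypothesis: ID and OOD variance statistics share the same mean.
    return ttest_ind(id_stats, ood_stats, equal_var=False).pvalue
```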
## 6 Conclusion

Our study introduces a novel approach called VLISTA, which combines dictionary learning and sparse recovery into a single variational framework. Traditional compressed sensing methods rely on a known ground truth dictionary to reconstruct signals. Moreover, state-of-the-art LISTA-type models typically assume a fixed measurement setup. In our work, we relax both assumptions. First, we propose a soft-thresholding algorithm, termed A-DLISTA, that can handle different sensing matrices. We theoretically justify the use of an augmentation network to adapt the threshold and step size for each layer based on the current input and the learned dictionary. Finally, we propose a probabilistic assumption about the existence of a ground truth dictionary and use it to create the VLISTA framework. Our empirical results show that A-DLISTA improves upon the performance of classical and ML baselines in a non-static measurement scenario. Although VLISTA does not outperform A-DLISTA, it allows for uncertainty evaluation in reconstructed signals, a valuable feature for detecting out-of-distribution data. In contrast, none of the non-Bayesian models can perform such a task. Unlike other Bayesian approaches, VLISTA does not require specific priors to preserve sparsity after marginalization. Instead, the averaging operation applies to the sparsifying dictionary, not the sparse signal itself.

## Impact Statement

This work proposes two new models to jointly solve the dictionary learning and sparse recovery problems, especially concerning scenarios characterized by a varying sensing matrix. We believe the potential societal consequences of our work are chiefly positive, since it might contribute to a larger adoption of LISTA-type models in applications requiring fast solutions to underdetermined inverse problems, especially concerning varying forward operators. Nonetheless, it is crucial to exercise caution and thoroughly comprehend the behavior of A-DLISTA and VLISTA, as with any other LISTA model, in order to obtain reliable predictions.
## References

Aviad Aberdam, Alona Golts, and Michael Elad. Ada-LISTA: Learned Solvers Adaptive to Varying Models. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, pp. 1–1, 2021.

Tim Bakker, Herke van Hoof, and Max Welling. Experimental design for MRI by greedy policy search. *Advances in Neural Information Processing Systems*, 33:18954–18966, 2020.

Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. *SIAM Journal on Imaging Sciences*, 2(1):183–202, 2009.

Arash Behboodi, Holger Rauhut, and Ekkehard Schnoor. Compressive Sensing and Neural Networks from a Statistical Learning Perspective. In Gitta Kutyniok, Holger Rauhut, and Robert J. Kunsch (eds.), *Compressed Sensing in Information Processing*, Applied and Numerical Harmonic Analysis, pp. 247–277. Springer International Publishing, Cham, 2022.

Freya Behrens, Jonathan Sauder, and Peter Jung. Neurally Augmented ALISTA. In *International Conference on Learning Representations*, 2021.

Thomas Blumensath and Mike E. Davies. Iterative hard thresholding for compressed sensing. *Applied and Computational Harmonic Analysis*, 27(3):265–274, November 2009.

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In *International Conference on Machine Learning*, pp. 1613–1622. PMLR, 2015.

Mark Borgerding, Philip Schniter, and Sundeep Rangan. AMP-inspired deep networks for sparse linear inverse problems. *IEEE Transactions on Signal Processing*, 65(16):4293–4308, 2017.

Xiaohan Chen, Jialin Liu, Zhangyang Wang, and Wotao Yin. Theoretical linear convergence of unfolded ISTA and its practical weights and thresholds. *Advances in Neural Information Processing Systems*, 31, 2018.

Xiaohan Chen, Jialin Liu, Zhangyang Wang, and Wotao Yin. Hyperparameter tuning is all you need for LISTA. *Advances in Neural Information Processing Systems*, 34, 2021.

Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. *Advances in Neural Information Processing Systems*, 28, 2015.

Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. *arXiv preprint arXiv:1609.01704*, 2016.

Chris Cremer, Xuechen Li, and David Duvenaud. Inference suboptimality in variational autoencoders. In *International Conference on Machine Learning*, pp. 1078–1086. PMLR, 2018.

Ingrid Daubechies, Michel Defrise, and Christine De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. *Communications on Pure and Applied Mathematics*, 57(11):1413–1457, 2004.

Geoffrey M. Davis, Stephane G. Mallat, and Zhifeng Zhang. Adaptive time-frequency decompositions. *Optical Engineering*, 33(7):2183–2191, July 1994.

David L. Donoho, Arian Maleki, and Andrea Montanari. Message-passing algorithms for compressed sensing. *Proceedings of the National Academy of Sciences*, 106(45):18914–18919, November 2009.

Meire Fortunato, Charles Blundell, and Oriol Vinyals. Bayesian Recurrent Neural Networks. *arXiv:1704.02798 [cs, stat]*, May 2019. URL http://arxiv.org/abs/1704.02798.

Simon Foucart and Holger Rauhut. *A Mathematical Introduction to Compressive Sensing*. Applied and Numerical Harmonic Analysis. Springer New York, New York, NY, 2013.

Raja Giryes, Yonina C. Eldar, Alex M. Bronstein, and Guillermo Sapiro. Tradeoffs between convergence speed and reconstruction accuracy in inverse problems. *IEEE Transactions on Signal Processing*, 66(7):1676–1690, 2018.

Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In *27th International Conference on Machine Learning, ICML 2010*, 2010.

Shihao Ji, Ya Xue, and Lawrence Carin. Bayesian Compressive Sensing. *IEEE Transactions on Signal Processing*, 56(6):2346–2356, June 2008.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013.

Rahul Krishnan, Uri Shalit, and David Sontag. Structured inference networks for nonlinear state space models. *Proceedings of the AAAI Conference on Artificial Intelligence*, 31(1), February 2017.

Jialin Liu and Xiaohan Chen. ALISTA: Analytic weights are as good as learned weights in LISTA. In *International Conference on Learning Representations (ICLR)*, 2019.

Jialin Liu, Xiaohan Chen, Zhangyang Wang, and Wotao Yin. ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA. In *International Conference on Learning Representations*, 2019.

Chris Metzler, Ali Mousavi, and Richard Baraniuk. Learned D-AMP: Principled Neural Network based Compressive Image Recovery. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 30*, pp. 1770–1781. Curran Associates, Inc., 2017.

Y.C. Pati, R. Rezaiifar, and P.S. Krishnaprasad. Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. In *Proceedings of 27th Asilomar Conference on Signals, Systems and Computers*, pp. 40–44 vol.1, November 1993. doi: 10.1109/ACSSC.1993.342465.

Hamed Pezeshki, Fabio Valerio Massoli, Arash Behboodi, Taesang Yoo, Arumugam Kannan, Mahmoud Taherzadeh Boroujeni, Qiaoyu Li, Tao Luo, and Joseph B Soriaga. Beyond codebook-based analog beamforming at mmwave: Compressed sensing and machine learning methods. In *GLOBECOM 2022 - 2022 IEEE Global Communications Conference*, pp. 776–781. IEEE, 2022.

Wei Pu, Yonina C. Eldar, and Miguel R. D. Rodrigues. Optimization Guarantees for ISTA and ADMM Based Unfolded Networks. In *IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 8687–8691, May 2022.

Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In *International Conference on Machine Learning*, pp. 1530–1538. PMLR, 2015.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In *International Conference on Machine Learning*, pp. 1278–1286. PMLR, 2014.

Javier Rodríguez-Fernández, Nuria González-Prelcic, Kiran Venugopal, and Robert W Heath. Frequency-domain compressive channel estimation for frequency-selective hybrid millimeter wave MIMO systems. *IEEE Transactions on Wireless Communications*, 17(5):2946–2960, 2018.

Ekkehard Schnoor, Arash Behboodi, and Holger Rauhut. Generalization Error Bounds for Iterative Recovery Algorithms Unfolded as Neural Networks. *arXiv:2112.04364*, January 2022.

P. Sprechmann, A. M. Bronstein, and G. Sapiro. Learning Efficient Sparse and Low Rank Models. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 37(9):1821–1833, September 2015. ISSN 0162-8828.

Michael E. Tipping. Sparse Bayesian learning and the relevance vector machine. *Journal of Machine Learning Research*, 1(Jun):211–244, 2001.

Kailun Wu, Yiwen Guo, Ziang Li, and Changshui Zhang. Sparse coding with gated learned ISTA. In *International Conference on Learning Representations*, 2020.

Yan Yang, Jian Sun, Huibin Li, and Zongben Xu. Deep ADMM-Net for Compressive Sensing MRI. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 29. Curran Associates, Inc., 2016.

Tianwei Yin, Zihui Wu, He Sun, Adrian V Dalca, Yisong Yue, and Katherine L Bouman. End-to-end sequential sampling and reconstruction for MRI. *arXiv preprint arXiv:2105.06460*, 2021.

Jian Zhang and Bernard Ghanem. ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 1828–1837, 2018.

Mingyuan Zhou, Haojun Chen, Lu Ren, Guillermo Sapiro, Lawrence Carin, and John Paisley. Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations. In *Advances in Neural Information Processing Systems*, volume 22. Curran Associates, Inc., 2009.

Mingyuan Zhou, Haojun Chen, John Paisley, Lu Ren, Lingbo Li, Zhengming Xing, David Dunson, Guillermo Sapiro, and Lawrence Carin. Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images. *IEEE Transactions on Image Processing*, 21(1):130–144, January 2012.
|
453 |
+
|
454 |
+
Zhou Zhou, Kaihui Liu, and Jun Fang. Bayesian compressive sensing using normal product priors. *IEEE*
|
455 |
+
Signal Processing Letters, 22(5):583–587, 2014.
|
456 |
+
|
457 |
+
## A Appendix

## B Theoretical Analysis

In this section, we discuss the theoretical motivations for the A-DLISTA design choices. The design is motivated by the convergence analysis of the LISTA method. We start by recalling a result from Chen et al. (2018), upon which our analysis relies. The authors of Chen et al. (2018) consider the inverse problem y = Ax∗, with x∗ the ground-truth sparse vector, and use the model with each layer given by:

$$\mathbf{x}_{t}=\eta_{\theta_{t}}\left(\mathbf{x}_{t-1}+\mathbf{W}_{t}^{\top}(\mathbf{y}-\mathbf{A}\mathbf{x}_{t-1})\right),\tag{16}$$

where (Wt, θt) are learnable parameters.
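For readers who prefer code, the following is a minimal NumPy sketch of the layer update in equation 16; the function and variable names are ours, not from Chen et al. (2018), and ηθ is taken to be the usual soft-thresholding operator.

```python
import numpy as np

def soft_threshold(x, theta):
    # eta_theta(x): element-wise soft-thresholding with threshold theta >= 0.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lista_layer(x_prev, y, A, W_t, theta_t):
    # One unfolded iteration: x_t = eta_theta_t(x_{t-1} + W_t^T (y - A x_{t-1})).
    residual = y - A @ x_prev
    return soft_threshold(x_prev + W_t.T @ residual, theta_t)
```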
The following result from Chen et al. (2018, Theorem 2) is adapted to the noiseless setting.

Theorem B.1. *Suppose that the iterations of LISTA are given by equation 16, and assume* ∥x∗∥∞ ≤ B *and* ∥x∗∥0 ≤ s*. There exists a sequence of parameters* {Wt, θt} *such that*

$$\|\mathbf{x}_{t}-\mathbf{x}_{*}\|_{2}\leq sB\exp(-ct),\quad\forall t=1,2,\ldots,$$

*for a constant* c > 0 *that depends only on the sensing matrix* A *and the sparsity level* s.

It is important to note that the above convergence result only assures the *existence* of parameters that are good for convergence; it does not guarantee that training would necessarily find them. The latter result is in general difficult to obtain.
The proof in Chen et al. (2018) has two main steps:

1. No false positives: the thresholds are chosen such that the entries outside the support of x∗ remain zero. The choice of threshold depends, among other factors, on the coherence of Wt and A. We provide more details below.

2. Error bound for x∗: assuming a proper choice of thresholds, the authors derive bounds on the recovery error.

We focus on adapting these steps to our setup. Note that, to ensure there are no false positives, it is common in the classical ISTA literature to start from a large threshold, so that the soft-thresholding function aggressively maps many entries to zero, and then gradually reduce the threshold value as the iterations progress.
## B.1 Analysis With Known Ground-Truth Dictionary

Let us consider the extension of Theorem B.1 to our setup:

$$\mathbf{x}_{t}=\eta_{\theta_{t}}\left(\mathbf{x}_{t-1}+\gamma_{t}(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t})^{\top}(\mathbf{y}^{k}-\mathbf{\Phi}^{k}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right).\tag{17}$$

Note that in our case the weight Wt is replaced with γt(ΦkΨt), with learnable Ψt and γt. Moreover, the matrix A is replaced by ΦkΨt, and the forward model is given by y^k = ΦkΨox∗. The sensing matrix Φk can change across samples, hence the dependence on the sample index k.

If the learned dictionary Ψt is equal to Ψo, the layers of our model reduce to classical iterative soft-thresholding iterations with learnable step size γt and threshold θt.

There are many convergence results in the literature; see, for example, Daubechies et al. (2004). We can use the convergence analysis of iterative soft-thresholding algorithms based on the mutual coherence, similar to Chen et al. (2018); Behrens et al. (2021). As a reminder, the mutual coherence of a matrix M is defined as:

$$\mu(\mathbf{M}):=\operatorname*{max}_{1\leq i\neq j\leq N}\left|\mathbf{M}_{i}^{\top}\mathbf{M}_{j}\right|,\tag{18}$$

where Mi is the i-th column of M.

The convergence result requires that the mutual coherence µ(ΦkΨo) be sufficiently small, for example of order 1/(2s) with s the sparsity level, and that the matrix ΦkΨo be column normalized, i.e., ∥(ΦkΨo)i∥2 = 1. Then the step size can be chosen equal to one, i.e., γt = 1. The thresholds θt are chosen to avoid false positives using a schedule similar to the one mentioned above, that is, starting with a large threshold θ0 and then gradually decreasing it to a certain limit. We do not repeat the derivations; interested readers can refer to Daubechies et al. (2004); Behrens et al. (2021); Chen et al. (2018) and references therein.
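As an illustration (not part of the original analysis), the two ingredients used above, column normalization of ΦΨo and the mutual coherence of equation 18, can be computed as follows; the function names are ours.

```python
import numpy as np

def normalize_columns(A):
    # Scale each column of A to unit l2 norm, as assumed for Phi @ Psi_o above.
    return A / np.linalg.norm(A, axis=0, keepdims=True)

def mutual_coherence(A):
    # mu(A) = max_{i != j} |<A_i, A_j>| for a column-normalized matrix A (equation 18).
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

# Example: coherence of a random sensing matrix times a random dictionary.
rng = np.random.default_rng(0)
Phi, Psi_o = rng.standard_normal((50, 128)), rng.standard_normal((128, 128))
print(mutual_coherence(normalize_columns(Phi @ Psi_o)))
```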
Remark B.2. When the dictionary Ψo is known, we can adapt the algorithm to the varying sensing matrix Φk by first normalizing the columns of ΦkΨo. What is important to note is that the threshold choice is a function of the mutual coherence of the sensing matrix. Hence, with each new sensing matrix, the thresholds should be adapted according to the mutual coherence value. This observation partially justifies the choice of thresholds as a function of the dictionary and the sensing matrix, and hence the augmentation network.
## B.2 Analysis With Unknown Dictionary

We now move to the scenario where the dictionary is itself learned and not known in advance.

Consider layer t of DLISTA with the sensing matrix Φk, and define the following parameters:

$$\tilde{\mu}(t,\mathbf{\Phi}^{k}):=\max_{1\leq i\neq j\leq N}\left|((\mathbf{\Phi}^{k}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t})_{j}\right|\tag{19}$$
$$\tilde{\mu}_{2}(t,\mathbf{\Phi}^{k}):=\max_{1\leq i,j\leq N}\left|((\mathbf{\Phi}^{k}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}^{k}(\mathbf{\Psi}_{t}-\mathbf{\Psi}_{o}))_{j}\right|\tag{20}$$
$$\delta(\gamma,t,\mathbf{\Phi}^{k}):=\max_{i}\left|1-\gamma\left\|(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t})_{i}\right\|_{2}^{2}\right|\tag{21}$$
Some comments are in order:

- The term µ˜ is the **mutual coherence** of the matrix ΦkΨt.

- The term µ˜2 is closely connected to the **generalized mutual coherence**; however, unlike the generalized mutual coherence, it includes the diagonal inner product for i = j. It captures the effect of the mismatch with the ground-truth dictionary.

- Finally, the term δ(·) is reminiscent of the restricted isometry property (RIP) constant (Foucart & Rauhut, 2013), a key condition for many recovery guarantees in compressed sensing. When the columns of the matrix ΦkΨt are normalized, the choice γ = 1 yields δ(γ, t, Φk) = 0.

For the rest of the paper, for simplicity, we only keep the dependence on γ in the notation and drop the dependence of µ˜, µ˜2 and δ on t, Φk and Ψt.
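A small, illustrative NumPy sketch of how the three quantities in equations 19–21 could be evaluated for a given layer; the names are ours, and the computation assumes access to the ground-truth dictionary Ψo, which is only available in this analysis, not at inference time.

```python
import numpy as np

def layer_constants(Phi, Psi_t, Psi_o, gamma):
    # Illustrative evaluation of equations 19-21 for one layer.
    A_t = Phi @ Psi_t                                 # Phi^k Psi_t
    G = np.abs(A_t.T @ A_t)
    mu_tilde = (G - np.diag(np.diag(G))).max()        # eq. 19: max over i != j
    mismatch = Phi @ (Psi_t - Psi_o)                  # Phi^k (Psi_t - Psi_o)
    mu_tilde_2 = np.abs(A_t.T @ mismatch).max()       # eq. 20: includes i = j
    delta = np.abs(1.0 - gamma * np.sum(A_t**2, axis=0)).max()  # eq. 21
    return mu_tilde, mu_tilde_2, delta
```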
Proposition B.3. *Suppose that* y^k = ΦkΨox∗ *with support* supp(x∗) = S*. For DLISTA iterations given as*

$$\mathbf{x}_{t}=\eta_{\theta_{t}}\left(\mathbf{x}_{t-1}+\gamma_{t}(\mathbf{\Phi}^{k}\mathbf{\Psi}_{t})^{\top}(\mathbf{y}^{k}-\mathbf{\Phi}^{k}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right),\tag{22}$$

*we have:*

1. *If for all* t*, the triplets* (θt, γt, Ψt) *satisfy*

$$\gamma_{t}\left(\tilde{\mu}\left\|\mathbf{x}_{*}-\mathbf{x}_{t-1}\right\|_{1}+\tilde{\mu}_{2}\left\|\mathbf{x}_{*}\right\|_{1}\right)\leq\theta_{t},\tag{23}$$

*then there is no false positive in any iteration. In other words, for all* t*, we have* supp(xt) ⊆ supp(x∗).

2. *Assuming that the conditions of the previous step hold, we get the following bound on the error:*

$$\|\mathbf{x}_{t}-\mathbf{x}_{*}\|_{1}\leq\left(\delta(\gamma_{t})+\gamma_{t}\tilde{\mu}(|S|-1)\right)\|\mathbf{x}_{t-1}-\mathbf{x}_{*}\|_{1}+\gamma_{t}\tilde{\mu}_{2}|S|\,\|\mathbf{x}_{*}\|_{1}+|S|\theta_{t}.$$
## B.2.1 Guidelines From The Proposition

We remark on some of the guidelines we can draw from the above result.

- **Thresholds.** Similar to the discussion in the previous sections, there exist thresholds such that there is no false positive at any layer. The choice of θt is a function of γt and, through the coherence terms, of Φk and Ψt. Since Φk changes for each sample k, we learn a neural network that yields this parameter as a function of Φk and Ψt.

- **Step size.** The step size γt can be chosen to control the error decay. Ideally, we would like the term (δ(γt) + γtµ˜(|S| − 1)) to be strictly smaller than one. In particular, γt directly impacts δ(γt), which is also a function of Φk and Ψt. We can therefore consider γt as a function of Φk and Ψt, which hints at the augmentation neural network we introduced for producing γt from those parameters (see the illustrative sketch after this list).
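The following sketch is purely illustrative: it selects (γt, θt) according to the guidelines above using oracle quantities (x∗ and Ψo) that the augmentation network has to approximate from Φk and Ψt alone; it is not the trained augmentation network.

```python
import numpy as np

def oracle_layer_parameters(Phi, Psi_t, Psi_o, x_star, x_prev):
    # Illustrative only: choose (gamma_t, theta_t) following the guidelines above,
    # using oracle quantities that are unavailable at inference time.
    A_t = Phi @ Psi_t
    col_norms_sq = np.sum(A_t**2, axis=0)
    gamma_t = 1.0 / col_norms_sq.max()        # one simple choice keeping delta(gamma_t) < 1 (eq. 21)
    G = np.abs(A_t.T @ A_t)
    mu = (G - np.diag(np.diag(G))).max()      # tilde{mu}, eq. 19
    mu2 = np.abs(A_t.T @ (Phi @ (Psi_t - Psi_o))).max()  # tilde{mu}_2, eq. 20
    # Smallest threshold satisfying the no-false-positive condition of eq. 23.
    theta_t = gamma_t * (mu * np.linalg.norm(x_star - x_prev, 1)
                         + mu2 * np.linalg.norm(x_star, 1))
    return gamma_t, theta_t
```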
Remarks on Convergence. One might wonder whether convergence is possible given the bound on the error. We sketch a scenario where this can happen. First, note that once we have chosen γt, and given Φk and Ψt, we can select θt using condition 23. Also, if the network gradually learns the ground-truth dictionary at later stages, the term µ˜2 vanishes. We need to choose γt carefully such that the term (δ(γt) + γtµ˜(|S| − 1)) is smaller than one. Similar to the ISTA analysis, we would need to assume bounds on the mutual coherence µ˜ and on the column norms of ΦkΨo. Under such standard assumptions, sketched above as well, the error gradually decreases per iteration, and we can reuse the convergence results of ISTA. We would like to emphasize that this is a heuristic argument, and there is no guarantee that training yields a model whose parameters follow these guidelines. Nevertheless, we show experimentally that the proposed methods provide the promised improvements.
## B.3 Proof Of Proposition B.3

In what follows, we provide the derivations for Proposition B.3.

Convergence proofs of ISTA-type models generally involve two steps: first, one investigates how the support is found and locked in; second, how the error shrinks at each step. We focus on these two steps, which matter most for our architecture design. Our analysis is similar in nature to Chen et al. (2018); Aberdam et al. (2021); however, it differs from Aberdam et al. (2021) in considering unknown dictionaries and from Chen et al. (2018) in both the considered architecture and the varying sensing matrix. In what follows, we consider the noiseless setting; however, the results can be extended to noisy setups by adding terms containing the noise norm, similar to Chen et al. (2018). We make the following assumptions:

1. There is a ground-truth (unknown) dictionary Ψo such that s∗ = Ψox∗.

2. As a consequence, y^k = ΦkΨox∗.

3. The vector x∗ is sparse with its support contained in S. In other words, xi,∗ = 0 for i ∉ S.

To simplify the notation, we drop the index k, which indicates the varying sensing matrix, from Φk and y^k, and use Φ and y for the rest. We break the proof into two lemmas, each proving one part of Proposition B.3.
## B.3.1 Proof - Step 1: No False Positive Condition

The following lemma ensures that we do not have false positives in support recovery after each iteration of our model. In other words, the model keeps updating only the entries in the support and keeps the entries outside the support at zero.

Lemma B.4. *Suppose that the support of* x∗ *is given as* supp(x∗) = S*. Consider iterations given by*

$$\mathbf{x}_{t}=\eta_{\theta_{t}}\left(\mathbf{x}_{t-1}+\gamma_{t}(\mathbf{\Phi}\mathbf{\Psi}_{t})^{\top}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right),$$

*with* x0 = 0*. If for all* t = 1, 2, . . . *we have*

$$\gamma_{t}\left(\tilde{\mu}\left\|\mathbf{x}_{*}-\mathbf{x}_{t-1}\right\|_{1}+\tilde{\mu}_{2}\left\|\mathbf{x}_{*}\right\|_{1}\right)\leq\theta_{t},\tag{24}$$

*then there will be no false positive, i.e.,* xt,i = 0 *for all* i ∉ S *and all* t.
Proof. We prove this by induction. Since x0 = 0, the induction base is trivial. Suppose that the support of xt−1 is already included in that of x∗, namely supp(xt−1) ⊆ supp(x∗) = S. Consider i ∈ S^c. We have

$$x_{t,i}=\eta_{\theta_{t}}\left(\gamma_{t}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right).\tag{25}$$

To avoid false positives, we need to guarantee that for i ∉ S:

$$\eta_{\theta_{t}}\left(\gamma_{t}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right)=0,\quad\text{i.e.,}\quad\left|\gamma_{t}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right|\leq\theta_{t},$$

which means that the soft-thresholding function has zero output for these entries. First note that:

$$\left|((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}\mathbf{\Phi}(\mathbf{\Psi}_{o}\mathbf{x}_{*}-\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right|\leq\left|((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}\mathbf{\Phi}\mathbf{\Psi}_{t}(\mathbf{x}_{*}-\mathbf{x}_{t-1})\right|+\left|((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}\mathbf{\Phi}(\mathbf{\Psi}_{o}-\mathbf{\Psi}_{t})\mathbf{x}_{*}\right|\tag{26}$$
$$=\left|\sum_{j\in S}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}\mathbf{\Psi}_{t})_{j}(x_{*,j}-x_{t-1,j})\right|+\left|((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}\mathbf{\Phi}(\mathbf{\Psi}_{o}-\mathbf{\Psi}_{t})\mathbf{x}_{*}\right|.\tag{27}$$

We can bound the first term by:

$$\left|\sum_{j\in S}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}\mathbf{\Psi}_{t})_{j}(x_{*,j}-x_{t-1,j})\right|\leq\sum_{j\in S}\left|((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}\mathbf{\Psi}_{t})_{j}\right|\left|x_{*,j}-x_{t-1,j}\right|\tag{28}$$
$$\leq\tilde{\mu}\left\|\mathbf{x}_{*}-\mathbf{x}_{t-1}\right\|_{1},\tag{29}$$

where we use the definition of mutual coherence for the upper bound. The last term is bounded by

$$\left|((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}\mathbf{\Phi}(\mathbf{\Psi}_{o}\mathbf{x}_{*}-\mathbf{\Psi}_{t}\mathbf{x}_{*})\right|=\left|\sum_{j\in S}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}(\mathbf{\Psi}_{o}-\mathbf{\Psi}_{t}))_{j}x_{j,*}\right|\leq\sum_{j\in S}\left|((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}(\mathbf{\Psi}_{o}-\mathbf{\Psi}_{t}))_{j}\right|\left|x_{j,*}\right|\leq\tilde{\mu}_{2}\left\|\mathbf{x}_{*}\right\|_{1}.\tag{30}$$

Therefore, we get

$$\left|\gamma_{t}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})\right|\leq\gamma_{t}\left(\tilde{\mu}\left\|\mathbf{x}_{*}-\mathbf{x}_{t-1}\right\|_{1}+\tilde{\mu}_{2}\left\|\mathbf{x}_{*}\right\|_{1}\right).$$

The following choice thus guarantees that there is no false positive:

$$\gamma_{t}\left(\tilde{\mu}\left\|\mathbf{x}_{*}-\mathbf{x}_{t-1}\right\|_{1}+\tilde{\mu}_{2}\left\|\mathbf{x}_{*}\right\|_{1}\right)\leq\theta_{t}.\qquad\square\tag{31}$$
## B.3.2 Proof - Step 2: Controlling The Recovery Error

The previous lemma provided the conditions under which there is no false positive. We now see under which conditions the model reduces the error inside the support S.

Lemma B.5. *Suppose that the threshold parameter* θt *has been chosen such that there is no false positive after each iteration. Then we have:*

$$\|\mathbf{x}_{t}-\mathbf{x}_{*}\|_{1}\leq\left(\delta(\gamma_{t})+\gamma_{t}\tilde{\mu}(|S|-1)\right)\|\mathbf{x}_{t-1}-\mathbf{x}_{*}\|_{1}+\gamma_{t}\tilde{\mu}_{2}|S|\,\|\mathbf{x}_{*}\|_{1}+|S|\theta_{t}.$$
Proof. For i ∈ S, we have:

$$|x_{t,i}-x_{*,i}|\leq\left|x_{t-1,i}+\gamma_{t}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})-x_{*,i}\right|+\theta_{t}.\tag{32}$$

At iteration t, for i ∈ S, we can separate the dictionary mismatch from the rest of the error as follows:

$$x_{t-1,i}+\gamma_{t}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})=x_{t-1,i}+\gamma_{t}\sum_{j\in S}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}\mathbf{\Psi}_{t})_{j}(x_{*,j}-x_{t-1,j})+\gamma_{t}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}\mathbf{\Phi}(\mathbf{\Psi}_{o}\mathbf{x}_{*}-\mathbf{\Psi}_{t}\mathbf{x}_{*}).$$

We can decompose the first part further as:

$$x_{t-1,i}+\gamma_{t}\sum_{j\in S}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}\mathbf{\Psi}_{t})_{j}(x_{*,j}-x_{t-1,j})=\left(1-\gamma_{t}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}\mathbf{\Psi}_{t})_{i}\right)x_{t-1,i}+\gamma_{t}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}\mathbf{\Psi}_{t})_{i}\,x_{*,i}+\gamma_{t}\sum_{j\in S,\,j\neq i}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}\mathbf{\Psi}_{t})_{j}(x_{*,j}-x_{t-1,j}).$$

Using the triangle inequality for the previous decomposition, we get:

$$\left|x_{t-1,i}+\gamma_{t}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{y}-\mathbf{\Phi}\mathbf{\Psi}_{t}\mathbf{x}_{t-1})-x_{*,i}\right|\leq\left|\left(1-\gamma_{t}((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}\mathbf{\Psi}_{t})_{i}\right)(x_{t-1,i}-x_{*,i})\right|+\gamma_{t}\sum_{j\in S,\,j\neq i}\left|((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}(\mathbf{\Phi}\mathbf{\Psi}_{t})_{j}\right|\left|x_{*,j}-x_{t-1,j}\right|+\gamma_{t}\left|((\mathbf{\Phi}\mathbf{\Psi}_{t})_{i})^{\top}\mathbf{\Phi}(\mathbf{\Psi}_{o}\mathbf{x}_{*}-\mathbf{\Psi}_{t}\mathbf{x}_{*})\right|\leq\delta(\gamma_{t})\left|x_{t-1,i}-x_{*,i}\right|+\gamma_{t}\sum_{j\in S,\,j\neq i}\tilde{\mu}\left|x_{*,j}-x_{t-1,j}\right|+\gamma_{t}\tilde{\mu}_{2}\left\|\mathbf{x}_{*}\right\|_{1}.$$

It suffices to sum up the errors and combine the previous inequalities to get:

$$\left\|\mathbf{x}_{S,t}-\mathbf{x}_{*}\right\|_{1}=\sum_{i\in S}\left|x_{t,i}-x_{*,i}\right|\leq\left(\delta(\gamma_{t})+\gamma_{t}\tilde{\mu}(|S|-1)\right)\left\|\mathbf{x}_{S,t-1}-\mathbf{x}_{*}\right\|_{1}+\gamma_{t}\tilde{\mu}_{2}|S|\left\|\mathbf{x}_{*}\right\|_{1}+|S|\theta_{t}.$$

Since we assumed there is no false positive, we get the final result:

$$\|\mathbf{x}_{t}-\mathbf{x}_{*}\|_{1}=\sum_{i\in S}|x_{t,i}-x_{*,i}|\leq\left(\delta(\gamma_{t})+\gamma_{t}\tilde{\mu}(|S|-1)\right)\|\mathbf{x}_{t-1}-\mathbf{x}_{*}\|_{1}+\gamma_{t}\tilde{\mu}_{2}|S|\,\|\mathbf{x}_{*}\|_{1}+|S|\theta_{t}.\qquad\square$$
## C Implementation Details

In this section, we report details concerning the architectures of A-DLISTA and VLISTA.

## C.1 A-DLISTA (Augmentation Network)

As previously stated in the main paper (subsection 4.1), A-DLISTA consists of two architectures: the DLISTA model (blue blocks in Figure 1), representing the unfolded version of the ISTA algorithm with a parametrized Ψ, and the augmentation (or adaptation) network (red blocks in Figure 1). At a given reconstruction layer t, the augmentation model takes the measurement matrix Φi and the dictionary Ψt as input and generates the parameters {γt, θt} for the current iteration. The architecture of the augmentation network is illustrated in Figure 5, which shows a feature extraction section and two output branches, one for each generated parameter. To ensure that the estimated {γt, θt} parameters are positive, each branch is equipped with a softplus function. As noted in the main paper, the weights of the augmentation model are shared across all A-DLISTA layers.
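As a rough illustration of this design (the exact layer sizes and topology of Figure 5 are not reproduced here; all names and dimensions below are placeholders), a PyTorch-style sketch of a convolutional feature extractor followed by two softplus heads could look as follows.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugmentationNet(nn.Module):
    # Illustrative sketch only: conv feature extractor over Phi @ Psi_t followed by
    # two softplus heads producing strictly positive (gamma_t, theta_t).
    def __init__(self, hidden=25):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 1, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(1, 1, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
            nn.Linear(64, hidden), nn.ReLU(),
        )
        self.gamma_head = nn.Linear(hidden, 1)
        self.theta_head = nn.Linear(hidden, 1)

    def forward(self, phi, psi_t):
        # Treat the product Phi @ Psi_t as a single-channel "image".
        x = (phi @ psi_t).unsqueeze(1)
        h = self.features(x)
        return F.softplus(self.gamma_head(h)), F.softplus(self.theta_head(h))
```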
![19_image_0.png](19_image_0.png)

Figure 5: Augmentation model's architecture for A-DLISTA.

![20_image_0.png](20_image_0.png)

![20_image_1.png](20_image_1.png)

![20_image_2.png](20_image_2.png)

Figure 6: Left: prior network architecture. Right: posterior network architecture. For the posterior model, we show the output shape from each of the three input heads in the figure. Such a structure is necessary since the posterior model accepts three quantities as input: the observations, the sensing matrix, and the reconstruction from the previous layer. Different shapes characterize these quantities. The letter "B" indicates the batch size.
## C.2 VLISTA

As described in subsection 4.2 of the main paper, VLISTA comprises three different components: the likelihood, the prior, and the posterior models.

## C.2.1 VLISTA - Likelihood Model

The likelihood model (subsection 4.2) represents a Gaussian distribution whose mean is parametrized using the A-DLISTA model. There is, however, a fundamental difference between the likelihood model and the A-DLISTA architecture presented in subsection 4.1. Unlike the latter, the likelihood model of VLISTA does not learn the dictionary through backpropagation. Instead, it uses the dictionary sampled from the posterior distribution.

## C.2.2 VLISTA - Posterior & Prior Models

We report in Figure 6 the prior (left image) and the posterior (right image) architectures. We implement both models using an encoder-decoder scheme based on convolutional layers. The prior network comprises two convolutional layers followed by two separate branches dedicated to generating the mean and variance of the Gaussian distribution of subsection 4.2. We use the dictionary sampled at the previous iteration as input to the prior. In contrast to the prior, the posterior network accepts three different quantities as input: the sensing matrix, the observations, and the reconstructed sparse vector from the previous iteration. To process the three inputs together, the posterior has three separate "input" layers followed by an aggregation step. Subsequently, two branches are used to generate the mean and the standard deviation of the Gaussian distribution over the dictionary of subsection 4.2.
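The convolutional encoder-decoders themselves are not reproduced here; the sketch below only illustrates, under our own naming, the two operations described above: drawing a reparametrized dictionary sample from a diagonal Gaussian and evaluating the KL term between the posterior and the prior.

```python
import torch

def sample_dictionary(mu, logvar):
    # Reparameterized sample of the dictionary from a diagonal Gaussian.
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) ), used between
    # the posterior q(Psi_t | x_{t-1}, y, Phi) and the prior p(Psi_t | Psi_{t-1}).
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0
    )
```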
We offer the reader a unified overview of our variational model in Figure 7.

![21_image_0.png](21_image_0.png)

Figure 7: Schematic view of the VLISTA iterations.
## D Training Details

This section provides details regarding the training of the A-DLISTA and VLISTA models. Using the Adam optimizer, we train the reconstruction and augmentation models of A-DLISTA jointly. We set the initial learning rate to 1e−2 and 1e−3 for the reconstruction and augmentation network, respectively, and we drop its value by a factor of 10 every time the loss stops improving. Additionally, we set the weight decay to 5e−4 and the batch size to 128. We use the Mean Squared Error (MSE) as the objective function for all datasets. We train all the components of VLISTA with the Adam optimizer as well. We set the learning rate to 1e−3 and drop its value by a factor of 10 every time the loss stops improving. Regarding the objective function, we maximize the ELBO and set the weight of the KL divergence to 1e−3. We report in Equation 33 the ELBO derivation.
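For concreteness, a hypothetical PyTorch setup matching the hyperparameters listed above; the two sub-modules are placeholders standing in for the actual reconstruction and augmentation networks.

```python
import torch
import torch.nn as nn

# Placeholders for the unfolded reconstruction model and the augmentation network.
reconstruction = nn.Linear(500, 1024)
augmentation = nn.Linear(500, 2)

optimizer = torch.optim.Adam(
    [
        {"params": reconstruction.parameters(), "lr": 1e-2},
        {"params": augmentation.parameters(), "lr": 1e-3},
    ],
    weight_decay=5e-4,
)
# Drop the learning rate by a factor of 10 whenever the monitored loss plateaus.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1, patience=10)
```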
$$\begin{aligned}\log p(\mathbf{x}_{1:T}=\mathbf{x}^{i}_{gt}\,|\,\mathbf{y}^{i},\mathbf{\Phi}^{i})&=\log\int p(\mathbf{x}_{1:T}=\mathbf{x}^{i}_{gt}\,|\,\mathbf{\Psi}_{1:T},\mathbf{y}^{i},\mathbf{\Phi}^{i})\,p(\mathbf{\Psi}_{1:T})\,d\mathbf{\Psi}_{1:T}\\ &=\log\int p(\mathbf{x}_{1:T}=\mathbf{x}^{i}_{gt}\,|\,\mathbf{\Psi}_{1:T},\mathbf{y}^{i},\mathbf{\Phi}^{i})\,p(\mathbf{\Psi}_{1:T})\,\frac{q(\mathbf{\Psi}_{1:T}\,|\,\mathbf{x}_{1:T},\mathbf{y}^{i},\mathbf{\Phi}^{i})}{q(\mathbf{\Psi}_{1:T}\,|\,\mathbf{x}_{1:T},\mathbf{y}^{i},\mathbf{\Phi}^{i})}\,d\mathbf{\Psi}_{1:T}\\ &\geq\int q(\mathbf{\Psi}_{1:T}\,|\,\mathbf{x}_{1:T},\mathbf{y}^{i},\mathbf{\Phi}^{i})\log\frac{p(\mathbf{x}_{1:T}=\mathbf{x}^{i}_{gt}\,|\,\mathbf{\Psi}_{1:T},\mathbf{y}^{i},\mathbf{\Phi}^{i})\,p(\mathbf{\Psi}_{1:T})}{q(\mathbf{\Psi}_{1:T}\,|\,\mathbf{x}_{1:T},\mathbf{y}^{i},\mathbf{\Phi}^{i})}\,d\mathbf{\Psi}_{1:T}\\ &=\int q(\mathbf{\Psi}_{1:T}\,|\,\mathbf{x}_{1:T},\mathbf{y}^{i},\mathbf{\Phi}^{i})\log p(\mathbf{x}_{1:T}=\mathbf{x}^{i}_{gt}\,|\,\mathbf{\Psi}_{1:T},\mathbf{y}^{i},\mathbf{\Phi}^{i})\,d\mathbf{\Psi}_{1:T}+\int q(\mathbf{\Psi}_{1:T}\,|\,\mathbf{x}_{1:T},\mathbf{y}^{i},\mathbf{\Phi}^{i})\log\frac{p(\mathbf{\Psi}_{1:T})}{q(\mathbf{\Psi}_{1:T}\,|\,\mathbf{x}_{1:T},\mathbf{y}^{i},\mathbf{\Phi}^{i})}\,d\mathbf{\Psi}_{1:T}\\ &=\sum_{t=1}^{T}\mathbb{E}_{\mathbf{\Psi}_{1:t}\sim q(\mathbf{\Psi}_{1:t}\,|\,\mathbf{x}_{0:t-1},\mathbf{y}^{i},\mathbf{\Phi}^{i})}\left[\log p(\mathbf{x}_{t}=\mathbf{x}^{i}_{gt}\,|\,\mathbf{\Psi}_{1:t},\mathbf{y}^{i},\mathbf{\Phi}^{i})\right]\\ &\quad-\sum_{t=2}^{T}\mathbb{E}_{\mathbf{\Psi}_{1:t-1}\sim q(\mathbf{\Psi}_{1:t-1}\,|\,\mathbf{x}_{t-1},\mathbf{y}^{i},\mathbf{\Phi}^{i})}\left[D_{KL}\!\left(q(\mathbf{\Psi}_{t}\,|\,\mathbf{x}_{t-1},\mathbf{y}^{i},\mathbf{\Phi}^{i})\,\|\,p(\mathbf{\Psi}_{t}\,|\,\mathbf{\Psi}_{t-1})\right)\right]\\ &\quad-D_{KL}\!\left(q(\mathbf{\Psi}_{1}\,|\,\mathbf{x}_{0})\,\|\,p(\mathbf{\Psi}_{1})\right)\end{aligned}\tag{33}$$

Note that in Equation 33 we consider the same ground truth, x^i_gt, for each iteration t ∈ [1, T].
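A minimal sketch of how the training objective described above could be assembled, assuming the per-layer log-likelihood and KL terms of Equation 33 have already been computed; the function name and signature are ours.

```python
import torch

def vlista_loss(log_likelihood_terms, kl_terms, kl_weight=1e-3):
    # Negative ELBO used for training (Equation 33), with the KL contribution
    # down-weighted by 1e-3 as stated above. Both arguments are lists of scalar
    # tensors: one log-likelihood term and one KL term per layer.
    elbo = torch.stack(log_likelihood_terms).sum() - kl_weight * torch.stack(kl_terms).sum()
    return -elbo  # minimize the negative ELBO
```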
## E Computational Complexity

This section provides a complexity analysis of the models utilized in our research. Table 3 displays the number of trainable parameters and the average inference time for each model, while Table 4 reports the MACs count.

To better understand the quantities appearing in Table 4, we summarize their meaning in Table 5.

The average inference time was estimated by testing over 1000 batches of 32 data points each, using a GeForce RTX 2080 Ti.

Trainable Parameters and Average Inference Time. To compute the values in Table 3, we considered the architectures used in the main corpus of the paper, e.g. the same number of layers. From Table 3, it is worth noting that although ISTA appears to have the longest inference time, this can be attributed to the cost of computing the spectral norm of the matrix A = ΦΨ. Such an operation can consume up to 98% of the total inference time. Interestingly, neither NALISTA nor A-DLISTA requires the computation of the spectral norm, as they dynamically generate the *step size*; LISTA does not require it at all. NALISTA and A-DLISTA have comparable inference times due to the similarity of their operations, whereas LISTA is the fastest model, whilst VLISTA has a higher average inference time given the use of the posterior model and the sampling procedure. Interestingly, LISTA and A-DLISTA have a comparable number of trainable parameters, while NALISTA has significantly fewer. However, it is essential to emphasize that the number of trainable parameters depends on the problem setup, such as the number of measurements and atoms.

We use the same experimental setup described in the main paper, which includes 500 measurements, 1024 atoms, and three layers for each ML model. As outlined in the main paper, the likelihood model of VLISTA is similar in architecture to A-DLISTA, as reflected in the MACs count shown in Table 4. However, the likelihood model of VLISTA has a different number of trainable parameters compared to A-DLISTA. Such a difference is due to VLISTA sampling its dictionary from the posterior rather than training it as A-DLISTA does. Despite this difference, the time required for the likelihood model (shown in Table 3) is comparable to that of A-DLISTA. It is important to note that the inference time for the likelihood is reported "per iteration", so we must multiply it by the number of layers A-DLISTA uses to make a fair comparison.
| Model | Parameters (M) | Avg. Inference Time (ms), meas. = 10 | Avg. Inference Time (ms), meas. = 500 |
|---|---|---|---|
| ISTA | 0.00 | 54.0±0.6 (norm: 41.5±0.6) | (1.55±0.02)e3 (norm: (1.53±0.02)e3) |
| NALISTA¹ | 3.33e−1 | 5.8±0.2 | 7.0±0.3 |
| LISTA | 3.15 | 1.1±0.1 | 1.5±0.3 |
| A-DLISTA² | 3.15 (Aug. NN: 3.11e−3) | 8.2±0.3 | 9.1±0.5 |
| VLISTA | 3.13† | 19.7±0.4† | 21.3±0.4† |
| VLISTA - Prior Model | 1.10 | − | − |
| VLISTA - Posterior Model | 2.08 | 3.6±0.2‡ | 4.1±0.2‡ |
| VLISTA - Likelihood Model | 1.05 | 2.7±0.2‡ | 3.05±0.2‡ |

¹ LSTM hidden size equal to 256; ² each layer learns its own dictionary; † Full model at inference - Prior

Table 3: Number of trainable parameters (millions) and average inference time (milliseconds) for different models. Concerning the inference time, we report the average value with its error for the 10- and 500-measurement setups.

MACs count. Our attention now turns to the MACs count for the A-DLISTA augmentation network. As shown in Table 4, the count is upper bounded by HWK2 + BP. Note that the height and width of the input are halved after each convolutional layer, while the input and output channels are always one, and the kernel size equals three for each layer (see details in Figure 5). To obtain the upper bound for the MACs count, we set H = max_i(Hi) and W = max_i(Wi), where Hi = Hinput/2^i and Wi = Winput/2^i are the height and width at the output of the i-th convolutional layer, respectively. With that in mind, we can upper bound the MACs count for the convolutional part of the network by HWK2. The convolutional backbone is followed by two linear layers (see details in Figure 5). The first linear layer takes a vector of size B ∈ R^(Hinput/16 × Winput/16) as input and outputs a vector of length P = 25. Finally, this vector is fed into two heads, each generating a scalar. Therefore, the overall upper bound for the MACs count of the augmentation network is O(HWK2 + BP + P) = O(HWK2 + P(B + 1)) = O(HWK2 + BP), with the factor +1 dropped.

Similar reasoning applies to the prior and posterior models of VLISTA, where we estimate the MACs count by multiplying the MACs of the most expensive layers by the total number of layers of the same type.
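The small helper below, purely illustrative, evaluates the upper bound discussed above under the stated assumptions (kernel size 3, flattened feature size B = Hinput/16 × Winput/16, and P = 25).

```python
def augmentation_macs_upper_bound(h_input, w_input, kernel=3, linear_out=25):
    # Illustrative evaluation of the O(H*W*K^2 + B*P) bound discussed above.
    h, w = h_input // 2, w_input // 2          # H = max_i(H_i), W = max_i(W_i)
    b = (h_input // 16) * (w_input // 16)      # B: flattened feature size
    return h * w * kernel**2 + b * linear_out  # HWK^2 + BP
```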
Table 4: MACs count. Concerning VLISTA, we report the MACs count for each of its components: the Prior, the Posterior, and the Likelihood models.

| Model | MACs |
|---|---|
| ISTA | O(DM(2L + N) + D2M) |
| NALISTA | O(DM(2L + N) + [4h(d + h) + h2]†) |
| LISTA | O(LD(N + D) + LMN) |
| A-DLISTA | O(LDMN + [HWK2 + BP]†) |
| VLISTA - Prior Model | O(3L HprWprCiprCopr(K2pr + 2T2pr)) |
| VLISTA - Posterior Model | O(L(D + M + DCipoCopo + LipoS + 10HpoWpoTpo)) |
| VLISTA - Likelihood Model | O(LDMN + [HWK2 + BP]†) |

† Contribution from the augmentation network.

| Symbol | Description |
|---|---|
| M | Number of measurements |
| N | Dimensionality of the dictionary's atoms |
| D | Number of atoms |
| L | Number of layers |
| h; d | Hidden and input size of the LSTM |
| H; W | Height and width of the A-DLISTA augmentation network's input |
| K | Kernel size of the Conv layers of the A-DLISTA augmentation network |
| B; P | Input and output size of the linear layer of the A-DLISTA augmentation network |
| Cipo; Copo | Input and output channels of the "Φ-input" head of the posterior model |
| Lipo; S | Posterior model bottleneck input and output sizes |
| Hpo; Wpo | Height and width of the posterior model's transposed convolutions input |
| Tpo | Kernel size of the posterior model's transposed convolutions |
| Hpr; Wpr | Input and output sizes of the convolutional (and transposed conv.) layers of the prior model |
| Cipr; Copr | Input and output channels of the convolutional (and transposed conv.) layers of the prior model |
| Kpr; Tpr | Kernel size for convolutions and transposed convolutions of the prior model |

Table 5: Description of the quantities appearing in Table 4.

## F Additional Results

In this section we report additional experimental results. In subsection F.1 we report results for a fixed measurement setup, i.e. Φi → Φ, while in subsection F.2 we show reconstructed images for different classical baselines.
## F.1 Fixed Sensing Matrix

We provide in Table 6 and Table 7 results for a fixed measurement scenario, i.e. using a single sensing matrix Φ. Comparing these results to Table 1 and Table 2, we notice the following. To begin with, LISTA and A-DLISTA perform better compared to the setup in which we use a varying sensing matrix (see section 5). We should expect such behaviour given that we simplified the problem by fixing the Φ matrix. Additionally, as we mentioned in the main paper, ALISTA and NALISTA exhibit high performance (superior to the other models when 300 and 500 measurements are considered). Such a result is expected, given that these two models were designed for solving inverse problems in a fixed measurement scenario. Furthermore, the results in Table 6 and Table 7 support our hypothesis that the convergence issues we observe in the varying sensing matrix setup are likely related to the "inner" optimization that ALISTA and NALISTA require to evaluate the "W" matrix.

Table 6: MNIST SSIM (the higher the better) for a different number of measurements with a **fixed sensing matrix**, i.e., Φi → Φ. We highlight in bold the best performance. Note that whenever there is agreement within the error for the best performances, we highlight all of them.
| Model (SSIM ↑, ×e−1) | 1 meas. | 10 meas. | 100 meas. | 300 meas. | 500 meas. |
|---|---|---|---|---|---|
| LISTA | **1.34±0.02** | 3.12±0.02 | **5.98±0.01** | 6.74±0.01 | 6.96±0.01 |
| ALISTA | 0.84±0.01 | 0.94±0.01 | 1.70±0.01 | 5.71±0.01 | 6.65±0.01 |
| NALISTA | 0.91±0.01 | 1.12±0.01 | 2.46±0.01 | **7.03±0.01** | **8.22±0.02** |
| A-DLISTA (our) | 1.21±0.02 | **3.58±0.01** | 5.66±0.01 | 6.47±0.01 | 6.84±0.01 |

| Model (SSIM ↑, ×e−1) | 1 meas. | 10 meas. | 100 meas. | 300 meas. | 500 meas. |
|---|---|---|---|---|---|
| LISTA | 2.52±0.01 | **3.19±0.01** | **4.48±0.01** | **6.29±0.01** | 6.74±0.01 |
| ALISTA | 0.21±0.03 | 0.54±0.02 | 0.88±0.01 | 3.54±0.01 | 5.52±0.01 |
| NALISTA | 1.32±0.02 | 1.32±0.02 | 1.06±0.02 | 4.59±0.01 | **6.88±0.01** |
| A-DLISTA (our) | **2.91±0.02** | 3.07±0.01 | 4.26±0.01 | 5.89±0.01 | 6.56±0.01 |

Table 7: CIFAR10 SSIM (the higher the better) for a different number of measurements with a fixed sensing matrix, i.e., Φi → Φ. Note that whenever there is agreement within the error for the best performances, we highlight all of them.
## F.2 Classical Baselines

We report additional results concerning classical dictionary learning methods tested on the MNIST and CIFAR10 datasets. It is worth noting that classical baselines can reconstruct images with high quality if one assumes neither computational nor time constraints (although this would correspond to an unrealistic scenario for real-world applications). Therefore, while tuning hyperparameters, we consider numbers of iterations up to several thousand.

Figure 8 to Figure 13 showcase examples of reconstructed images for the different baselines.
![25_image_0.png](25_image_0.png)

Figure 8: Example of reconstructed MNIST images using the canonical basis. Top row: reconstructed images. Bottom row: ground truth images. To reconstruct the images we use 500 measurements, and the number of layers is optimized to obtain the best possible reconstruction.

![25_image_1.png](25_image_1.png)

Figure 9: Example of reconstructed MNIST images using the wavelet basis. Top row: reconstructed images. Bottom row: ground truth images. To reconstruct the images we use 500 measurements, and the number of layers is optimized to obtain the best possible reconstruction.

![26_image_0.png](26_image_0.png)

Figure 10: Example of reconstructed MNIST images using SPCA. Top row: reconstructed images. Bottom row: ground truth images. To reconstruct the images we use 500 measurements, and the number of layers is optimized to obtain the best possible reconstruction.

![26_image_1.png](26_image_1.png)

Figure 11: Example of reconstructed CIFAR10 images using the canonical basis. Top row: reconstructed images. Bottom row: ground truth images. To reconstruct the images we use 500 measurements, and the number of layers is optimized to obtain the best possible reconstruction.

![27_image_0.png](27_image_0.png)

Figure 12: Example of reconstructed CIFAR10 images using the wavelet basis. Top row: reconstructed images. Bottom row: ground truth images. To reconstruct the images we use 500 measurements, and the number of layers is optimized to obtain the best possible reconstruction.

![27_image_1.png](27_image_1.png)

Figure 13: Example of reconstructed CIFAR10 images using SPCA. Top row: reconstructed images. Bottom row: ground truth images. To reconstruct the images we use 500 measurements, and the number of layers is optimized to obtain the best possible reconstruction.
AQk0UsituG/AQk0UsituG_meta.json
ADDED

{
  "languages": null,
  "filetype": "pdf",
  "toc": [],
  "pages": 28,
  "ocr_stats": {
    "ocr_pages": 1,
    "ocr_failed": 0,
    "ocr_success": 1,
    "ocr_engine": "surya"
  },
  "block_stats": {
    "header_footer": 28,
    "code": 0,
    "table": 7,
    "equations": {
      "successful_ocr": 59,
      "unsuccessful_ocr": 5,
      "equations": 64
    }
  },
  "postprocess_stats": {
    "edit": {}
  }
}
|