Improved iterative shrinkage-thresholding for sparse signal recovery via Laplace mixture models
Abstract
In this paper, we propose a new method for support detection and estimation of sparse and approximately sparse signals from compressed measurements. Using a double Laplace mixture model as the parametric representation of the signal coefficients, the problem is formulated as a weighted ℓ_{1} minimization. Then, we introduce a new family of iterative shrinkage-thresholding algorithms based on double Laplace mixture models. They preserve the computational simplicity of classical ones and improve iterative estimation by incorporating soft support detection. In particular, at each iteration, by learning the components that are likely to be nonzero from the current MAP signal estimate, the shrinkage-thresholding step is adaptively tuned and optimized. Unlike other adaptive methods, we are able to prove, under suitable conditions, the convergence of the proposed methods to a local minimum of the weighted ℓ_{1} minimization. Moreover, we also provide an upper bound on the reconstruction error. Finally, we show through numerical experiments that the proposed methods outperform classical shrinkage-thresholding in terms of rate of convergence, accuracy, and sparsity-undersampling tradeoff.
Keywords
Compressed sensing; Sparse recovery; Gaussian mixture models; MAP estimation; Mixture models; Reweighted ℓ_{1} minimization
Abbreviations
2LMM: Two-component Laplace mixture model
AMP: Approximate message passing
BCS: Bayesian compressive sensing
BP: Basis pursuit
CS: Compressed sensing
DCT: Discrete cosine transform
IRL1: Iterative reweighting ℓ_{1} minimization
ISTA: Iterative shrinkage-thresholding algorithm
FISTA: Fast iterative shrinkage-thresholding algorithm
MAP: Maximum a posteriori
MCMC: Markov chain Monte Carlo
MSE: Mean square error
TSBCS: Tree-structured Bayesian compressive sensing
WBCS: Wavelet-based Bayesian compressive sensing
1 Introduction
In this paper, we consider the standard compressed sensing (CS) setting [1], where we are interested in recovering high-dimensional signals \(x^{\star }\in \mathbb {R}^{n}\) from few linear measurements y=Ax^{⋆}+η, where \(A\in \mathbb {R}^{m\times n}\), m≪n, and η is i.i.d. Gaussian noise. The problem is underdetermined and has infinitely many solutions. However, much interest has been focused on finding the sparsest solution, i.e., the one with the smallest number of nonzero components [2]. This involves the minimization of the ℓ_{0} pseudo-norm [3], which is NP-hard.
A practical alternative is to use ℓ_{1} regularization, leading to the basis pursuit (BP, [4]) problem in the absence of noise, or the least absolute shrinkage and selection operator (Lasso, [5]) in the presence of noise. Both can be efficiently solved by iterative shrinkage-thresholding algorithms (ISTA, [6, 7, 8]), which are first-order methods combined with a shrinkage-thresholding step. Due to their implementation simplicity and suitability for high-dimensional problems, a large effort has been spent to improve their speed of convergence [9, 10, 11, 12], asymptotic performance in the large system limit [13, 14], and ease of use [15].
In a Bayesian framework, ℓ_{1} minimization is equivalent to maximum a posteriori (MAP) estimation [16] under a Laplace prior on the signal coefficients, in the sense that both lead to the same optimization problem. Although the Laplace probability density function does not provide a relevant generative model for sparse or compressible signals [17], the non-differentiability at zero of the cost function leads the optimization to select a sparse solution, which explains the empirical success of ℓ_{1} regularization.
However, ℓ_{1} minimization alone does not fully exploit signal sparsity. In fact, in some cases, a support estimate [18] could be employed to reduce the number of measurements needed for good reconstruction via BP or Lasso, e.g., by combining support detection with weighted or truncated ℓ_{1} minimization [19]. The idea of combining support information and signal estimation has appeared in CS literature with several assumptions [20, 21, 22, 23, 24, 25, 26, 27]. For example, in [25], the authors employ as prior information an estimate T of the support of the signal and propose a truncated ℓ_{1} minimization problem. Another piece of literature [28] considers a weighted ℓ_{1} minimization with weights w_{i}=− logp_{i} where p_{i} is the probability that \(x^{\star }_{i} = 0\).
In this paper, we propose an iterative soft support detection and estimation method for CS. It is worth remarking that, in our setting, prior information on the support T or on the probabilities p_{i} is not available. The fundamental idea is to combine the good geometric properties of the ℓ_{1} cost function associated with the Laplace prior with a good generative model for sparse and compressible vectors [17]. For this purpose, we use a Laplace mixture model as the parametric representation of the prior distribution of the signal coefficients. Because of signal sparsity, each coefficient is modeled as drawn from one of only two distributions: a Laplace with small variance, with high probability, or a Laplace with large variance, with low probability. We show empirically that this model fits the distribution of the Haar wavelet coefficients of test images better than a single Laplace. Then, we cast the estimation problem as a weighted ℓ_{1} minimization method that incorporates the parametric representation of the signal.
We show that the proposed framework is able to improve a number of existing methods based on shrinkage-thresholding: by estimating, at each iteration, the distribution of the components that are likely to be nonzero from the current signal estimate (support detection), the shrinkage-thresholding step is tuned and optimized, thereby yielding better estimation. As opposed to other adaptive methods [10], we are able to prove, under suitable conditions, the convergence of the proposed tuned method. Moreover, we derive an upper bound on the reconstruction error. We apply this method to several algorithms, showing by numerical simulation that it improves recovery in terms of both speed of convergence and sparsity-undersampling tradeoff, while preserving implementation simplicity.
Compared to the literature on reconstruction methods that combine iterative support detection and weighted ℓ_{1} minimization, the identification of the support is not nested or incremental over time as in [29, 30, 31]. Moreover, the choice of the weights in the ℓ_{1} minimization is based on Bayesian rules and a probabilistic model rather than on greedy rules as in [19, 32]. This feature also marks the difference with respect to reweighted ℓ_{1}/ℓ_{2} minimization, where the weights are chosen with the aim of approximating the ℓ_{τ} norm with τ∈(0,1].
1.1 Outline
The paper is organized as follows. In Section 2, the basic CS theory and the classical methods based on ℓ_{1} minimization for sparse recovery are reviewed. The proposed parametric model for sparse or highly compressible signals is described in Section 3 and compared with the related literature. In Section 4, the estimation problem based on Laplace mixture models is introduced and recast as a weighted ℓ_{1} minimization problem. Then, in Section 5, the proposed approach is used to improve a number of existing methods based on shrinkagethresholding. Numerical experiments are presented in Section 6 and some concluding remarks (Section 7) complete the paper. The theoretical results are rigorously proved in Appendices 1, 2, 3, and 4.
1.2 Notation
Given a matrix A, A^{T} denotes its transpose.
2 Mathematical formulation
2.1 Sparse signal recovery from compressed measurements
The measurements are collected according to the linear model
$$ y=Ax^{\star}+\eta $$(1)
where \(y\in \mathbb {R}^{m}\) is the observation vector, \(A\in \mathbb {R}^{m\times n}\) is the measurement matrix, and η is additive noise. For example, in transform domain compressive signal reconstruction [1], A=ΦΨ, where \(\Psi \in \mathbb {R}^{n\times n}\) is the sparsifying basis (i.e., multiplying by Ψ corresponds to performing the inverse transform), x^{⋆} is the transform coefficient vector, which has k nonzero entries, and \(\Phi \in \mathbb {R}^{m\times n}\) is the sensing matrix, whose rows are incoherent with the columns of Ψ.
In the noise-free case, a standard approach is the basis pursuit problem
$$ \min_{x\in\mathbb{R}^{n}} \|x\|_{1} \quad \text{subject to}\quad Ax=y $$(2)
whereas, in the presence of noise, one typically solves the Lasso
$$ \min_{x\in\mathbb{R}^{n}} \frac{1}{2}\|Ax-y\|^{2}+\lambda\|x\|_{1} $$(3)
where λ is a positive regularization parameter.
Even when the vector is not exactly sparse, under compressibility assumptions of the signals to be recovered, the ℓ_{1} regularization provides estimates with a controlled error [33]. More formally, we recall the following definition [17].
Definition 1
(Compressible vectors) A vector \(x\in \mathbb {R}^{n}\) is compressible if, denoting by \(\varrho _{k}(x):=\inf _{\|z\|_{0}\leq k}\|z-x\|\) the best k-term approximation error, the relative best k-term approximation error satisfies \(\overline {\varrho }_{k}(x):={\varrho _{k}(x)}/{\|x\|}\ll 1\) for some k≪n.
If \(x\in \mathbb {R}^{n}\) is not exactly sparse but compressible, the support is intended as the set of its k most significant components, i.e., the support of its best k-term approximation. One drawback of classical ℓ_{1} minimization is that it fails to penalize the coefficients in different ways. In this paper, we propose a new family of methods that incorporate two tasks: iterative support detection and signal recovery.
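As an illustration of Definition 1, the relative best k-term approximation error is straightforward to compute numerically. The following sketch is our own (function name and test vector are illustrative, not from the paper); it evaluates the error by keeping the k largest-magnitude entries:

```python
import numpy as np

def rel_best_k_term_error(x, k):
    """Relative best k-term approximation error of Definition 1:
    varrho_k(x) = ||x - x_k||, where x_k is the best k-term approximation
    (the k largest-magnitude entries of x), normalized by ||x||."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(np.abs(x))[::-1]   # indices by decreasing magnitude
    tail = x[order[k:]]                   # entries discarded by the k-term approximation
    return np.linalg.norm(tail) / np.linalg.norm(x)

# A vector with rapidly decaying coefficients is compressible:
# two terms already capture almost all of its energy.
x = np.array([10.0, -5.0, 1.0, 0.05, -0.02, 0.01])
err = rel_best_k_term_error(x, k=2)
```

Here err ≈ 0.09 ≪ 1, so this x is compressible with k=2 in the sense of Definition 1.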
3 Discussion: learning sparsity models
3.1 A Bayesian view
Therefore, vectors generated from an i.i.d. Laplace distribution are not compressible, since we cannot have both κ and ε small at the same time.
In [35], Lasso is proved to provide a robust estimation that is invariant to the signal prior. In sharp contrast, the Bayesian Lasso is able to provide an estimator with minimum mean squared error by incorporating the signal model in the estimation problem [14, 36], but the assumption that the signal prior is known in advance is not reasonable in most practical cases. Hence, it becomes crucial to incorporate in the recovery procedure new tools for adaptively learning sparsity models. Other models have been proposed for compressible signals [37, 38, 39] using more accurate probability density functions than the double Laplace distribution. However, two issues generally appear when an accurate but complex signal prior is used: (1) it can be hard to estimate the model parameters, and (2) the optimal estimators may not have a simple closed-form solution and their computation may require a high computational cost [40]. In fact, although the double Laplace prior is not the most accurate model, it is an especially convenient assumption since the MAP estimator has a simple closed form [41].
Our goal is to use a compressible distribution as parametric representation of the signal coefficients, able to combine support detection and estimation, and to preserve the simplicity and advantages coming from the Laplace prior assumption.
3.2 Proposed approach: two-component Laplace mixture for support detection
This mixture model is completely described by three parameters: the sparsity ratio p≪1/2; α, which is expected to be small; and β>α, when the signal is sparse. It should be noticed that vectors generated from this distribution are typically compressible according to Definition 1.
Proposition 1
The proof is a consequence of Proposition 1 in [17] and is deferred to Appendix 1.
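To make this concrete, one can draw samples from the 2LMM and check compressibility empirically. The sketch below is our own, and the parameter values (p, α, β) are purely illustrative:

```python
import numpy as np

def sample_2lmm(n, p=0.05, alpha=0.01, beta=2.0, rng=None):
    """Draw n i.i.d. coefficients from the two-component Laplace mixture:
    with probability p from Laplace(0, beta)   (large scale, significant entries),
    with probability 1-p from Laplace(0, alpha) (small scale, near-zero entries)."""
    rng = np.random.default_rng(0) if rng is None else rng
    significant = rng.random(n) < p
    scales = np.where(significant, beta, alpha)
    return rng.laplace(loc=0.0, scale=scales)

x = sample_2lmm(10_000)
# Relative best k-term approximation error: with k moderately larger than
# the expected number of significant entries (p*n = 500), it is << 1.
k = 800
order = np.argsort(np.abs(x))[::-1]
rel_err = np.linalg.norm(x[order[k:]]) / np.linalg.norm(x)
```

With these values, rel_err is of the order of a few percent, consistent with the claim that vectors drawn from this distribution are typically compressible in the sense of Definition 1.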
The Kullback-Leibler divergence between the best-fitting probability model and the empirical probability density function, computed for the two models:

Image   Lena    MRI-head  House   Cameraman  Pattern  Barbara  Man     Couple  Plane
Lap     0.1266  0.2109    0.1593  0.4958     0.6756   0.1449   0.0941  0.1793  0.4044
2LMM    0.0688  0.0339    0.0688  0.0435     0.0818   0.0651   0.0322  0.0558  0.0906
4 Method: support detection and sparse signal estimation via the 2LMM
4.1 Estimation using the 2LMM generative model
For convenience, we consider p fixed as a guess of the degree of signal’s sparsity, whereas Θ=(α,β) will be unknown. The choice to keep p fixed does not entail a significant restriction to our analysis.
Proposition 2
and H(t)=−t log t−(1−t) log(1−t) is the natural entropy function, with t∈[0,1].
1. Introduces the parameter ε, a regularization term used to avoid singularities whenever one of the Laplace components collapses onto a specific data point; this is needed because we seek a sparse solution and therefore expect α≈0, as will become clear later;
2. Introduces the constraint π∈Σ_{n−K}, which enforces a sparse solution.
It should be noted that there is no closed-form solution to problems (10) and (12).
However, partial minimizations of V_{ε}=V(x,π,α,β,ε) with respect to just one of the variables have simple representations. More precisely, we have the following expressions (see Lemma 3 in Appendix 3).
Proposition 3
In the following section, we present several iterative algorithms to approximately solve these optimization problems.
5 Proposed iterative methods and main results
5.1 Iterative shrinkage/thresholding algorithms
The literature describes a large number of approaches to address the minimization of (2) and (3). Popular iterative methods belong to the class of iterative shrinkage-thresholding algorithms. These methods can be understood as a special proximal forward-backward iterative scheme [42] and are appealing as they have lower computational complexity per iteration and lower storage requirements than interior-point methods. In fact, these recursions are a modification of the gradient method for a least squares problem: the dominant computational effort lies in relatively cheap matrix-vector multiplications involving A and A^{⊤}, and the only difference is the application of a shrinkage/soft-thresholding operator, which promotes sparsity of the estimate at each iteration.
The simplest form, known as the iterative shrinkage-thresholding algorithm (ISTA, [6]), considers u^{(t)}=0 and \(\tau ^{(t)}=\tau <2/\|A\|_{2}^{2}\) for all \(t\in \mathbb {N}\). This algorithm is guaranteed to converge to a minimizer of the Lasso [6]. Moreover, as shown in [43], if A fulfills the so-called finite basis injectivity condition, the convergence is linear. However, the factor determining the speed within the class of linearly convergent algorithms depends on the local conditioning of the matrix A, meaning that ISTA can converge arbitrarily slowly, which is also often observed in practice.
In order to speed up ISTA, alternative algorithms have exploited preconditioning techniques or adaptivity, combining a decreasing thresholding strategy with an adaptive descent parameter. However, the lack of a model-based thresholding policy makes such algorithms very sensitive to the signal statistics, and accuracy is not always guaranteed. In [13], the thresholding and descent parameters are optimally tuned in terms of phase transitions, i.e., they maximize the number of nonzeros at which the algorithm can successfully operate. However, preconditioning can be very expensive, and there is no proof of convergence for adaptive methods.
In this section, we show how to adapt these numerical methods to solve the weighted minimization problem via 2LMM.
5.2 2LMM-tuned iterative shrinkage-thresholding
1. Let t:=0 and set an initial estimate K for the sparsity level, p=K/n, a small value α^{(0)}≈0 (e.g., α^{(0)}=0.1), the initial configuration π^{(0)}=1, and ε^{(0)}=1. Since π^{(0)}=1, β^{(0)} can be arbitrary, as it is not used in the first step of the algorithm.
2. Given the observed data y and the current parameters π_{i}, α, and β, a new estimate x^{(t+1)} of the signal is obtained by moving in a minimizing direction of the weighted Lasso
$$ F(x)= \frac{1}{2}\|Ax-y\|^{2}+\lambda\sum_{i=1}^{n}\omega_{i}^{(t+1)}|x_{i}| $$(18)
with \(\omega _{i}^{(t+1)}=\pi _{i}/\alpha +(1-\pi _{i})/\beta \); in other terms, x^{(t+1)} is such that F(x^{(t+1)})≤F(x^{(t)}).
3. The posterior distribution of the signal coefficients is evaluated and thresholded by keeping its n−K largest elements and setting the others to zero. It is worth remarking that this step differs from the E-step of a classical EM algorithm, as a thresholding operator σ_{n−K} is applied in order to promote sparsity in the probability vector π.
4. Given the probabilities, we use them to re-estimate the mixture parameters α^{(t)} and β^{(t)}.
5. Set t:=t+1 and iterate until the stopping criterion is satisfied, e.g., until the estimate stops changing: ∥x^{(t+1)}−x^{(t)}∥/∥x^{(t)}∥<tol for some tol>0.
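The steps above can be sketched in code. The following rendering is our own and is deliberately simplified: the E-step (step 3) and M-step (step 4) below are stand-ins for the exact posterior and parameter updates of Propositions 2 and 3, which are not reproduced in this excerpt, so this is an illustrative sketch rather than the paper's exact algorithm:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def two_lmm_ista(A, y, K, lam=1e-3, n_iter=3000, tol=1e-8):
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    pi = np.ones(n)              # step 1: pi^(0) = 1, alpha^(0) small
    alpha, beta = 0.1, 1.0
    for _ in range(n_iter):
        # Step 2: one shrinkage-thresholding step on the weighted Lasso (18)
        w = pi / alpha + (1.0 - pi) / beta
        x_new = soft_threshold(x - A.T @ (A @ x - y) / L, lam * w / L)
        # Step 3 (simplified E-step): mark the K largest-magnitude entries as
        # likely nonzero (pi_i = 0), the remaining n-K as likely zero (pi_i = 1).
        pi = np.ones(n)
        pi[np.argsort(np.abs(x_new))[::-1][:K]] = 0.0
        # Step 4 (simplified M-step): refit each Laplace scale as the mean
        # absolute value of its component (floored to avoid collapse).
        alpha = max(np.mean(np.abs(x_new[pi == 1.0])), 1e-3)
        beta = max(np.mean(np.abs(x_new[pi == 0.0])), alpha)
        # Step 5: stop when the estimate stabilizes
        if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x), 1e-12):
            x = x_new
            break
        x = x_new
    return x
```

The loop drives the weights of the detected off-support entries up and those of the detected support entries down, which is the mechanism that accelerates convergence relative to plain ISTA.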
5.3 Relation to prior literature
As already observed, Algorithm 1 belongs to the more general class of methods for weighted ℓ_{1} norm minimization [45, 46, 47] (see (18)). Common strategies for iterative reweighting ℓ_{1} minimization (IRL1, [45]) that have been explored in the literature recompute the weights at every iteration using the estimate at the previous iteration, \({\omega _{i}^{(t+1)}=\chi /(|x^{(t)}_{i}|+\epsilon)}\), where χ and ε are appropriate positive constants. In Algorithm 1, the weights \(\omega _{i}^{(t)}\) are chosen to jointly fit the signal prior and, consequently, depend on all components of the signal and not exclusively on the value \(x^{(t)}_{i}\). Our strategy is also related to threshold-ISD [19], which incorporates support detection in the weighted ℓ_{1} minimization and runs as fast as basis pursuit. Given a support estimate, the estimation is performed by solving a truncated basis pursuit problem. Also in [48], an iterative algorithm, called WSPGL1, is designed to solve a sequence of weighted Lasso problems using a support estimate derived from the data and updated at every iteration. Compared to threshold-ISD and WSPGL1, 2LMM-tuned iterative shrinkage-thresholding does not use binary weights and is more flexible. Moreover, in threshold-ISD, as in CoSaMP, the identification of the support is based on greedy rules and is not chosen to optimally fit the prior distribution of the signal.
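The contrast with IRL1 can be seen directly in the weight update. The snippet below is our own illustration (the χ and ε values are arbitrary): each IRL1 weight depends only on the magnitude of the corresponding coefficient, whereas the 2LMM weights ω_i = π_i/α + (1−π_i)/β couple all coefficients through the shared mixture parameters:

```python
import numpy as np

def irl1_weights(x_prev, chi=1.0, eps=1e-2):
    """IRL1 reweighting: omega_i = chi / (|x_i| + eps), a purely
    componentwise rule based on the previous iterate."""
    return chi / (np.abs(x_prev) + eps)

def two_lmm_weights(pi, alpha, beta):
    """2LMM reweighting: omega_i = pi_i/alpha + (1 - pi_i)/beta, where pi,
    alpha, and beta are fitted jointly from all components of the estimate."""
    return pi / alpha + (1.0 - pi) / beta

x_prev = np.array([2.0, 0.0, -0.5, 0.01])
w_irl1 = irl1_weights(x_prev)
# Large coefficients receive small weights (lightly penalized),
# near-zero coefficients receive large weights (heavily penalized).
```

In both schemes large weights push coefficients toward zero, but in the 2LMM rule a change in the fitted scales α and β rescales every weight at once, which is how the soft support estimate steers the thresholding step.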
A prior estimation based on EM was incorporated within the AMP framework also in [49], where a Gaussian mixture model is used as the parametric representation of the signal. The key difference in our approach is that model fitting is used to estimate the support and to adaptively select the thresholding function with the minimum mean square error. The idea of selecting the best thresholding function is also pursued in parametric SURE AMP [50], where a class of parametric denoising functions is used to adaptively choose the best-in-class denoiser. However, at each iteration, parametric SURE AMP needs to solve a linear system, and the number of parameters heavily affects both performance and complexity.
5.4 Convergence analysis
Under suitable conditions, we are able to guarantee the convergence of the iterates produced by Algorithm 1 and discuss sufficient conditions for optimality.
Definition 2
Theorem 1
If (x^{∗},π^{∗},α^{∗},β^{∗},ε) is a minimizer of (12), then it is a τ-stationary point of (12) with \(\tau <2/\|A\|_{2}^{2}\). Vice versa, if (x^{∗},π^{∗},α^{∗},β^{∗},ε) is a τ-stationary point of (12) with \(\tau <2/\|A\|_{2}^{2}\), then it is a local minimizer of (12).
The proof can be obtained with techniques similar to those devised in [51] and is omitted for brevity. This result provides a necessary condition for optimality and shows that, since the function in (12) is not convex, τ-stationary points are only local minima. The next theorem ensures that the sequence (ζ^{(t)}) generated by the algorithm converges to a limit point that is also a τ-stationary point of (12) and, by Theorem 1, a local minimum of (12). Moreover, in Theorem 3, we derive an upper bound on the reconstruction error.
Theorem 2
(2LMM-ISTA convergence). Let us assume that, for every index set Γ⊆[n] with |Γ|=K, the columns of A associated with Γ are linearly independent, \(\tau ^{(t)} =\tau <2/\|A\|_{2}^{2}\), and u^{(t)}=0. Then for any \(y\in \mathbb {R}^{m}\), the sequence ζ^{(t)}=(x^{(t)},π^{(t)},α^{(t)},β^{(t)},ε^{(t)}) generated by Algorithm 1 converges to (x^{∞},π^{∞},α^{∞},β^{∞},ε^{∞}), which satisfies the relations in (19).
Definition 3
Theorem 3
It should be noticed that the result in Theorem 3 implies that the mean square error satisfies \(\mathsf {MSE}=\|e\|^{2}/n=\left (c+\frac {\lambda \sqrt {K}}{\beta ^{\infty }}\right)^{2}/n+O(\delta _{2K})\). In this sense, we have provided conditions, verifiable a posteriori, for convergence in a neighborhood of the solution. This is a common feature of shrinkage-thresholding methods. In [52], it is shown that, in the absence of noise, if certain conditions are satisfied, the error provided by Lasso is O(λ), where λ is the regularization parameter. Since ISTA and FISTA converge to a minimum of the Lasso, we argue that the same estimate holds also for the error between the provided estimates and the true signal.
The proofs of Theorems 2 and 3 are postponed to Appendices 3 and 4 and are obtained using arguments of variational analysis and the analysis of τ-stationary points of (12), respectively.
Example 1
Computing δ_{2K} is hard in practice. However, for i.i.d. Gaussian and Rademacher matrices, the RIP holds with high probability when m≥c_{0}(δ/2)k log(ne/k), where c_{0} is a function of the isometry constant δ [34]. To give an example, if n=10000, m=8000, and k=10, then the RIP holds with probability 0.98 with isometry constant δ_{2k}=0.4. Running the proposed iterative algorithms (ISTA or FISTA) with λ=10^{−3}, we can empirically check that the condition \(\|\sigma _{K}(A^{\top }_{(\Lambda ^{\infty })^{\mathrm {c}}}(y-Ax^{\infty }))\|\leq \|A_{\Lambda ^{\infty }}^{\top }(y-Ax^{\infty })\|\leq c\) is always satisfied with c=1.7176·10^{−4}. The error of the provided estimate is MSE=∥x^{⋆}−x∥^{2}/n≈1.47·10^{−10}, and the estimated upper bound, obtained using Theorem 3, is 5.143·10^{−10}. Using the error bounds in [52], we are able to guarantee that the solutions provided by ISTA and FISTA are accurate with an error only proportional to λ=10^{−3}.
6 Numerical results, experiments, and discussion
In this section, we compare several first-order methods with their versions augmented by support estimation, in terms of convergence times and empirical probability of reconstruction, in the absence and in the presence of noise. It is worth remarking that this does not represent a competition among all first-order methods for compressed sensing. Our aim is to show that the combination of support detection and estimation using an iterative reweighted first-order method can improve several iterative shrinkage methods. In other terms, we want to show that, given a specific algorithm for CS, its speed and performance can be improved via its 2LMM counterpart. Moreover, in order to show that the choice of the weights is important to obtain fast algorithms and good performance, we have employed an iterative shrinkage method for the iterative reweighting ℓ_{1} minimization algorithm (IRL1). In [45], IRL1 requires solving a weighted ℓ_{1} minimization at each step. Its computational complexity is not comparable with that of iterative shrinkage/thresholding algorithms, since each iteration has complexity of order O(n^{3}). We therefore employ a shrinkage-thresholding method for IRL1 in the spirit of [53] and show that its performance is not as good as that of the proposed methods. What we mean by IRL1 is summarized in Algorithm 2.
6.1 Reconstruction from noisefree measurements
6.1.1 Rate of convergence
As a first experiment, we consider Bernoulli-Gaussian signals [54]. More precisely, the signal to be recovered has length n=560 with k=50 nonzero elements drawn from N(0,4). The sensing matrix A with m=280 rows is sampled from the Gaussian ensemble with zero mean and variance 1/m. We fix λ=10^{−3} and τ=0.19, and the mixture parameters are initialized as in Section 5.2, with K=k+10 and p=K/n.

ISTA and FISTA, under certain conditions, are analytically proved to converge to a minimum of the Lasso. This solution is shown to provide only an approximation of the sparse solution x^{⋆}, which is controlled by the Lasso parameter λ. More precisely, \(\|x^{\star }-\widehat {x}\|_{2}\leq C\lambda \) where \(C\in \mathbb {R}\), and perfect reconstruction is not guaranteed.

AMP, instead, is not proved to converge in a deterministic sense. In [14], only an average case performance analysis is carried out. The authors exploit the randomness of A and, instead of calculating the limit of ∥x^{t}−x^{⋆}∥^{2}, they show convergence in the mean square sense, \( \mathbb {E}\|x^{t}-x^{\star }\|^{2}\rightarrow 0\).
In Fig. 5, the accuracies of the solutions provided by ISTA, FISTA, and AMP are different. The behavior of AMP has already been explained. The difference between ISTA and FISTA is due to the fact that λ=0.005 for ISTA (to speed up the algorithm) and λ=0.001 for FISTA. As already observed, we are not interested in a competition among all first-order methods for CS. Our aim is to show that the combination of support detection and estimation using an iterative reweighted first-order method can improve a series of iterative shrinkage methods. More precisely, given a specific algorithm for CS, its speed can be improved via its 2LMM counterpart.
It should be noted that the proposed algorithms are much faster than classical iterative shrinkage-thresholding methods: the number of iterations needed for convergence is reduced by about 81%, 37%, and 35% for 2LMM-ISTA, 2LMM-FISTA, and 2LMM-AMP, respectively.
6.1.2 Effect of the prior distribution
We now show a second experiment: we fix n=512, keep the fraction of nonzero coefficients fixed to ρ=k/n, and study the effect of the distribution of the nonzero coefficients on the empirical probability of reconstruction for different values of k∈[1,250]. More precisely, \(x^{\star }_{i}\sim (1-\rho)\delta _{0}(x^{\star }_{i})+\rho g(x^{\star }_{i})\), where g is a probability density function and δ_{0} is the Dirac delta function. In Table 1, the acronyms of the considered distributions are summarized (see also [55]).
Nonzero coefficients distribution

Notation  g
L1        Lap(0,4)
G1        N(0,4)
U1        U[0,4]
5P1       \(\mathbb {P}(x=1)=\mathbb {P}(x=-1)=0.3\), \(\mathbb {P}(x=5)=\mathbb {P}(x=-5)=0.2\)
It should be noticed that for ISTA, IRL1, and 2LMM-ISTA, we have chosen λ=0.005, instead of λ=0.001 as in FISTA and 2LMM-FISTA, in order to speed up the convergence. The convergence of ISTA and IRL1 is extremely slow: within 1000 iterations, we obtain only an approximation with an error of order 10^{−3} for sparsity larger than 50 in most cases.
It should be noticed that the 2LMM-tuning improves the performance of iterative shrinkage-thresholding methods in terms of sparsity-undersampling tradeoff. For example, for 5P1, it turns out that signal recovery with 2LMM-tuning is possible at a sparsity level 30% and 63% higher than with FISTA and AMP, respectively.
Figures 6, 7, 8, and 9 (right) show the average running times (CPU times in seconds) of the algorithms, computed over the successful experiments; the error bar represents the standard deviation for different signal priors. These graphs demonstrate the benefit of 2LMM-tuning for iterative shrinkage/thresholding methods. Not only does 2LMM-tuning show better reconstruction performance, but it also runs much faster than traditional methods. Despite the additional per-iteration computational cost needed to update the mixture parameters, the speedup of the 2LMM-tuning ranges from 2 to over 6 times, depending on the signal prior. This efficiency can be attributed to the simple form of the model used as the parametric representation of the signal, and the improved runtime comes from the effective denoising, so that fewer iterations are required to converge.
6.2 Reconstruction in imperfect scenarios
In this section, we compare the firstorder methods with their versions augmented by the support estimation for recovery of signals in imperfect scenarios where the signal is not exactly sparse or the measurements are noisy.
6.2.1 Not exactly sparse signals
6.2.2 Reconstruction with noisy measurements
We now fix n=512 and study the performance of the proposed methods in scenarios with inaccurate measurements according to (1). In this case, x^{⋆} is a random Bernoulli-Uniform signal (see U1 in Table 2) with sparsity degree k=56, and the noise η is white Gaussian noise with standard deviation σ=0.01. The parameters are set as follows: the mixture parameters are initialized as in Section 5.2, K=k+10, and p=K/n. The MSE achieved after 3000 iterations is depicted as a function of the number of measurements used in the reconstruction. Also in this case, the best results are obtained by the methods with 2LMM-tuning. The efficiency of the proposed algorithms makes it possible to reduce the number of measurements required to achieve a satisfactory level of accuracy. As can be noticed from Fig. 10 (right), 2LMM-FISTA and 2LMM-AMP need fewer observations (about 180 measurements) than FISTA and AMP (about 270 measurements) to achieve MSE=10^{−4}.
As is to be expected, as the SNR increases, the MSE goes to zero. Moreover, the MSE of the proposed algorithms is smaller than that obtained via classical iterative thresholding algorithms. As already observed, the MSE of ISTA is very high compared to the other methods. This is due to the fact that the algorithm is very slow and the number of iterations is not enough to reach a good recovery error.
In our setup, we have considered only Gaussian noise, and robustness against multiplicative noise is out of our scope. This would require a drastic modification of the proposed algorithms and will be the subject of future research. For example, when the underlying sparse or approximately sparse signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise, standard CS techniques cannot be applied directly, as the Poisson noise is signal-dependent [56, 57]. In this case, the rationale of our method can be adapted by combining mixture models with exponential distributions, in place of Laplace distributions, with a penalized negative Poisson log-likelihood objective function under nonnegativity constraints. We refer to [58] for more details on the model and on the implementation of the related iterative thresholding algorithms.
6.3 Comparison with structured sparsitybased Bayesian compressive sensing
Many authors have recently developed structured sparsitybased Bayesian compressive sensing methods in order to deal with different signals arising in several applications and adaptively explore the statistical structure of nonzero pattern. We refer the interested reader to the repository http://people.ee.duke.edu/~lcarin/BCS.html for an introduction to Bayesian compressive sensing (BCS) methods and to structured sparsitybased Bayesian compressive sensing.
For example, [60] proposes a spatiotemporal sparse Bayesian method to recover multichannel signals simultaneously, exploiting not only the temporal correlation within each channel signal but also the inter-channel correlations among different signals. This method has been shown to provide several advantages, in terms of measurements for reconstruction and computational load, in applications to brain-computer interfaces and electroencephalography-based driver's drowsiness estimation. In [61], using a new bilinear time-frequency representation, a redesigned BCS approach is developed for the problem of spectrum estimation of multiple frequency-hopping signals, arising in various communication and radar applications in the context of multiple-input multiple-output (MIMO) operations in the presence of random missing observations. Another example of structured sparsity-based Bayesian compressive sensing comes from the context of reconstruction of signals and images that are sparse in the wavelet basis [62] or in the DCT basis, with applications to image compression. More precisely, in [62], the statistical structure of the wavelet coefficients is exploited explicitly using a tree-structured Bayesian compressive sensing approach. This tree structure assumption shows several advantages in terms of the number of measurements required for reconstruction.
It is worth remarking that in our approach we do not use any prior information on the structure of the sparsity pattern, and we expect structured sparsity-based methods to outperform our approach. A detailed comparison and an ad hoc adaptation of our approach to all the specific frameworks mentioned above are out of the scope of this paper. However, in this section, we propose a numerical comparison of 2LMM-tuned FISTA with tree-structured wavelet-based Bayesian compressive sensing (WBCS). The WBCS algorithm used for comparison is implemented via a hierarchical Bayesian framework, with the tree structure incorporated naturally in the prior setting. See TSBCS for wavelets via MCMC in the repository http://people.ee.duke.edu/~lcarin/BCS.html for a detailed description of the code.
6.4 Deblurring images
To show the effectiveness of the 2LMM-tuning, in this section, we repeat the experiment proposed in [9] for deblurring two test images (Lena and cameraman). In [9], it was shown that FISTA significantly outperforms ISTA and other first-order methods in terms of the number of iterations required to achieve a given accuracy. For this reason, we compare the performance of FISTA with our proposed algorithm, 2LMM-FISTA.
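As background for this comparison, the plain FISTA iteration of [9] for ℓ_{1}-regularized least squares can be sketched as follows. This is a minimal illustration, not the deblurring setup of [9]: the operator `A`, data `y`, regularization weight `lam`, and the use of the spectral norm for the Lipschitz constant are our own illustrative assumptions.

```python
import numpy as np

def soft(v, thr):
    """Componentwise soft-thresholding (proximal map of thr*||.||_1)."""
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def fista(A, y, lam, n_iter=200):
    """Plain FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0                    # extrapolation point and momentum
    for _ in range(n_iter):
        x_new = soft(z - A.T @ (A @ z - y) / L, lam / L)  # gradient step + shrinkage
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)     # momentum extrapolation
        x, t = x_new, t_new
    return x
```

ISTA is recovered by skipping the extrapolation step (setting z = x_new and keeping t = 1); the 2LMM-tuned variants proposed in this paper instead replace the uniform threshold lam/L by adaptively learned componentwise thresholds.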
7 Conclusions
In this paper, we proposed a new method to perform both support detection and sparse signal estimation in compressed sensing. Combining MAP estimation with a parametric representation of the signal by a Laplace mixture model, we formulated the reconstruction problem as a reweighted ℓ_{1} minimization. Our contribution includes the theoretical derivation of necessary and sufficient conditions for reconstruction in the absence of noise. We then proposed 2LMM-tuning to improve the performance of several iterative shrinkage-thresholding algorithms; the iterative procedures are designed by combining the EM algorithm with classical iterative thresholding methods. Numerical simulations show that these new algorithms are faster than the classical ones and outperform them in terms of phase transitions. A topic of our current research is to use a similar technique based on Laplace mixture models for robust compressed sensing, where measurements are corrupted by outliers (see [63] and references therein).
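The core shrinkage step underlying this reweighted ℓ_{1} view can be illustrated in isolation. The sketch below uses hypothetical per-coefficient weights `w`; in the actual method, these weights are derived from the Laplace mixture posterior rather than chosen by hand.

```python
import numpy as np

def weighted_soft(v, w, tau):
    """Proximal map of tau * sum_i w_i*|x_i|: componentwise soft-thresholding
    with per-coefficient thresholds tau*w_i. Coefficients flagged as likely
    nonzero (small w_i) are shrunk less than likely-zero ones (large w_i)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau * np.asarray(w), 0.0)
```

With uniform weights, this reduces to the classical soft-thresholding operator of ISTA/FISTA; adapting the weights across iterations is what turns the scheme into a reweighted ℓ_{1} minimization.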
8 Appendix 1: proof of Proposition 1
The proof of Proposition 1 is a direct consequence of a more general result on compressible priors, formally stated in Proposition 1 of [17].
Lemma 1
9 Appendix 2: proof of Proposition 2
Lemma 2
Proof
Proposition 4
H(t)=−t logt−(1−t) log(1−t) is the natural entropy function with t∈[0,1] and \(\widehat {\pi }=\widehat {\pi }_{i}(x_{i})=f(z_{i}=1\mid x_{i};\Theta)\).
Proof
which gives (9). □
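For intuition, the posterior weight \(\widehat{\pi}_i = f(z_i=1\mid x_i;\Theta)\) of a two-component Laplace mixture, and the entropy H, can be computed with the standard mixture E-step formulas sketched below. The parameter names p, alpha, beta and the convention that z=1 labels the large-scale component are illustrative stand-ins for the parameters Θ of the paper.

```python
import numpy as np

def laplace_pdf(x, scale):
    """Zero-mean Laplace density with the given scale parameter."""
    return np.exp(-np.abs(x) / scale) / (2.0 * scale)

def responsibility(x, p, alpha, beta):
    """Posterior probability that x was drawn from the 'large' component
    (scale beta) of a two-component Laplace mixture with mixing weight p."""
    num = p * laplace_pdf(x, beta)
    return num / (num + (1.0 - p) * laplace_pdf(x, alpha))

def entropy(t):
    """Natural entropy H(t) = -t*log(t) - (1-t)*log(1-t); the argument is
    clipped away from {0, 1} for numerical stability."""
    t = np.clip(t, 1e-12, 1.0 - 1e-12)
    return -t * np.log(t) - (1.0 - t) * np.log(1.0 - t)
```

A large coefficient receives a responsibility near 1 (likely nonzero), a coefficient near 0 receives a responsibility near p·β⁻¹/(p·β⁻¹+(1−p)·α⁻¹), which is small when α≪β; this is the soft support detection used to tune the thresholds.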
10 Appendix 3: proof of Theorem 2
In this section, we rigorously prove Theorem 2, which guarantees the convergence of 2LMM-ISTA to a limit point. We start with the following preliminary results.
where \(H:\left [0,1\right ]\rightarrow \mathbb {R}\) is the natural entropy function H(ξ)=−ξ logξ−(1−ξ) log(1−ξ) and J_{ε} is defined in (11).
Lemma 3
Proof
Checking \(\frac {\partial ^{2} V}{\partial \alpha ^{2}}(\widehat {\alpha })\geq 0\), we conclude that \(\widehat {\alpha }\) is the minimizing value of V(x,π,α,β,ε).
In an analogous way, the expression for \(\widehat {\beta }\) can be derived.
We now show that \(\widehat {\pi }=\sigma _{n-K}(a)\) is the minimizing value of V(x,π,α,β,ε) for fixed x,α,β,ε, i.e., \(V(x,\widehat {\pi },\alpha,\beta,\epsilon)\leq V(x,{\pi },\alpha,\beta,\epsilon)\) for all π∈Σ_{n−K}.
since γ∥π∥_{0}=(n−K)γ is just a constant for π∈Σ_{n−K}.
Lemma 4
Proof
where χ is a function independent of x. By differentiating the function with respect to x, we obtain the thesis. □
Proposition 5
The function V defined in (22) is nonincreasing along the iterates ζ^{(t)}=(x^{(t)},π^{(t)},α^{(t)},β^{(t)},ε^{(t)}).
Proof
The following lemma implies that these algorithms converge numerically when the number of iterations goes to infinity.
Lemma 5
Let (x^{(t)}) be the sequence generated by 2LMM-ISTA; then ∥x^{(t+1)}−x^{(t)}∥→0 as t→∞.
Proof
 (a) Follows from (25).
 (b) Follows from the fact that (see Lemma 4)
$$x^{(t+1)}=\underset{x\in\mathbb{R}^{n}}{\text{arg\ min}}\ V^{\mathcal{S}}\left(x,x^{(t)},\pi^{(t)},\alpha^{(t)},\beta^{(t)},{\epsilon^{(t)}}\right). $$
 (c) Is a consequence of the relation
$$V^{\mathcal{S}}\left(x^{(t)},x^{(t)},\pi^{(t)},\alpha^{(t)},\beta^{(t)},{\epsilon^{(t)}}\right)=V\left(x^{(t)},\pi^{(t)},\alpha^{(t)},\beta^{(t)},{\epsilon^{(t)}}\right)$$
and of (see Lemma 3)
$$\pi^{(t+1)}=\underset{\pi\in\Sigma_{n-K}}{\text{arg\ min}}\ V\left(x^{(t+1)},\pi,\alpha^{(t)},\beta^{(t)},{\epsilon^{(t)}}\right). $$
 (d) and (e) Follow from (see Lemma 3)
$$\begin{aligned} \alpha^{(t+1)}&=\underset{\alpha}{\text{arg\ min}}\ V\left(x^{(t+1)},\pi^{(t+1)},\alpha,\beta^{(t)},{\epsilon^{(t)}}\right), \\ \beta^{(t+1)}&=\underset{\beta}{\text{arg\ min}}\ V\left(x^{(t+1)},\pi^{(t+1)},\alpha^{(t+1)},\beta,{\epsilon^{(t)}}\right), \end{aligned} $$
and from the fact that V(x,π,α,β,ε) is an increasing function of ε and ε^{(t+1)}≤ε^{(t)} by definition.
Lemma 6
The sequence \((x^{(t)})_{t\in \mathbb {N}}\) is bounded.
Proof
The application of hard thresholding yields \(\pi ^{(t_{\ell })}=\sigma _{n-K}\left (a^{(t_{\ell })}\right)\), and we have \(\pi _{i}^{(t_{\ell })}=0\) and \(\omega _{i}^{(t_{\ell }+1)}={1}/{{\beta ^{(t_{\ell })}}} \overset {\ell \rightarrow \infty }{\longrightarrow }0\) for all i∈Δ.
If i∈Δ^{c}, then \(x_{i}^{(t_{\ell })}\rightarrow 0\) as ℓ→∞ and, consequently, from Lemma 5, also \(x_{i}^{(t_{\ell }+1)}\rightarrow 0\) as ℓ→∞.
Proposition 6
Any accumulation point of the algorithm is a τ-stationary point of (12) and satisfies the equalities in (19a)–(19e).
Proof
(b) There exists \({\ell _{0}}\in \mathbb {N}\) such that \(\pi _{i}^{(t_{\ell })}=0\) for all ℓ>ℓ_{0}, from which \( a_{i}^{(t_{\ell })}<a_{j}^{(t_{\ell })}\ \forall j\in \text {supp}\left (\pi ^{\sharp }\right) \); letting ℓ→∞, we obtain (30).
11 Appendix 4: proof of Theorem 3
Acknowledgements
The authors thank the European Research Council for the financial support for this research. They express their sincere gratitude to the Reviewers for carefully reading the manuscript and for their valuable suggestions.
Funding
This work has received funding from the European Research Council under the European Community’s Seventh Framework Programme (FP7/2007–2013) / ERC grant agreement no. 279848.
Availability of data and materials
Please contact the corresponding author (chiara.ravazzi@ieiit.cnr.it) for data requests.
Authors’ contributions
Both authors contributed equally, read and approved the manuscript.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. DL Donoho, Compressed sensing. IEEE Trans. Inform. Theory 52, 1289–1306 (2006).
2. AK Fletcher, S Rangan, VK Goyal, K Ramchandran, Denoising by sparse approximation: error bounds based on rate-distortion theory. EURASIP J. Adv. Sig. Process. 2006(1), 026318 (2006). https://doi.org/10.1155/ASP/2006/26318.
3. MA Davenport, MF Duarte, YC Eldar, G Kutyniok, Introduction to Compressed Sensing (Cambridge University Press, Cambridge, 2012).
4. EJ Candès, JK Romberg, T Tao, Stable signal recovery from incomplete and inaccurate measurements. Commun. Pur. Appl. Math. 59(8), 1207–1223 (2006).
5. R Tibshirani, Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 58, 267–288 (1994).
6. I Daubechies, M Defrise, C De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Comm. Pure Appl. Math. 57(11), 1413–1457 (2004).
7. ET Hale, W Yin, Y Zhang, A fixed-point continuation method for l1-regularized minimization with applications to compressed sensing. Technical report, Rice University (2007).
8. SJ Wright, RD Nowak, MAT Figueiredo, Sparse reconstruction by separable approximation. IEEE Trans. Sig. Process. 57(7), 2479–2493 (2009).
9. A Beck, M Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Img. Sci. 2(1), 183–202 (2009).
10. M Fornasier, Theoretical Foundations and Numerical Methods for Sparse Recovery (Radon Series on Computational and Applied Mathematics, Linz, 2010).
11. W Guo, W Yin, Edge guided reconstruction for compressive imaging. SIAM J. Imaging Sci. 5(3), 809–834 (2012). https://doi.org/10.1137/110837309.
12. W Yin, S Osher, D Goldfarb, J Darbon, Bregman iterative algorithms for ℓ_{1}-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1(1), 143–168 (2008). https://doi.org/10.1137/070703983.
13. A Maleki, DL Donoho, Optimally tuned iterative reconstruction algorithms for compressed sensing. IEEE J. Sel. Top. Sig. Process. 4(2), 330–341 (2010). https://doi.org/10.1109/JSTSP.2009.2039176.
14. DL Donoho, A Maleki, A Montanari, in Proc. of 2010 IEEE Information Theory Workshop (ITW 2010, Cairo). Message passing algorithms for compressed sensing: I. Motivation and construction (2010), pp. 1–5.
15. T Goldstein, C Studer, R Baraniuk, A field guide to forward-backward splitting with a FASTA implementation, 1–24 (2014). arXiv e-print. https://arxiv.org/abs/1411.3406.
16. CM Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics) (Springer, Secaucus, 2006).
17. R Gribonval, V Cevher, ME Davies, Compressible distributions for high-dimensional statistics. IEEE Trans. Inf. Theory 58(8), 5016–5034 (2012).
18. J Zhang, Y Li, Z Yu, Z Gu, Noisy sparse recovery based on parameterized quadratic programming by thresholding. EURASIP J. Adv. Sig. Process. 2011 (2011). https://doi.org/10.1155/2011/528734.
19. Y Wang, W Yin, Sparse signal reconstruction via iterative support detection. SIAM J. Img. Sci. 3(3), 462–491 (2010).
20. C Hegde, MF Duarte, V Cevher, in Proceedings of the Workshop on Signal Processing with Adaptive Sparse Representations (SPARS). Compressive sensing recovery of spike trains using structured sparsity (Saint Malo, 2009). https://infoscience.epfl.ch/record/151471/files/Compressive%20sensing%20recovery%20of%20spike%20trains%20using%20a%20structured%20sparsity%20model.pdf.
21. RG Baraniuk, V Cevher, MF Duarte, C Hegde, Model-based compressive sensing. IEEE Trans. Inf. Theory 56(4), 1982–2001 (2010).
22. R von Borries, CJ Miosso, C Potes, in 2nd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP 2007), Compressed sensing using prior information (2007), pp. 121–124.
23. N Vaswani, W Lu, in IEEE International Symposium on Information Theory (ISIT 2009), Modified-CS: modifying compressive sensing for problems with partially known support (2009), pp. 488–492.
24. OD Escoda, L Granai, P Vandergheynst, On the use of a priori information for sparse signal approximations. IEEE Trans. Sig. Process. 54(9), 3468–3482 (2006).
25. MA Khajehnejad, W Xu, AS Avestimehr, B Hassibi, Analyzing weighted ℓ_{1} minimization for sparse recovery with nonuniform sparse models. IEEE Trans. Sig. Process. 59(5), 1985–2001 (2011).
26. MP Friedlander, H Mansour, R Saab, O Yilmaz, Recovering compressively sampled signals using partial support information. IEEE Trans. Inf. Theory 58(2), 1122–1134 (2012).
27. J Scarlett, JS Evans, S Dey, Compressed sensing with prior information: information-theoretic limits and practical decoders. IEEE Trans. Sig. Process. 61(2), 427–439 (2013).
28. MA Khajehnejad, W Xu, AS Avestimehr, B Hassibi, Analyzing weighted ℓ_{1} minimization for sparse recovery with nonuniform sparse models. IEEE Trans. Sig. Process. 59, 1985–2001 (2011).
29. JA Tropp, AC Gilbert, MJ Strauss, Algorithms for simultaneous sparse approximation: part I: greedy pursuit. Sig. Process. 86(3), 572–588 (2006).
30. JA Tropp, AC Gilbert, Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 53, 4655–4666 (2007).
31. D Needell, R Vershynin, Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. IEEE J. Sel. Top. Signal Process. 4(2), 310–316 (2010).
32. D Needell, JA Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2008). http://arxiv.org/abs/0803.2392.
33. EJ Candès, The restricted isometry property and its implications for compressed sensing. Compte Rendus de l'Academie des Sciences, Paris, France, Ser. I 346, 589–592 (2008).
34. R Baraniuk, M Davenport, R DeVore, M Wakin, A simple proof of the restricted isometry property for random matrices. Constr. Approx. 28(3), 253–263 (2008).
35. M Bayati, A Montanari, The LASSO risk for Gaussian matrices. IEEE Trans. Inf. Theory 58(4), 1997–2017 (2012).
36. S Ji, Y Xue, L Carin, Bayesian compressive sensing. IEEE Trans. Signal Process. 56(6), 2346–2356 (2008).
37. EP Simoncelli, Modeling the joint statistics of images in the wavelet domain. Proc. SPIE 3813, 188–195 (1999). https://doi.org/10.1117/12.366779.
38. MAT Figueiredo, R Nowak, Wavelet-based image estimation: an empirical Bayes approach using Jeffreys' noninformative prior. IEEE Trans. Image Process. 10(9), 1322–1331 (2001).
39. HY Gao, Wavelet shrinkage denoising using the non-negative garrote. J. Comput. Graph. Stat. 7(4), 469–488 (1998). https://www.tandfonline.com/doi/abs/10.1080/10618600.1998.10474789.
40. L Sendur, IW Selesnick, Bivariate shrinkage functions for wavelet-based denoising exploiting interscale dependency. IEEE Trans. Sig. Process. 50(11), 2744–2756 (2002). https://doi.org/10.1109/TSP.2002.804091.
41. DL Donoho, De-noising by soft-thresholding. IEEE Trans. Inf. Theory 41(3), 613–627 (1995). https://doi.org/10.1109/18.382009.
42. PL Combettes, VR Wajs, Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4(4), 1168–1200 (2005).
43. K Bredies, D Lorenz, Linear convergence of iterative soft-thresholding. J. Fourier Anal. Appl. 14(5-6), 813–837 (2008). https://doi.org/10.1007/s00041-008-9041-1.
44. A Montanari, in Compressed Sensing: Theory and Applications, ed. by Y Eldar, G Kutyniok. Graphical models concepts in compressed sensing (Cambridge University Press, Cambridge, 2012), pp. 394–438.
45. EJ Candès, MB Wakin, SP Boyd, Enhancing sparsity by reweighted ℓ_{1} minimization. Technical report.
46. MP Friedlander, H Mansour, R Saab, O Yilmaz, Recovering compressively sampled signals using partial support information. IEEE Trans. Inf. Theory 58(2), 1122–1134 (2012).
47. MA Khajehnejad, W Xu, AS Avestimehr, B Hassibi, in Proceedings of the 2009 IEEE International Symposium on Information Theory (ISIT'09), vol. 1. Weighted ℓ_{1} minimization for sparse recovery with prior information (IEEE Press, Piscataway, 2009), pp. 483–487. http://dl.acm.org/citation.cfm.
48. H Mansour, in 2012 IEEE Statistical Signal Processing Workshop (SSP'12). Beyond ℓ_{1} norm minimization for sparse signal recovery (IEEE, Ann Arbor, 2012).
49. JP Vila, P Schniter, Expectation-maximization Gaussian-mixture approximate message passing. IEEE Trans. Sig. Process. 61(19), 4658–4672 (2013).
50. C Guo, ME Davies, Near optimal compressed sensing without priors: parametric SURE approximate message passing. IEEE Trans. Sig. Process. 63(8), 2130–2141 (2015).
51. C Ravazzi, SM Fosson, E Magli, Randomized algorithms for distributed nonlinear optimization under sparsity constraints. IEEE Trans. Sig. Process. 64(6), 1420–1434 (2016).
52. MJ Wainwright, Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ_{1}-constrained quadratic programming (Lasso). IEEE Trans. Inf. Theory 55(5), 2183–2202 (2009).
53. C Chen, J Huang, L He, H Li, Fast iteratively reweighted least squares algorithms for analysis-based sparsity reconstruction. CoRR, 1–14 (2014). https://arxiv.org/abs/1411.5057.
54. C Ravazzi, E Magli, Gaussian mixtures based IRLS for sparse recovery with quadratic convergence. IEEE Trans. Sig. Process. 63(13), 1–16 (2015).
55. A Maleki, Approximate message passing algorithms for compressed sensing. PhD thesis, Stanford University (2011). https://www.ece.rice.edu/~mam15/suthesisArian.pdf.
56. M Raginsky, S Jafarpour, ZT Harmany, RF Marcia, RM Willett, R Calderbank, Performance bounds for expander-based compressed sensing in Poisson noise. IEEE Trans. Sig. Process. 59(9), 4139–4153 (2011). https://doi.org/10.1109/TSP.2011.2157913.
57. M Raginsky, RM Willett, ZT Harmany, RF Marcia, Compressed sensing performance bounds under Poisson noise. IEEE Trans. Sig. Process. 58(8), 3990–4002 (2010). https://doi.org/10.1109/TSP.2010.2049997.
58. ZT Harmany, RF Marcia, RM Willett, This is SPIRAL-TAP: sparse Poisson intensity reconstruction algorithms: theory and practice. IEEE Trans. Image Process. 21(3), 1084–1096 (2012). https://doi.org/10.1109/TIP.2011.2168410.
59. R Gribonval, G Chardon, L Daudet, in IEEE International Conference on Acoustics, Speech, and Signal Processing, Blind calibration for compressed sensing by convex optimization (Kyoto, 25–30 Mar 2012). https://hal.inria.fr/hal-00658579.
60. Z Zhang, T Jung, S Makeig, Z Pi, BD Rao, Spatiotemporal sparse Bayesian learning with applications to compressed sensing of multichannel physiological signals. IEEE Trans. Neural Syst. Rehabil. Eng. 22, 1186–1197 (2014).
61. S Liu, YD Zhang, T Shan, S Qin, MG Amin, Structure-aware Bayesian compressive sensing for frequency-hopping spectrum estimation. Proc. SPIE 9857 (2016).
62. L He, L Carin, Exploiting structure in wavelet-based Bayesian compressive sensing. IEEE Trans. Sig. Process. 57, 3488–3497 (2009). https://doi.org/10.1109/TSP.2009.2022003.
63. RE Carrillo, AB Ramirez, GR Arce, KE Barner, BM Sadler, Robust compressive sensing of sparse signals: a review. EURASIP J. Adv. Sig. Process. 2016(1), 108 (2016). https://doi.org/10.1186/s13634-016-0404-5.
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.