Abstract
We consider the recovery of sparse signals that share a common support from multiple measurement vectors. The performance of several algorithms developed for this task depends on parameters such as the dimension of the sparse signal, the dimension of the measurement vectors, the sparsity level, and the measurement noise. We propose a fusion framework in which several multiple measurement vector reconstruction algorithms participate, and the final signal estimate is obtained by combining the signal estimates of the participating algorithms. We present conditions under which the fused estimate achieves a better reconstruction performance than the participating algorithms. Numerical simulations demonstrate that the proposed fusion algorithm often performs better than the participating algorithms.
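The fusion idea can be illustrated with a minimal sketch. The specific rule below, taking the union of the row supports estimated by the participating algorithms and then solving a least-squares problem restricted to that union, is one plausible instantiation assumed for illustration; the function names and the final pruning step are ours, not necessarily the paper's exact method.

```python
import numpy as np

def row_support(X, k):
    """Indices of the k rows of X with the largest row l2-norms."""
    norms = np.linalg.norm(X, axis=1)
    return set(np.argsort(norms)[::-1][:k].tolist())

def fuse_mmv_estimates(A, Y, estimates, k):
    """Fuse MMV signal estimates that share a common row support.

    A         : m x n measurement matrix
    Y         : m x L matrix of measurement vectors
    estimates : list of n x L estimates from participating algorithms
    k         : assumed number of nonzero rows (common sparsity level)
    """
    # 1. Union of the top-k row supports of all participating algorithms.
    union = sorted(set().union(*(row_support(Xh, k) for Xh in estimates)))
    # 2. Least squares restricted to the fused support.
    Xs, *_ = np.linalg.lstsq(A[:, union], Y, rcond=None)
    X_fused = np.zeros((A.shape[1], Y.shape[1]))
    X_fused[union, :] = Xs
    # 3. Prune back to the k strongest rows.
    keep = sorted(row_support(X_fused, k))
    X_out = np.zeros_like(X_fused)
    X_out[keep, :] = X_fused[keep, :]
    return X_out
```

In the noiseless case, if the true row support is contained in the fused union and the corresponding columns of A are linearly independent, the restricted least-squares step recovers the signal exactly even when no single participant's support estimate is fully correct.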
References
Cotter S F, Rao B D, Engan K and Kreutz-Delgado K 2005 Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Trans. Signal Process. 53(7): 2477–2488
Gorodnitsky I F, George J S and Rao B D 1995 Neuromagnetic source imaging with FOCUSS: A recursive weighted minimum norm algorithm. J. Electroencephal. Clin. Neurophysiol. 95(4): 231–251
Gorodnitsky I F and Rao B D 1997 Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm. IEEE Trans. Signal Process. 45(3): 600–616
Malioutov D, Cetin M and Willsky A S 2005 A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans. Signal Process. 53(8): 3010–3022
Stoica P and Moses R L 1997 Introduction to spectral analysis. Upper Saddle River, N.J.: Prentice Hall
Cotter S F and Rao B D 2002 Sparse channel estimation via matching pursuit with application to equalization. IEEE Trans. Commun. 50(3): 374–377
Tropp J A 2006 Algorithms for simultaneous sparse approximation. Part II: Convex relaxation. Signal Process. 86(3): 589–602
Wipf D P and Rao B D 2007 An empirical Bayesian strategy for solving the simultaneous sparse approximation problem. IEEE Trans. Signal Process. 55(7): 3704–3716
Zhang Z and Rao B D 2011 Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning. IEEE J. Sel. Topics Signal Process. 5(5): 912–926
Elad M and Yavneh I 2009 A plurality of sparse representations is better than the sparsest one alone. IEEE Trans. Inf. Theory 55(10): 4701–4714
Ambat S K, Chatterjee S and Hari K V S 2013a Fusion of algorithms for compressed sensing. IEEE Trans. Signal Process. 61(14): 3699–3704
Ambat S K, Chatterjee S and Hari K V S 2013b Fusion of algorithms for compressed sensing. In: Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 5860–5864
Ambat S K, Chatterjee S and Hari K V S 2014 Progressive fusion of reconstruction algorithms for low latency applications in compressed sensing. Signal Process. 97: 146–151
Ambat S K, Chatterjee S and Hari K V S 2012 Fusion of greedy pursuits for compressed sensing signal reconstruction. In: 20th European Signal Processing Conference 2012 (EUSIPCO 2012), Bucharest, Romania
Ambat S K, Chatterjee S and Hari K V S 2014a A committee machine approach for compressed sensing signal reconstruction. IEEE Trans. Signal Process. 62(7): 1705–1717
Needell D and Tropp J A 2009 CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3): 301–321
Gribonval R, Rauhut H, Schnass K and Vandergheynst P 2008 Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms. J. Fourier Anal. Appl. 14(5): 655–687
van den Berg E and Friedlander M P 2009 Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31(2): 890–912
van den Berg E and Friedlander M P 2007 SPGL1: A solver for large-scale sparse reconstruction. http://www.cs.ubc.ca/labs/scl/spgl1
Goldberger A L, Amaral L A N, Glass L, Hausdorff J M, Ivanov P C, Mark R G, Mietus J E, Moody G B, Peng C-K and Stanley H E 2000 PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 101(23): e215–e220. Circulation electronic pages: http://circ.ahajournals.org/cgi/content/full/101/23/e215; PMID: 1085218; doi: 10.1161/01.CIR.101.23.e215
Ambat S K, Chatterjee S and Hari K V S 2014b Progressive fusion of reconstruction algorithms for low latency applications in compressed sensing. Signal Process. 97: 146–151
Dai W and Milenkovic O 2009 Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 55(5): 2230–2249
Appendix
Proof of lemma 1
The proof is inspired by Proposition 3.5 of Needell and Tropp [16].
Define the set S as the convex hull of the set of all matrices that have at most \(R+K\) nonzero rows and unit Frobenius norm.
Using the relation \(\left\| {\mathbf {AX}}\right\| _F \le \sqrt{1+\delta _{R+K}} \left\| {\mathbf {X}}\right\| _F\), which holds for every matrix \({\mathbf {X}}\) with at most \(R+K\) nonzero rows, together with the triangle inequality, we get

\[ \left\| {\mathbf {AX}}\right\| _F \le \sqrt{1+\delta _{R+K}} \quad \text {for every } {\mathbf {X}} \in S. \]
Define

\[ Q = \left\{ {\mathbf {X}} : \left\| {\mathbf {X}}\right\| _F + \frac{1}{\sqrt{R+K}} \left\| {\mathbf {X}}\right\| _{2,1} \le 1 \right\} , \]

where \(\left\| {\mathbf {X}}\right\| _{2,1}\) denotes the sum of the row \(\ell _2\)-norms of \({\mathbf {X}}\). By homogeneity, the lemma essentially claims that

\[ \left\| {\mathbf {AX}}\right\| _F \le \sqrt{1+\delta _{R+K}} \quad \text {for every } {\mathbf {X}} \in Q. \]
To prove this, it is sufficient to ensure that \(Q \subset S\).
Consider a matrix \({\mathbf {X}} \in Q\). Partition the support of \({\mathbf {X}}\) into sets of size \(R+K\). Let set \({\mathcal {I}}_0\) contain the indices of the \(R+K\) rows of \({\mathbf {X}}\) with the largest row \(\ell _2\)-norms, breaking ties lexicographically. Let set \({\mathcal {I}}_1\) contain the indices of the next \(R+K\) largest rows (in row \(\ell _2\)-norm), and so on. The final block \({\mathcal {I}}_J\) may have fewer than \(R+K\) components. This partition gives rise to the decomposition

\[ {\mathbf {X}} = \sum _{j=0}^{J} \lambda _j {\mathbf {Y}}_j, \]

where \(\displaystyle \lambda _j =\left\| {\mathbf {X}}|_{{\mathcal {I}}_j}\right\| _F \quad \text {and} \quad {\mathbf {Y}}_j= \lambda _j^{-1} {\mathbf {X}}|_{{\mathcal {I}}_j}.\)
By construction, each matrix \({\mathbf {Y}}_j\) belongs to S because it is \(R+K\) sparse and has unit Frobenius norm. We will show that \(\sum _{j} \lambda _j \le 1\). This implies that \({\mathbf {X}}\) can be written as a convex combination of matrices from the set S. As a result \({\mathbf {X}} \in S\). Therefore, \(Q\subset S\).
Fix some j in the range \(\{1,2, \ldots , J\}\). Then, \({\mathcal {I}}_j\) contains at most \(R+K\) elements and \({\mathcal {I}}_{j-1}\) contains exactly \(R+K\) elements. Therefore,

\[ \lambda _j = \left\| {\mathbf {X}}|_{{\mathcal {I}}_j}\right\| _F \le \sqrt{R+K}\, \max _{i \in {\mathcal {I}}_j} \left\| {\mathbf {X}}^{(i)}\right\| _2 \le \frac{1}{\sqrt{R+K}} \sum _{i \in {\mathcal {I}}_{j-1}} \left\| {\mathbf {X}}^{(i)}\right\| _2, \]

where \({\mathbf {X}}^{(i)}\) denotes the \(i\text {th}\) row of \({\mathbf {X}}\).
The last inequality holds because every row \(\ell _2\)-norm of \({\mathbf {X}}\) on the set \({\mathcal {I}}_{j-1}\) dominates the largest row \(\ell _2\)-norm on \({\mathcal {I}}_j\). Summing these relations over \(j=1,2,\ldots ,J\), we get

\[ \sum _{j=1}^{J} \lambda _j \le \frac{1}{\sqrt{R+K}} \left\| {\mathbf {X}}\right\| _{2,1}, \]

where \(\left\| {\mathbf {X}}\right\| _{2,1}\) denotes the sum of the row \(\ell _2\)-norms of \({\mathbf {X}}\).
Also, we have \(\lambda _0 = \left\| {\mathbf {X}}|_{{\mathcal {I}}_0}\right\| _F \le \left\| {\mathbf {X}}\right\| _F\). Since \({\mathbf {X}} \in Q\), we conclude that

\[ \sum _{j=0}^{J} \lambda _j \le \left\| {\mathbf {X}}\right\| _F + \frac{1}{\sqrt{R+K}} \left\| {\mathbf {X}}\right\| _{2,1} \le 1. \]
\(\square \)
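The partition argument in the proof above can be checked numerically. The sketch below, under the assumption that RK stands in for \(R+K\) and with an arbitrary dense test matrix, verifies that the block norms \(\lambda _j\) of the sorted-and-partitioned rows sum to at most \(\left\| {\mathbf {X}}\right\| _F + \left\| {\mathbf {X}}\right\| _{2,1}/\sqrt{R+K}\):

```python
import numpy as np

rng = np.random.default_rng(0)
n, L, RK = 50, 4, 7          # RK plays the role of R + K
X = rng.standard_normal((n, L))

# Sort rows by decreasing row l2-norm and partition into blocks of size RK,
# mirroring the construction of the index sets I_0, I_1, ..., I_J.
order = np.argsort(-np.linalg.norm(X, axis=1))
blocks = [order[j:j + RK] for j in range(0, n, RK)]

# lambda_j is the Frobenius norm of X restricted to block j.
lams = [np.linalg.norm(X[b, :]) for b in blocks]

# The proof shows  sum_j lambda_j <= ||X||_F + ||X||_{2,1} / sqrt(R+K),
# where ||X||_{2,1} is the sum of the row l2-norms.
bound = np.linalg.norm(X) + np.linalg.norm(X, axis=1).sum() / np.sqrt(RK)
assert sum(lams) <= bound + 1e-12
```

The check passes for any matrix, not only members of Q, since membership in Q is only needed for the final step that bounds the sum by 1.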
Proof of lemma 2
Let \(\mathbf {y}^{(i)}\) denote the \(i\text {th}\) column of matrix \({\mathbf {Y}}\) and \(\mathbf {r}^{(i)}\) denote the \(i\text {th}\) column of matrix \({\mathbf {R}}\), for \(i=1,2,\ldots ,L\). Then, from Lemma 2 of [22], we have
Summing the above relation, we obtain
Equivalently, we have
\(\square \)
Cite this article
Deepa, K.G., Ambat, S.K. & Hari, K.V.S. Fusion of sparse reconstruction algorithms for multiple measurement vectors. Sādhanā 41, 1275–1287 (2016). https://doi.org/10.1007/s12046-016-0552-1