
Fusion of sparse reconstruction algorithms for multiple measurement vectors


Abstract

We consider the recovery of sparse signals that share a common support from multiple measurement vectors. The performance of several algorithms developed for this task depends on parameters such as the dimension of the sparse signal, the dimension of the measurement vectors, the sparsity level and the measurement noise. We propose a fusion framework in which several multiple-measurement-vector reconstruction algorithms participate, and the final signal estimate is obtained by combining the signal estimates of the participating algorithms. We present conditions for achieving a reconstruction performance better than that of the participating algorithms. Numerical simulations demonstrate that the proposed fusion algorithm often performs better than the participating algorithms.
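To make the fusion idea concrete, the following Python sketch (in the spirit of the authors' earlier fusion work for compressed sensing [11], not the exact algorithm of this paper; all function and variable names are ours) pools the row supports suggested by the participating MMV algorithms, solves a joint least-squares problem on the union support and prunes to the K strongest rows:

import numpy as np

def fuse_mmv_estimates(A, Y, estimates, K):
    # A: (m, n) measurement matrix; Y: (m, L) measurement vectors;
    # estimates: list of (n, L) estimates from participating algorithms;
    # K: common row-sparsity level of the unknown signal.

    # Pool the K strongest rows (by l2 norm) suggested by each algorithm.
    union = set()
    for X_hat in estimates:
        union.update(np.argsort(np.linalg.norm(X_hat, axis=1))[-K:].tolist())
    union = np.array(sorted(union))

    # Joint least squares restricted to the union support.
    X_ls = np.zeros((A.shape[1], Y.shape[1]))
    X_ls[union] = np.linalg.lstsq(A[:, union], Y, rcond=None)[0]

    # Keep the K strongest rows of the fused estimate and re-fit.
    keep = np.sort(np.argsort(np.linalg.norm(X_ls, axis=1))[-K:])
    X_fused = np.zeros_like(X_ls)
    X_fused[keep] = np.linalg.lstsq(A[:, keep], Y, rcond=None)[0]
    return X_fused

The least-squares re-fit on the pruned support mirrors the pruning step common to greedy MMV recovery; whether such a fusion actually improves on the participating algorithms depends on the conditions derived in the paper.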


References

  1. Cotter S F, Rao B D, Engan K and Kreutz-Delgado K 2005 Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Trans. Signal Process. 53(7): 2477–2488


  2. Gorodnitsky I F, George J S and Rao B D 1995 Neuromagnetic source imaging with FOCUSS: A recursive weighted minimum norm algorithm. J. Electroencephal. Clin. Neurophysiol. 95(4): 231–251


  3. Gorodnitsky I F and Rao B D 1997 Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm. IEEE Trans. Signal Process. 45(3): 600–616


  4. Malioutov D, Cetin M and Willsky A S 2005 A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans. Signal Process. 53(8): 3010–3022


  5. Stoica P and Moses R L 1997 Introduction to spectral analysis. Upper Saddle River, N.J.: Prentice Hall


  6. Cotter S F and Rao B D 2002 Sparse channel estimation via matching pursuit with application to equalization. IEEE Trans. Commun. 50(3): 374–377


  7. Tropp J A 2006 Algorithms for simultaneous sparse approximation. Part II: Convex relaxation. Signal Process. 86(3): 589–602 (special issue on Sparse Approximations in Signal and Image Processing)

  8. Wipf D P and Rao B D 2007 An empirical Bayesian strategy for solving the simultaneous sparse approximation problem. IEEE Trans. Signal Process. 55(7): 3704–3716


  9. Zhang Z and Rao B D 2011 Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning. IEEE J. Sel. Topics Signal Process. 5(5): 912–926


  10. Elad M and Yavneh I 2009 A plurality of sparse representations is better than the sparsest one alone. IEEE Trans. Inf. Theory 55(10): 4701–4714


  11. Ambat S K, Chatterjee S and Hari K V S 2013a Fusion of algorithms for compressed sensing. IEEE Trans. Signal Process. 61(14): 3699–3704


  12. Ambat S K, Chatterjee S and Hari K V S 2013b Fusion of algorithms for compressed sensing. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5860–5864

  13. Ambat S K, Chatterjee S and Hari K V S 2014 Progressive fusion of reconstruction algorithms for low latency applications in compressed sensing. Signal Process. 97: 146–151


  14. Ambat S K, Chatterjee S and Hari K V S 2012 Fusion of greedy pursuits for compressed sensing signal reconstruction. In: 20th European Signal Processing Conference (EUSIPCO 2012), Bucharest, Romania

  15. Ambat S K, Chatterjee S and Hari K V S 2014a A committee machine approach for compressed sensing signal reconstruction. IEEE Trans. Signal Process. 62(7): 1705–1717


  16. Needell D and Tropp J A 2009 CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3): 301–321


  17. Gribonval R, Rauhut H, Schnass K and Vandergheynst P 2008 Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms. J. Fourier Anal. Appl. 14(5): 655–687


  18. van den Berg E and Friedlander M P 2009 Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31(2): 890–912


  19. van den Berg E and Friedlander M P 2007 SPGL1: A solver for large-scale sparse reconstruction. http://www.cs.ubc.ca/labs/scl/spgl1

  20. Goldberger A L, Amaral L A N, Glass L, Hausdorff J M, Ivanov P C, Mark R G, Mietus J E, Moody G B, Peng C-K and Stanley H E 2000 PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 101(23): e215–e220. Circulation electronic pages: http://circ.ahajournals.org/cgi/content/full/101/23/e215; doi:10.1161/01.CIR.101.23.e215

  21. Ambat S K, Chatterjee S and Hari K V S 2014b Progressive fusion of reconstruction algorithms for low latency applications in compressed sensing. Signal Process. 97: 146–151


  22. Dai W and Milenkovic O 2009 Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 55(5): 2230–2249



Author information


Corresponding author

Correspondence to Sooraj K Ambat.

Appendix

Proof of lemma 1

The proof is inspired by Proposition 3.5 of Needell and Tropp [16].

Define the set \(S\) as the convex hull of all matrices that are \(R+K\) sparse and have unit Frobenius norm:

$$\begin{aligned} S=\hbox {conv}\left\{ {\mathbf {X}}: \left\| {\mathbf {X}}\right\| _0 \le R+K, \left\| {\mathbf {X}}\right\| _F=1 \right\} . \end{aligned}$$

Using the relation \(\left\| {\mathbf {AX}}\right\| _F \le \sqrt{1+\delta _{R+K}} \left\| {\mathbf {X}}\right\| _F\), which holds for every \(R+K\) sparse \({\mathbf {X}}\) by the restricted isometry property and extends to all of \(S\) by convexity of the norm, we get

$$\begin{aligned} \max _{{\mathbf {X}} \in S} \left\| {\mathbf {AX}}\right\| _F \le \sqrt{1+\delta _{R+K}}. \end{aligned}$$

Define

$$\begin{aligned} Q=\left\{ {\mathbf {X}}: \left\| {\mathbf {X}}\right\| _F + \frac{1}{\sqrt{R+K}} \left\| {\mathbf {X}}\right\| _{2,1} \le 1 \right\} . \end{aligned}$$
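Here \(\left\| {\mathbf {X}}\right\| _{2,1}\) denotes the sum of the row \(\ell _2\)-norms of \({\mathbf {X}}\) and, below, \(\left\| {\mathbf {X}}\right\| _{2,\infty }\) the largest row \(\ell _2\)-norm. A minimal numpy sketch of these mixed norms and of membership in \(Q\) (function names are ours):

import numpy as np

def row_norms(X):
    # l2 norm of each row of X
    return np.linalg.norm(X, axis=1)

def norm_21(X):
    # ||X||_{2,1}: sum of the row l2 norms
    return row_norms(X).sum()

def norm_2inf(X):
    # ||X||_{2,inf}: largest row l2 norm
    return row_norms(X).max()

def in_Q(X, R, K):
    # Membership test for the set Q defined above
    return np.linalg.norm(X, 'fro') + norm_21(X) / np.sqrt(R + K) <= 1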

The lemma essentially claims that

$$\begin{aligned} \max _{{\mathbf {X}} \in Q} \left\| {\mathbf {AX}}\right\| _F \le \max _{{\mathbf {X}} \in S} \left\| {\mathbf {AX}}\right\| _F. \end{aligned}$$

To prove this, it suffices to show that \(Q \subset S\).

Consider a matrix \({\mathbf {X}} \in Q\). Partition the support of \({\mathbf {X}}\) into sets of size \(R+K\). Let the set \({\mathcal {I}}_0\) contain the indices of the \(R+K\) rows of \({\mathbf {X}}\) with the largest row \(\ell _2\)-norms, breaking ties lexicographically. Let the set \({\mathcal {I}}_1\) contain the indices of the next \(R+K\) largest rows (in row \(\ell _2\)-norm), and so on. The final block \({\mathcal {I}}_J\) may have fewer than \(R+K\) elements. This partition gives rise to the following decomposition:

$$\begin{aligned} {\mathbf {X}}={\mathbf {X}}|_{{\mathcal {I}}_0}+\sum _{j=1}^{J}{\mathbf {X}}|_{{\mathcal {I}}_j} = \lambda _0 {\mathbf {Y}}_0 + \sum _{j=1}^{J} \lambda _j {\mathbf {Y}}_j, \end{aligned}$$

where \(\displaystyle \lambda _j =\left\| {\mathbf {X}}|_{{\mathcal {I}}_j}\right\| _F \quad \text {and} \quad {\mathbf {Y}}_j= \lambda _j^{-1} {\mathbf {X}}|_{{\mathcal {I}}_j}.\)

By construction, each matrix \({\mathbf {Y}}_j\) belongs to S because it is \(R+K\) sparse and has unit Frobenius norm. We will show that \(\sum _{j} \lambda _j \le 1\). This implies that \({\mathbf {X}}\) can be written as a convex combination of matrices from the set S. As a result \({\mathbf {X}} \in S\). Therefore, \(Q\subset S\).

Fix some j in the range \(\{1,2, \ldots , J\}\). Then, \({\mathcal {I}}_j\) contains at most \(R+K\) elements and \({\mathcal {I}}_{j-1}\) contains exactly \(R+K\) elements. Therefore,

$$\begin{aligned} \lambda _j = \left\| {\mathbf {X}}|_{{\mathcal {I}}_j}\right\| _F \le \sqrt{R+K}\, \left\| {\mathbf {X}}|_{{\mathcal {I}}_j}\right\| _{2,\infty } \le \sqrt{R+K} \cdot \frac{1}{R+K} \left\| {\mathbf {X}}|_{{\mathcal {I}}_{j-1}}\right\| _{2,1}. \end{aligned}$$

The first inequality holds because \({\mathbf {X}}|_{{\mathcal {I}}_j}\) has at most \(R+K\) nonzero rows; the last holds because every row \(\ell _2\)-norm of \({\mathbf {X}}\) on \({\mathcal {I}}_{j-1}\) dominates the largest row \(\ell _2\)-norm on \({\mathcal {I}}_j\), so the average row norm on \({\mathcal {I}}_{j-1}\), namely \(\frac{1}{R+K}\left\| {\mathbf {X}}|_{{\mathcal {I}}_{j-1}}\right\| _{2,1}\), bounds \(\left\| {\mathbf {X}}|_{{\mathcal {I}}_j}\right\| _{2,\infty }\). Summing these relations, we get

$$\begin{aligned} \sum _{j=1}^{J} \lambda _j \le \frac{1}{\sqrt{R+K}} \sum _{j=1}^J \left\| {\mathbf {X}}|_{{\mathcal {I}}_{j-1}}\right\| _{2,1} \le \frac{1}{\sqrt{R+K}} \left\| {\mathbf {X}}\right\| _{2,1}. \end{aligned}$$

Also, we have \(\lambda _0 = \left\| {\mathbf {X}}|_{{\mathcal {I}}_0}\right\| _F \le \left\| {\mathbf {X}}\right\| _F\). Since \({\mathbf {X}} \in Q\), we conclude that

$$\begin{aligned} \sum _{j=0}^{J} \lambda _j \le \left[ \left\| {\mathbf {X}}\right\| _F + \frac{1}{\sqrt{R+K}} \left\| {\mathbf {X}}\right\| _{2,1}\right] \le 1. \end{aligned}$$

\(\square \)
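As a numerical sanity check of this argument (our own illustration, not part of the original proof), one can draw a random \({\mathbf {X}}\), scale it into \(Q\), build the block partition described above and confirm that the coefficients \(\lambda _j\) sum to at most one:

import numpy as np

rng = np.random.default_rng(0)
R, K, n, L = 4, 4, 40, 5           # illustrative sizes, chosen by us
X = rng.standard_normal((n, L))

# Scale X so that ||X||_F + ||X||_{2,1}/sqrt(R+K) = 1, placing X in Q.
X /= np.linalg.norm(X, 'fro') + np.linalg.norm(X, axis=1).sum() / np.sqrt(R + K)

# Partition the row indices into blocks of size R+K by decreasing row l2 norm.
order = np.argsort(-np.linalg.norm(X, axis=1))
blocks = [order[i:i + R + K] for i in range(0, n, R + K)]

# lambda_j is the Frobenius norm of X restricted to the j-th block.
lambdas = [np.linalg.norm(X[b], 'fro') for b in blocks]
print(sum(lambdas))                # <= 1, as the proof guarantees
assert sum(lambdas) <= 1 + 1e-12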

Proof of lemma 2

Let \(\mathbf {y}^{(i)}\) denote the \(i\)th column of the matrix \({\mathbf {Y}}\) and \(\mathbf {r}^{(i)}\) the \(i\)th column of the matrix \({\mathbf {R}}\), for \(i=1,2,\ldots ,L\). Then, from lemma 2 of [22], we have

$$\begin{aligned} \left( 1-\frac{\delta _{|{\mathcal {T}}_1|+|{\mathcal {T}}_2|}}{1-\delta _{|{\mathcal {T}}_1|+|{\mathcal {T}}_2|}} \right) \left\| {\mathbf {y}}^{(i)}\right\| _2&\le \left\| \mathbf {r}^{(i)}\right\| _2 \le \left\| {\mathbf {y}}^{(i)}\right\| _2. \end{aligned}$$

Squaring and summing this relation over \(i\), we obtain

$$\begin{aligned} \left( 1-\frac{\delta _{|{\mathcal {T}}_1|+|{\mathcal {T}}_2|}}{1-\delta _{|{\mathcal {T}}_1|+|{\mathcal {T}}_2|}} \right)^2 \sum _{i=1}^{L}\left\| {\mathbf {y}}^{(i)}\right\| _2^2&\le \sum _{i=1}^L\left\| \mathbf {r}^{(i)}\right\| _2^2 \le \sum _{i=1}^L\left\| {\mathbf {y}}^{(i)}\right\| _2^2. \end{aligned}$$

Taking square roots and using \(\sum _{i=1}^{L}\left\| {\mathbf {y}}^{(i)}\right\| _2^2 = \left\| {\mathbf {Y}}\right\| _F^2\) (and likewise for \({\mathbf {R}}\)), we have

$$\begin{aligned} \left( 1-\frac{\delta _{|{\mathcal {T}}_1|+|{\mathcal {T}}_2|}}{1-\delta _{|{\mathcal {T}}_1|+|{\mathcal {T}}_2|}} \right) \left\| {\mathbf {Y}}\right\| _F&\le \left\| {\mathbf {R}}\right\| _F \le \left\| {\mathbf {Y}}\right\| _F. \end{aligned}$$

\(\square \)
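The aggregation step above relies only on the identity \(\sum _{i=1}^{L} \left\| \mathbf {y}^{(i)}\right\| _2^2 = \left\| {\mathbf {Y}}\right\| _F^2\), i.e., the squared Frobenius norm is the sum of the squared column \(\ell _2\)-norms. A quick numerical check (with a random \({\mathbf {Y}}\) of our own choosing, purely illustrative):

import numpy as np

Y = np.random.default_rng(1).standard_normal((30, 5))
col_sq = sum(np.linalg.norm(Y[:, i])**2 for i in range(Y.shape[1]))
assert np.isclose(col_sq, np.linalg.norm(Y, 'fro')**2)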



Cite this article

Deepa, K.G., Ambat, S.K. & Hari, K.V.S. Fusion of sparse reconstruction algorithms for multiple measurement vectors. Sādhanā 41, 1275–1287 (2016). https://doi.org/10.1007/s12046-016-0552-1
