A Performance Guarantee for Orthogonal Matching Pursuit Using Mutual Coherence

Abstract

In this paper, we present a new performance guarantee for the orthogonal matching pursuit (OMP) algorithm. We use mutual coherence as a metric for determining the suitability of an arbitrary overcomplete dictionary for exact recovery. Specifically, we derive a lower bound on the probability of correctly identifying the support of a sparse signal under additive white Gaussian noise, together with an upper bound on the mean square error. Unlike previous work, the new bounds account for signal parameters such as dynamic range, noise variance, and sparsity. Numerical simulations show significant improvements over previous bounds and a much closer match to the empirical behavior of OMP.
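To make these quantities concrete, here is a minimal sketch (ours, for illustration; not the authors' implementation, and the dictionary A, measurement y, and sparsity k are hypothetical inputs) that computes the mutual coherence of a dictionary and runs plain OMP, using NumPy only.

    # Illustrative sketch, not the authors' code: mutual coherence and plain OMP.
    import numpy as np

    def mutual_coherence(A):
        # Largest absolute inner product between distinct, normalized columns of A.
        A = A / np.linalg.norm(A, axis=0)
        G = np.abs(A.T @ A)
        np.fill_diagonal(G, 0.0)
        return G.max()

    def omp(A, y, k):
        # Greedy OMP: pick the atom most correlated with the residual, re-fit by
        # least squares on the selected support, and repeat k times.
        M, N = A.shape
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))
            support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        s_hat = np.zeros(N)
        s_hat[support] = coef
        return s_hat, sorted(support)

For a dictionary with unit-norm columns, mutual_coherence(A) is the quantity in which the bounds are stated, and checking whether omp(A, y, k) applied to y = As + w returns the true support of s is the support-recovery event whose probability is bounded.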


References

  1. Z. Ben-Haim, Y. Eldar, M. Elad, Coherence-based performance guarantees for estimating a sparse vector under random noise. IEEE Trans. Signal Process. 58(10), 5030–5043 (2010). doi:10.1109/TSP.2010.2052460

  2. G. Bennett, Probability inequalities for the sum of independent random variables. J. Am. Stat. Assoc. 57(297), 33–45 (1962)

  3. P. Bofill, M. Zibulevsky, Underdetermined blind source separation using sparse representations. Signal Process. 81(11), 2353–2362 (2001). doi:10.1016/S0165-1684(01)00120-7

  4. E.J. Candès, J.K. Romberg, T. Tao, Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006). doi:10.1002/cpa.20124

  5. A. Castrodad, Z. Xing, J.B. Greer, E. Bosch, L. Carin, G. Sapiro, Learning discriminative sparse representations for modeling, source separation, and mapping of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 49(11), 4263–4281 (2011). doi:10.1109/TGRS.2011.2163822

  6. S.S. Chen, D.L. Donoho, M.A. Saunders, Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20, 33–61 (1998)

  7. D. Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). doi:10.1109/TIT.2006.871582

  8. D. Donoho, X. Huo, Uncertainty principles and ideal atomic decomposition. IEEE Trans. Inf. Theory 47(7), 2845–2862 (2001). doi:10.1109/18.959265

  9. D. Donoho, M. Elad, V. Temlyakov, Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inf. Theory 52(1), 6–18 (2006). doi:10.1109/TIT.2005.860430

  10. D.L. Donoho, M. Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization. Proc. Natl. Acad. Sci. 100(5), 2197–2202 (2003). doi:10.1073/pnas.0437847100. http://www.pnas.org/content/100/5/2197.full.pdf

  11. M.F. Duarte, M.A. Davenport, D. Takbar, J.N. Laska, T. Sun, K.F. Kelly, R.G. Baraniuk, Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag. 25(2), 83–91 (2008). doi:10.1109/MSP.2007.914730

  12. B. Efron, T. Hastie, I. Johnstone, R. Tibshirani, Least angle regression. Ann. Stat. 32(2), 407–499 (2004)

  13. M. Elad, Sparse and Redundant Representations (Springer, New York, 2010). doi:10.1007/978-1-4419-7011-4

  14. M. Elad, M. Aharon, Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 15(12), 3736–3745 (2006). doi:10.1109/TIP.2006.881969

  15. Y.C. Eldar, G. Kutyniok (eds.), Compressed Sensing Theory and Applications (Cambridge University Press, Cambridge, 2012)

  16. M. Emadi, K. Sadeghi, DOA estimation of multi-reflected known signals in compact arrays. IEEE Trans. Aerosp. Electron. Syst. 49(3), 1920–1931 (2013). doi:10.1109/TAES.2013.6558028

  17. M.A.T. Figueiredo, R.D. Nowak, S.J. Wright, Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 1(4), 586–597 (2007). doi:10.1109/JSTSP.2007.910281

  18. G. Golub, C. Van Loan, Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences (Johns Hopkins University Press, Baltimore, MD, 1996)

  19. S.H. Hsieh, C.S. Lu, S.C. Pei, Fast OMP: reformulating OMP via iteratively refining l2-norm solutions, in 2012 IEEE Statistical Signal Processing Workshop (SSP), 2012, pp. 189–192. doi:10.1109/SSP.2012.6319656

  20. Z.M. Liu, Z.T. Huang, Y.Y. Zhou, Direction-of-arrival estimation of wideband signals via covariance matrix sparse representation. IEEE Trans. Signal Process. 59(9), 4256–4270 (2011). doi:10.1109/TSP.2011.2159214

  21. D. Malioutov, M. Cetin, A. Willsky, A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans. Signal Process. 53(8), 3010–3022 (2005). doi:10.1109/TSP.2005.850882

  22. S. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 41(12), 3397–3415 (1993). doi:10.1109/78.258082

  23. F. Marvasti, A. Amini, F. Haddadi, M. Soltanolkotabi, B.H. Khalaj, A. Aldroubi, S. Sanei, J. Chambers, A unified approach to sparse signal processing. EURASIP J. Adv. Signal Process. 2012, 44 (2012)

  24. E. Miandji, J. Kronander, J. Unger, Compressive image reconstruction in reduced union of subspaces. Comput. Graph. Forum 34(2), 33–44 (2015). doi:10.1111/cgf.12539

  25. D. Needell, J. Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009). doi:10.1016/j.acha.2008.07.002

  26. D. Needell, R. Vershynin, Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. IEEE J. Sel. Top. Signal Process. 4(2), 310–316 (2010). doi:10.1109/JSTSP.2010.2042412

  27. Y. Pati, R. Rezaiifar, P.S. Krishnaprasad, Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition, in Conference Record of the Twenty-Seventh Asilomar Conference on Signals, Systems and Computers, 1993, pp. 40–44. doi:10.1109/ACSSC.1993.342465

  28. G. Pope, Compressive sensing: a summary of reconstruction algorithms, Master’s thesis, ETH Zürich, 2009

  29. R. Tibshirani, Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 58(1), 267–288 (1996)

  30. J. Tropp, Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 50(10), 2231–2242 (2004). doi:10.1109/TIT.2004.834793

  31. J. Tropp, Just relax: convex programming methods for identifying sparse signals in noise. IEEE Trans. Inf. Theory 52(3), 1030–1051 (2006). doi:10.1109/TIT.2005.864420

  32. E. van den Berg, M.P. Friedlander, Probing the pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31(2), 890–912 (2009). doi:10.1137/080714488

  33. S. Wright, R. Nowak, M. Figueiredo, Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 57(7), 2479–2493 (2009). doi:10.1109/TSP.2009.2016892

  34. J. Yang, J. Wright, T.S. Huang, Y. Ma, Image super-resolution via sparse representation. IEEE Trans. Image Process. 19(11), 2861–2873 (2010). doi:10.1109/TIP.2010.2050625

Author information

Correspondence to Mohammad Emadi or Ehsan Miandji.

Appendix

Proof of Lemma 1

Expanding \(\varGamma_j\), we can show that

$$\begin{aligned} \varGamma_j &= |\langle \mathbf{A}_j,\mathbf{A}\mathbf{s}+\mathbf{w}\rangle | \nonumber \\ &= \left| \sum_{m=1}^{M}\mathbf{A}_{m,j}\left( \sum_{n=1}^{N}\mathbf{A}_{m,n}\mathbf{s}_n+\mathbf{w}_m\right) \right| \nonumber \\ &= \left| \sum_{n=1}^{N}\left\{ \sum_{m=1}^{M}\mathbf{A}_{m,j}\mathbf{A}_{m,n}\mathbf{s}_n + \frac{1}{N}\sum_{m=1}^{M}\mathbf{A}_{m,j}\mathbf{w}_m\right\} \right| . \end{aligned}$$
(30)

Using (5), we have that

$$\begin{aligned} \varGamma_j = \left| \sum_{n=1}^{N}\left\{ \mu_{j,n}\mathbf{s}_n+\frac{1}{N}\langle \mathbf{A}_j,\mathbf{w}\rangle \right\} \right| = \left| \sum_{n=1}^{N}\mathbf{x}_n\right| . \end{aligned}$$
(31)

As mentioned in Sect. 1, we assume that the elements of the sparse vector \(\mathbf{s}\) are centered random variables: each element of \(\mathbf{s}\) is either zero or a zero-mean random variable, so \(\mathrm{E}\{\mathbf{s}_n\}=0\) for all \(n=1,\dots,N\). Together with the fact that \(\mathrm{E}\{\mathbf{w}\}=0\), we have

$$\begin{aligned} \mathrm{E}\{\mathbf{x}_n\} = \mu_{j,n}\mathrm{E}\{\mathbf{s}_n\} + N^{-1}\mathrm{E}\{\langle \mathbf{A}_j,\mathbf{w}\rangle \} = 0, \end{aligned}$$
(32)

for all \(n=1,\dots,N\). According to Bernstein’s inequality [2], if \(\mathbf{x}_1,\dots,\mathbf{x}_N\) are independent real random variables with zero mean, where \(\mathrm{E}\left\{ \mathbf{x}_n^2\right\} \le \nu\) and \(\mathrm{Pr}\{|\mathbf{x}_n| < c\}=1\), then

$$\begin{aligned} \mathrm{Pr}\left\{ \left| \sum_{n=1}^{N}\mathbf{x}_n\right| \ge \xi \right\} &\le 2\exp \left( \frac{-\xi^2}{2\left( \sum_{n=1}^{N}\mathrm{E}\left\{ \mathbf{x}_n^2\right\} + c\xi /3\right) }\right) \nonumber \\ &\le 2\exp \left( \frac{-\xi^2}{2(N\nu + c\xi /3)}\right) , \end{aligned}$$
(33)

where the second inequality in (33) follows using (12). This completes the proof.
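To see how the constants \(\nu\) and c enter (33), the following Monte Carlo sketch (ours, illustrative only; bounded uniform variables stand in for the \(\mathbf{x}_n\)) compares the empirical tail probability of \(|\sum_{n}\mathbf{x}_n|\) with the right-hand side of the Bernstein bound.

    # Illustrative Monte Carlo check of Bernstein's inequality (33); not from the
    # paper. Zero-mean variables x_n ~ Uniform(-c, c) satisfy |x_n| <= c and
    # E{x_n^2} = c^2/3 <= nu.
    import numpy as np

    rng = np.random.default_rng(0)
    N, c = 100, 1.0
    nu = c**2 / 3.0
    x = rng.uniform(-c, c, size=(100_000, N))   # 100,000 trials of N variables
    sums = np.abs(x.sum(axis=1))
    for xi in (5.0, 10.0, 20.0):
        empirical = np.mean(sums >= xi)
        bound = 2.0 * np.exp(-xi**2 / (2.0 * (N * nu + c * xi / 3.0)))
        print(f"xi={xi:5.1f}  empirical={empirical:.5f}  bound={bound:.5f}")

As expected, the bound is loose for small \(\xi\) but decays exponentially and always lies above the empirical tail.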

Proof of Lemma 2

Equation (13) follows trivially from the triangle inequality. For (14), we have

$$\begin{aligned} \mathrm{E}\left\{ \mathbf{x}_n^2\right\} = \frac{1}{N} \sum_{n=1}^{N} \mathrm{E}\left\{ \mathbf{x}_n^2\right\} \end{aligned}$$
(34)
$$\begin{aligned} = \frac{1}{N} \sum_{n=1}^{N} \mathrm{E}\left\{ \mu_{j,n}^2 \mathbf{s}_n^2 + \frac{1}{N^2}\langle \mathbf{A}_j,\mathbf{w}\rangle^2 + \frac{2}{N} \mu_{j,n}\mathbf{s}_n\langle \mathbf{A}_j,\mathbf{w}\rangle \right\} \end{aligned}$$
(35)
$$\begin{aligned} = \frac{1}{N} \sum_{n=1}^{N} \mu_{j,n}^2\, \mathrm{E}\left\{ \mathbf{s}_n^2\right\} + \frac{1}{N^2}\mathrm{E}\left\{ \langle \mathbf{A}_j,\mathbf{w}\rangle^2\right\} + \underbrace{\frac{2}{N} \mathrm{E}\left\{ \mu_{j,n}\mathbf{s}_n\langle \mathbf{A}_j,\mathbf{w}\rangle \right\} }_{0} \end{aligned}$$
(36)
$$\begin{aligned} \le \frac{\tau}{N} \mu_\mathrm{max}^2 \mathbf{s}_\mathrm{max}^2 + \frac{1}{N^2} \mathrm{E}\left\{ \langle \mathbf{A}_j,\mathbf{w}\rangle^2\right\} , \end{aligned}$$
(37)

where the last term in (36) is zero since \(\mathbf{s}\) is independent of the noise and \(\mathrm{E}\{\mathbf{w}\}=0\), which implies \(\mathrm{E}\{\langle \mathbf{A}_j,\mathbf{w}\rangle\}=0\). Moreover, we have

$$\begin{aligned} \mathrm{E}\left\{ \langle \mathbf{A}_j,\mathbf{w}\rangle^2\right\} &= \mathrm{E}\left\{ \left( \sum_{m=1}^{M}\mathbf{A}_{m,j}\mathbf{w}_m\right) \left( \sum_{m'=1}^{M}\mathbf{A}_{m',j}\mathbf{w}_{m'}\right) \right\} \nonumber \\ &= \sum_{m=1}^{M} \mathbf{A}_{m,j}^2\, \mathrm{E}\left\{ \mathbf{w}_m^2\right\} = \sigma^2 \sum_{m=1}^{M} \mathbf{A}_{m,j}^2 = \sigma^2 . \end{aligned}$$
(38)

Here the cross terms vanish because the noise samples are mutually uncorrelated, and the last equality holds since the dictionary atoms are assumed to have unit \(\ell_2\) norm, so \(\sum_{m=1}^{M}\mathbf{A}_{m,j}^2=1\). Combining (37) and (38) completes the proof.
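As a quick numerical check of (38) (ours, illustrative; it assumes a unit-norm atom and i.i.d. Gaussian noise of variance \(\sigma^2\)), the sample second moment of \(\langle \mathbf{A}_j,\mathbf{w}\rangle\) should approach \(\sigma^2\):

    # Illustrative check of (38); not from the paper. For a unit-norm column A_j
    # and white Gaussian noise w with variance sigma^2, E{<A_j, w>^2} = sigma^2.
    import numpy as np

    rng = np.random.default_rng(1)
    M, sigma = 64, 0.5
    A_j = rng.standard_normal(M)
    A_j /= np.linalg.norm(A_j)                      # unit-norm atom
    w = sigma * rng.standard_normal((200_000, M))   # AWGN realizations
    print(np.mean((w @ A_j) ** 2))                  # ~ sigma**2 = 0.25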


Cite this article

Emadi, M., Miandji, E. & Unger, J. A Performance Guarantee for Orthogonal Matching Pursuit Using Mutual Coherence. Circuits Syst Signal Process 37, 1562–1574 (2018). https://doi.org/10.1007/s00034-017-0602-x
