Sparse Representations

  • Bogdan Dumitrescu
  • Paul Irofti
Chapter

Abstract

Sparse representations using overcomplete dictionaries have found many signal processing applications. We present the main ways of formulating sparse approximation problems and discuss their advantages over classical orthogonal transforms. The foremost difficulty is the computation of sparse representations, since it amounts to finding the sparsest among the infinitely many solutions of an underdetermined linear system, a problem of combinatorial character. The most successful classes of algorithms are based on greedy approaches and convex relaxation; we describe in detail a representative algorithm from each class, namely Orthogonal Matching Pursuit and FISTA. In some circumstances these algorithms are guaranteed to find the sparsest solution, and we present sets of conditions that ensure their success. In preparation for stating the dictionary learning problem, we weigh the advantages and drawbacks of learned dictionaries against fixed ones. Since learning is based on training signals from the application at hand, adapted dictionaries have the potential for more faithful sparse representations, an advantage that outweighs the need for (mostly offline) extra computation.
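The greedy approach mentioned in the abstract can be illustrated with a minimal Orthogonal Matching Pursuit sketch. This is an illustrative NumPy implementation, not the chapter's own code; the dictionary `D`, signal `y`, and sparsity level `s` in the demo are assumptions chosen for the example.

```python
import numpy as np

def omp(D, y, s):
    """Orthogonal Matching Pursuit (sketch): greedily select atoms
    (columns of D, assumed l2-normalized) and jointly re-fit their
    coefficients by least squares until s atoms are chosen."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(s):
        # pick the atom most correlated with the current residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        support.append(k)
        # orthogonal projection: refit all selected coefficients together
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# Demo: recover a 2-sparse vector from a random overcomplete dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((100, 200))
D /= np.linalg.norm(D, axis=0)          # normalize the atoms
x_true = np.zeros(200)
x_true[[5, 141]] = [1.5, -2.0]
y = D @ x_true                          # noiseless sparse signal
x_hat = omp(D, y, 2)
```

The key difference from plain Matching Pursuit is the least-squares refit over the whole support at every step, which keeps the residual orthogonal to all atoms selected so far.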


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Bogdan Dumitrescu, Department of Automatic Control and Systems Engineering, Faculty of Automatic Control and Computers, University Politehnica of Bucharest, Bucharest, Romania
  • Paul Irofti, Department of Computer Science, Faculty of Mathematics and Computer Science, University of Bucharest, Bucharest, Romania