The Geometry of Compressed Sensing

Thomas Blumensath
Chapter, part of the Signals and Communication Technology (SCT) book series

Abstract

Most developments in compressed sensing have revolved around the exploitation of signal structures that are expressed and understood most easily through a geometrical interpretation. This geometric point of view not only underlies many of the initial theoretical developments on which much of the theory of compressed sensing is built, but has also allowed the ideas to be extended to much more general recovery problems and structures. A unifying framework is that of non-convex, low-dimensional constraint sets in which the signal to be recovered is assumed to reside. The sparse signal structure of traditional compressed sensing translates into a union of low-dimensional subspaces, each subspace spanned by a small number of the coordinate axes. This union-of-subspaces interpretation is readily generalised, and many other recovery problems can be seen to fall into this setting. For example, instead of vector data, in many problems the data is more naturally expressed in matrix form (a video, for instance, is often best represented as a pixel-by-time matrix). A powerful constraint on matrices is a bound on the matrix rank. In low-rank matrix recovery, for example, the goal is to reconstruct a low-rank matrix given only a subset of its entries. Importantly, low-rank matrices also have a union-of-subspaces structure, although now there are infinitely many subspaces (though each of these is finite dimensional). Many other examples of union-of-subspaces signal models appear in applications, including sparse wavelet-tree structures (which form a subset of the general sparse model) and finite rate of innovation models, where there can be infinitely many infinite-dimensional subspaces. In this chapter, I will provide an introduction to these and related geometrical concepts and will show how they can be used to (a) develop algorithms to recover signals with given structures and (b) derive theoretical results that characterise the performance of these algorithmic approaches.
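To make the union-of-subspaces picture concrete, the following is a minimal sketch of iterative hard thresholding, one of the projected-gradient algorithms discussed in this chapter: a gradient step on the least-squares error followed by projection onto the union of k-sparse coordinate subspaces. The problem sizes, step size and Gaussian sensing matrix below are illustrative choices, not values taken from the chapter.

```python
import numpy as np

def hard_threshold(x, k):
    # Projection onto the union of k-sparse coordinate subspaces:
    # keep the k largest-magnitude entries and zero the rest.
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def iht(A, y, k, mu=1.0, n_iter=300):
    # Iterative hard thresholding: a gradient step on ||y - Ax||^2
    # followed by projection back onto the sparse constraint set.
    # A fixed step size mu is used here for simplicity; in practice
    # a normalised (adaptively chosen) step size is more robust.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + mu * A.T @ (y - A @ x), k)
    return x

# Toy problem: recover a 5-sparse vector from 128 linear measurements.
rng = np.random.default_rng(0)
m, n, k = 128, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)  # columns have unit norm in expectation
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
y = A @ x_true
x_hat = iht(A, y, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

The same projected-gradient template carries over to other union-of-subspaces models by swapping the projection step; for low-rank matrix recovery, for instance, the hard-thresholding projection is replaced by a truncated singular value decomposition.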

Acknowledgments

This work was supported in part by the UK's Engineering and Physical Sciences Research Council under grants EP/J005444/1 and D000246/1, and by a Research Fellowship from the School of Mathematics at the University of Southampton.


Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

ISVR Signal Processing and Control Group, University of Southampton, Southampton, UK
