The Geometry of Compressed Sensing

Compressed Sensing & Sparse Filtering

Part of the book series: Signals and Communication Technology (SCT)

Abstract

Most developments in compressed sensing have revolved around the exploitation of signal structures that are expressed and understood most easily through a geometrical interpretation. This geometric point of view not only underlies many of the initial theoretical developments on which much of the theory of compressed sensing is built, but has also allowed ideas to be extended to much more general recovery problems and structures. A unifying framework is that of non-convex, low-dimensional constraint sets in which the signal to be recovered is assumed to reside. The sparse signal structure of traditional compressed sensing translates into a union of low-dimensional subspaces, each spanned by a small number of the coordinate axes. This union-of-subspaces interpretation is readily generalised, and many other recovery problems can be seen to fall into the same setting. For example, instead of vector data, in many problems the data are more naturally expressed in matrix form (a video, for instance, is often best represented as a pixel-by-time matrix). A powerful constraint on matrices is a constraint on the matrix rank: in low-rank matrix recovery, the goal is to reconstruct a low-rank matrix given only a subset of its entries. Importantly, low-rank matrices also lie in a union of subspaces, although now there are infinitely many subspaces (though each of them is finite dimensional). Many other examples of union-of-subspaces signal models appear in applications, including sparse wavelet-tree structures (which form a subset of the general sparse model) and finite rate of innovation models, where we can have infinitely many infinite-dimensional subspaces. In this chapter, I will provide an introduction to these and related geometrical concepts and will show how they can be used to (a) develop algorithms that recover signals with a given structure and (b) derive theoretical results that characterise the performance of these algorithmic approaches.
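To make the union-of-subspaces viewpoint concrete, the following is a minimal sketch of sparse recovery by iterative hard thresholding, in the spirit of the algorithms the chapter develops: a gradient step towards the least-squares solution, followed by projection onto the nearest k-dimensional coordinate subspace. The NumPy implementation, function names, step-size choice, and toy problem sizes are illustrative assumptions, not code from the chapter.

```python
import numpy as np

def iht(y, A, k, n_iters=200, step=None):
    """Recover a k-sparse x from y ~= A @ x by iterative hard thresholding.

    Each iteration takes a gradient step towards the least-squares solution
    and then projects onto the union of k-dimensional coordinate subspaces
    by keeping only the k largest-magnitude entries.
    """
    if step is None:
        # A conservative choice; any step below 1/||A||^2 keeps the
        # gradient updates stable.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x + step * A.T @ (y - A @ x)    # gradient step on ||y - A x||^2
        small = np.argsort(np.abs(x))[:-k]  # all but the k largest entries
        x[small] = 0.0                      # hard threshold: project onto the model
    return x

# Toy demo on a random Gaussian sensing matrix (sizes chosen arbitrarily).
rng = np.random.default_rng(0)
n, m, k = 256, 100, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = iht(y, A, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The same project-then-iterate template carries over to the low-rank setting: replacing the hard-thresholding step with a truncated singular value decomposition projects onto the set of rank-r matrices instead.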


Notes

  1. In this chapter, we use two, somewhat different, meanings for the term vector. On the one hand, we call any one-dimensional array of real or complex numbers a vector; this is the meaning used here. Below, we will introduce a more abstract definition of vectors as elements of some mathematical space. Which of the two definitions is appropriate at any point in this chapter should be clear from the context.

  2. A subset \(\mathcal{S}\subset\mathcal{H}\) is called closed if every sequence with elements in \(\mathcal{S}\) that converges to an element of \(\mathcal{H}\) has its limit in \(\mathcal{S}\) itself.

  3. That is, a signal whose Fourier transform \(\mathcal{X}(f)\) is assumed to be zero outside the set \(S\subset [-B_N,\ B_N]\).

  4. We assume here that we use the norm \(\sqrt{\sum_i x_i^2}\), though other Euclidean norms can be treated with equal ease.

  5. Note that for \(\delta<\frac{\Vert P_\mathcal{S}(\mathbf{x})\Vert}{\Vert\widetilde{\mathbf{e}}\Vert}\) and for \(\mu\) and \(\alpha\) as in the theorem, both the numerator and the denominator in the iteration count are negative, so that a decrease in \(\delta\) leads to an increase in the required number of iterations. If we were to choose \(\delta\) such that the numerator becomes positive, we would get a negative number of iterations. This has to be interpreted as meaning that we do not actually need to run the algorithm at all, as the associated estimation error is already achieved by the initial estimate \({\hat{\mathbf{x}}}=\mathbf{x}^0=\mathbf{0}\).


Acknowledgments

This work was supported in part by the UK's Engineering and Physical Sciences Research Council (grants EP/J005444/1 and D000246/1) and by a Research Fellowship from the School of Mathematics at the University of Southampton.

Author information

Correspondence to Thomas Blumensath.


Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Blumensath, T. (2014). The Geometry of Compressed Sensing. In: Carmi, A., Mihaylova, L., Godsill, S. (eds) Compressed Sensing & Sparse Filtering. Signals and Communication Technology. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-38398-4_2


  • DOI: https://doi.org/10.1007/978-3-642-38398-4_2


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-38397-7

  • Online ISBN: 978-3-642-38398-4

  • eBook Packages: Engineering, Engineering (R0)
