The Fundamentals of Compressed Sensing

Sparse Representation, Modeling and Learning in Visual Recognition

Part of the book series: Advances in Computer Vision and Pattern Recognition ((ACVPR))

Abstract

In this chapter, we introduce the basic concepts of compressed sensing. First, we briefly review the Shannon-Nyquist sampling theorem. Then, we present the fundamentals of compressed sensing and sparse representation, such as the relations between norms, the incoherence condition, the restricted isometry property (RIP), and the equivalence of the \(\ell _0\) and \(\ell _1\) norms. Third, we describe the basic properties of sparsity. Lastly, we give a brief overview of well-known sparse convex optimization methods, including the subgradient method, greedy methods, Bayesian methods, and the augmented Lagrangian method.
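
To make the recovery problem concrete, here is a minimal sketch (not from the chapter) that recovers a synthetic \(k\)-sparse vector from \(m \ll n\) random Gaussian measurements by solving the \(\ell _1\)-regularized least-squares problem \(\min _x \frac{1}{2}\Vert Ax-y\Vert _2^2+\lambda \Vert x\Vert _1\) with a plain iterative shrinkage-thresholding (ISTA) loop; the dimensions, the regularization weight lam, and the iteration count are illustrative assumptions, not values taken from the text.

```python
# Illustrative compressed-sensing recovery via ISTA (a sketch, not the chapter's code).
import numpy as np

def soft_threshold(v, tau):
    """Entrywise soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam=0.05, n_iter=500):
    """Minimize 0.5 * ||A x - y||_2^2 + lam * ||x||_1 with a fixed step size 1/L."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth term's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)         # gradient of 0.5 * ||A x - y||_2^2
        x = soft_threshold(x - grad / L, lam / L)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 200, 60, 5                             # ambient dimension, measurements, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)     # random Gaussian sensing matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true                                   # noiseless linear measurements
    x_hat = ista(A, y)
    print("relative recovery error:",
          np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The step size \(1/L\) with \(L=\Vert A\Vert _2^2\) guarantees monotone descent of the objective; accelerated variants such as FISTA converge substantially faster on the same problem.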

Author information

Corresponding author

Correspondence to Hong Cheng.

Copyright information

© 2015 Springer-Verlag London

About this chapter

Cite this chapter

Cheng, H. (2015). The Fundamentals of Compressed Sensing. In: Sparse Representation, Modeling and Learning in Visual Recognition. Advances in Computer Vision and Pattern Recognition. Springer, London. https://doi.org/10.1007/978-1-4471-6714-3_2

  • DOI: https://doi.org/10.1007/978-1-4471-6714-3_2

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-6713-6

  • Online ISBN: 978-1-4471-6714-3

  • eBook Packages: Computer Science, Computer Science (R0)
