
An Introduction to Compressed Sensing


Part of the book series: Applied and Numerical Harmonic Analysis (ANHA)

Abstract

Compressed sensing and many research activities associated with it can be seen as a framework for signal processing of low-complexity structures. A cornerstone of the underlying theory is the study of inverse problems with linear or nonlinear measurements. Whether it is sparsity, low-rankness, or another familiar notion of low complexity, the theory addresses necessary and sufficient conditions on the measurement process that guarantee signal reconstruction with efficient algorithms. This includes robustness to measurement noise and stability with respect to inaccuracies in the signal model. This introduction aims to provide an overview of some of the most important results in this direction. After discussing various examples of low-complexity signal models, two approaches to linear inverse problems are introduced which focus, respectively, on the recovery of individual signals and on the recovery of all low-complexity signals simultaneously. We concentrate on the former setting, which gives rise to so-called nonuniform signal recovery problems, and discuss various necessary and sufficient conditions for stable and robust signal reconstruction via convex optimization. Appealing to concepts from non-asymptotic random matrix theory, we outline how certain classes of random sensing matrices, which fully govern the measurement process, satisfy such sufficient conditions for signal recovery. Finally, we review some of the most prominent recovery algorithms proposed in the literature.


Notes

1. Technically, a Hilbert space is an inner product space in which every Cauchy sequence converges to a point in the same space.

2. A Steinhaus random variable is a complex random variable distributed uniformly on the complex unit circle.
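For readers who wish to experiment numerically, such variables are easy to sample by drawing a uniform phase. A minimal sketch, assuming NumPy as tooling (not part of the chapter):

    import numpy as np

    rng = np.random.default_rng(0)

    def steinhaus(size):
        # Sample i.i.d. Steinhaus variables: exp(i*theta) with theta ~ Uniform[0, 2*pi).
        theta = rng.uniform(0.0, 2.0 * np.pi, size=size)
        return np.exp(1j * theta)

    A = steinhaus((5, 8))                 # a 5 x 8 Steinhaus random matrix
    assert np.allclose(np.abs(A), 1.0)    # every entry lies on the complex unit circle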

3. A convex body is a compact convex set with non-empty interior.

4. This idea also extends to signals living on low-dimensional manifolds.

5. This follows from the classical bound \(\left(\frac{d}{k}\right)^k \le \binom{d}{k} \le \left(\frac{ed}{k}\right)^k\).
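For completeness, both inequalities follow from elementary estimates: the lower bound uses \(\frac{d-j}{k-j} \ge \frac{d}{k}\) for \(0 \le j < k \le d\), and the upper bound uses \(k! \ge (k/e)^k\), which is immediate from \(e^k = \sum_{j \ge 0} k^j/j! \ge k^k/k!\):

\[
\binom{d}{k} = \prod_{j=0}^{k-1} \frac{d-j}{k-j} \ge \prod_{j=0}^{k-1} \frac{d}{k} = \left(\frac{d}{k}\right)^{k},
\qquad
\binom{d}{k} = \frac{d(d-1)\cdots(d-k+1)}{k!} \le \frac{d^k}{k!} \le \left(\frac{ed}{k}\right)^{k}.
\]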

6. To define the sparse vectors on \(\mathbb{C}^d\), simply replace \(\left\{ \pm \mathbf{e}_n \right\}\) by \(\left\{ \pm \mathbf{e}_n, \pm i\mathbf{e}_n \right\}\).

7. We assume that the fundamental frequencies of the complex exponentials are integer multiples of the frequency resolution \(f_\mathrm{s}/d\), where \(f_\mathrm{s}\) denotes the sampling rate.
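To illustrate why this on-grid assumption matters, a signal built from such frequencies is exactly sparse in the DFT basis. A minimal NumPy sketch with illustrative parameter values (not taken from the chapter):

    import numpy as np

    d, fs = 256, 1000.0                        # signal length and sampling rate (assumed values)
    t = np.arange(d) / fs                      # sampling instants
    freqs = np.array([3, 17, 42]) * fs / d     # on-grid frequencies: integer multiples of fs/d
    x = sum(np.exp(2j * np.pi * f * t) for f in freqs)

    x_hat = np.fft.fft(x) / d                  # normalized DFT coefficients
    support = np.nonzero(np.abs(x_hat) > 1e-9)[0]
    print(support)                             # [ 3 17 42]: the signal is exactly 3-sparse

An off-grid frequency would instead leak energy across all DFT bins, destroying exact sparsity.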

8. A set \(K\) is a closed star domain if \(K\) is closed and \(tK \subseteq K\) for all \(t \in [0, 1]\).

9. The Grassmann manifold or Grassmannian \(\mathcal{G}(d,s)\) is an abstract Riemannian manifold whose points are the s-dimensional subspaces of \(\mathbb{R}^d\).

10. A cone \(\mathcal{C} \subset \mathbb{R}^d\) is called polyhedral if it can be expressed as the intersection of finitely many half-spaces.

11. This follows from the fact that \(\ker(\mathbf{A})\) has only a single face onto which \(\varPi_{\ker(\mathbf{A})}\) projects every point \(\mathbf{x} \in \mathbb{R}^d\), namely, \(\ker(\mathbf{A})\) itself.
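Concretely, the orthogonal projection onto \(\ker(\mathbf{A})\) can be written as \(\varPi_{\ker(\mathbf{A})} = \mathbf{I} - \mathbf{A}^{+}\mathbf{A}\), where \(\mathbf{A}^{+}\) denotes the Moore–Penrose pseudoinverse. A quick numerical sanity check in NumPy (an illustrative sketch, not the chapter's code):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 12))            # underdetermined, so ker(A) is nontrivial

    P = np.eye(12) - np.linalg.pinv(A) @ A      # orthogonal projector onto ker(A)

    x = rng.standard_normal(12)
    assert np.allclose(A @ (P @ x), 0.0)        # projected points lie in the kernel
    assert np.allclose(P @ P, P)                # idempotence, as required of a projector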

12. In fact, as we will briefly discuss in Sect. 7, such random constructions are often characterized by better-behaved scaling constants.

13. The definition can easily be extended to the case where \(\mathcal{D} \subset \mathbb{R}^n\), but we restrict our discussion to the scalar case here.

14. This avoids another issue regarding the so-called instance optimality of pairs \((\mathbf{A}, \Delta)\), where \(\Delta: \mathbb{C}^m \rightarrow \mathbb{C}^d\) denotes an arbitrary reconstruction algorithm (see [54, Chap. 11] for details).

15. We are careful not to call this a class of algorithms, as optimization programs are technically just problem descriptions that still require specialized algorithms, such as interior-point methods, to actually solve them.

16. Hence the name shrinkage thresholding.
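The operator in question is the proximal mapping of the \(\ell_1\)-norm, often called soft thresholding: it shrinks every entry toward zero by a fixed amount and maps entries below the threshold to exactly zero. A minimal sketch in NumPy (illustrative, not the chapter's code):

    import numpy as np

    def soft_threshold(x, tau):
        # Proximal operator of tau*||.||_1: shrink magnitudes by tau, zero out the rest.
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    x = np.array([-2.0, -0.3, 0.0, 0.5, 1.7])
    print(soft_threshold(x, 0.5))               # [-1.5 -0.   0.   0.   1.2]

Iterating this operator after a gradient step on the data-fidelity term yields the classical iterative shrinkage-thresholding algorithm (ISTA) [9].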

17. Note that while we used a second-order approximation of h in Eq. (22), we did so by approximating the Hessian \(\nabla^2 h(\mathbf{x})\) as a scaled identity matrix, thereby ignoring the true second-order information of h.
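In symbols, and assuming Eq. (22) takes the usual proximal-gradient form (the chapter's exact notation may differ), replacing \(\nabla^2 h(\mathbf{x})\) by \(L\,\mathbf{I}\) yields the surrogate

\[
h(\mathbf{z}) \approx h(\mathbf{x}) + \nabla h(\mathbf{x})^{\top}(\mathbf{z} - \mathbf{x}) + \frac{L}{2} \left\| \mathbf{z} - \mathbf{x} \right\|_2^2,
\]

whose minimizer, once the \(\ell_1\)-penalty \(\lambda \|\mathbf{z}\|_1\) is added, is exactly the shrinkage step

\[
\mathbf{x}^{+} = \mathrm{prox}_{(\lambda/L)\|\cdot\|_1}\!\left( \mathbf{x} - \tfrac{1}{L} \nabla h(\mathbf{x}) \right).
\]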

18. Note that this choice amounts to \(\mathbf{A}\) satisfying the \(\ell_2\)-robust null space property (cf. Definition 10) of order \(k\) with constants \(\rho < 1/3\) and \(\tau > 0\) [54, Theorem 6.13].
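For reference, the standard form of this property (stated here as in [54]; the chapter's Definition 10 is assumed to be equivalent): \(\mathbf{A}\) satisfies the \(\ell_2\)-robust null space property of order \(k\) with constants \(0 < \rho < 1\) and \(\tau > 0\) if

\[
\left\| \mathbf{v}_S \right\|_2 \le \frac{\rho}{\sqrt{k}} \left\| \mathbf{v}_{\bar{S}} \right\|_1 + \tau \left\| \mathbf{A}\mathbf{v} \right\|_2
\quad \text{for all } \mathbf{v} \in \mathbb{C}^d \text{ and all } S \subseteq \{1, \dots, d\} \text{ with } |S| \le k.
\]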

References

1. S.I. Adalbjörnsson, A. Jakobsson, M.G. Christensen, Estimating multiple pitches using block sparsity, in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (May 2013), pp. 6220–6224
2. R. Adamczak, R. Latała, A.E. Litvak, A. Pajor, N. Tomczak-Jaegermann, Geometry of log-concave ensembles of random matrices and approximate reconstruction. C. R. Math. 349(13), 783–786 (2011)
3. R. Adamczak, A.E. Litvak, A. Pajor, N. Tomczak-Jaegermann, Restricted isometry property of matrices with independent columns and neighborly polytopes by random sampling. Constr. Approx. 34(1), 61–88 (2011)
4. D. Amelunxen, M. Lotz, M.B. McCoy, J.A. Tropp, Living on the edge: phase transitions in convex programs with random data. Inf. Inference 3(3), 224–294 (2014)
5. U. Ayaz, S. Dirksen, H. Rauhut, Uniform recovery of fusion frame structured sparse signals. Appl. Comput. Harmon. Anal. 41(2), 341–361 (2016)
6. W.U. Bajwa, J.D. Haupt, G.M. Raz, S.J. Wright, R.D. Nowak, Toeplitz-structured compressed sensing matrices, in 2007 IEEE/SP 14th Workshop on Statistical Signal Processing (Aug. 2007), pp. 294–298
7. A.S. Bandeira, M.E. Lewis, D.G. Mixon, Discrete uncertainty principles and sparse signal processing. J. Fourier Anal. Appl. 24(4), 935–956 (2018)
8. R. Baraniuk, M. Davenport, R. DeVore, M. Wakin, A simple proof of the restricted isometry property for random matrices. Constr. Approx. 28(3), 253–263 (2008)
9. A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)
10. S. Becker, J. Bobin, E.J. Candès, NESTA: a fast and accurate first-order method for sparse recovery. SIAM J. Imaging Sci. 4, 1–39 (2011)
11. J. Bennett, S. Lanning, The Netflix Prize (2007)
12. R. Berinde, A.C. Gilbert, P. Indyk, H. Karloff, M.J. Strauss, Combining geometry and combinatorics: a unified approach to sparse signal recovery, in 2008 46th Annual Allerton Conference on Communication, Control, and Computing (Sept. 2008), pp. 798–805
13. B.N. Bhaskar, G. Tang, B. Recht, Atomic norm denoising with applications to line spectral estimation. IEEE Trans. Signal Process. 61(23), 5987–5999 (2013)
14. H. Boche, Compressed Sensing and its Applications (Springer Science+Business Media, New York, 2015)
15. P. Boufounos, G. Kutyniok, H. Rauhut, Sparse recovery from combined fusion frame measurements. IEEE Trans. Inf. Theory 57(6), 3864–3876 (2011)
16. P.T. Boufounos, L. Jacques, F. Krahmer, R. Saab, Quantization and compressive sensing, in Compressed Sensing and its Applications: MATHEON Workshop 2013, Applied and Numerical Harmonic Analysis, ed. by H. Boche, R. Calderbank, G. Kutyniok, J. Vybíral (Springer International Publishing, Cham, 2015), pp. 193–237
17. J. Bourgain, An improved estimate in the restricted isometry problem, in Geometric Aspects of Functional Analysis, vol. 2116, ed. by B. Klartag, E. Milman (Springer International Publishing, Cham, 2014), pp. 65–70
18. S. Boyd, L. Vandenberghe, Convex Optimization (Cambridge University Press, 2004)
19. E.J. Candès, J.K. Romberg, l1-magic: recovery of sparse signals via convex programming, vol. 4 (2005), p. 14. www.acm.caltech.edu/l1magic/downloads/l1magic.pdf
20. E.J. Candès, T. Tao, The Dantzig selector: statistical estimation when p is much larger than n. Ann. Stat. 35(6), 2313–2351 (2007)
21. E.J. Candès, The restricted isometry property and its implications for compressed sensing. C. R. Math. 346(9), 589–592 (2008)
22. E.J. Candès, D.L. Donoho, Curvelets: a surprisingly effective nonadaptive representation for objects with edges, in Curves and Surfaces Fitting, ed. by L.L. Schumaker, A. Cohen, C. Rabut (Vanderbilt University Press, Nashville, TN, 1999), p. 16
23. E.J. Candès, D.L. Donoho, New tight frames of curvelets and optimal representations of objects with piecewise \(C^2\) singularities. Commun. Pure Appl. Math. 57(2), 219–266 (2004)
24. E.J. Candès, Y. Plan, Near-ideal model selection by \(\ell_1\) minimization. Ann. Stat. 37, 2145–2177 (2009)
25. E.J. Candès, J.K. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52, 489–509 (2006)
26. E.J. Candès, J.K. Romberg, T. Tao, Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006)
27. E.J. Candès, T. Tao, Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005)
28. E.J. Candès, T. Tao, Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theory 52(12), 5406–5425 (2006)
29. A.Y. Carmi, L. Mihaylova, S.J. Godsill, Compressed Sensing & Sparse Filtering (Springer, 2016)
30. P.G. Casazza, G. Kutyniok, F. Philipp, Introduction to finite frame theory, in Finite Frames (Springer, 2013), pp. 1–53
31. V. Chandrasekaran, B. Recht, P.A. Parrilo, A.S. Willsky, The convex geometry of linear inverse problems. Found. Comput. Math. 12(6), 805–849 (2012)
32. M. Cheraghchi, V. Guruswami, A. Velingker, Restricted isometry of Fourier matrices and list decodability of random linear codes. SIAM J. Comput. 42(5), 1888–1914 (2013)
33. A. Cohen, W. Dahmen, R. DeVore, Compressed sensing and best k-term approximation. J. Am. Math. Soc. 22(1), 211–231 (2009)
34. R. Coifman, F. Geshwind, Y. Meyer, Noiselets. Appl. Comput. Harmon. Anal. 10(1), 27–44 (2001)
35. W. Dai, O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 55, 2230–2249 (2009)
36. S. Dasgupta, A. Gupta, An elementary proof of a theorem of Johnson and Lindenstrauss. Random Struct. Algorithms 22(1), 60–65 (2003)
37. R.A. DeVore, Nonlinear approximation. Acta Numer. 7, 51–150 (1998)
38. S. Diamond, S. Boyd, CVXPY: a Python-embedded modeling language for convex optimization. J. Mach. Learn. Res. 17(1), 2909–2913 (2016)
39. S. Dirksen, G. Lecué, H. Rauhut, On the gap between restricted isometry properties and sparse recovery conditions. IEEE Trans. Inf. Theory 64(8), 5478–5487 (2018)
40. D.L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52, 1289–1306 (2006)
41. D.L. Donoho, M. Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via \(\ell_1\) minimization. Proc. Natl. Acad. Sci. 100(5), 2197–2202 (2003)
42. D.L. Donoho, M. Elad, V.N. Temlyakov, Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inf. Theory 52, 6–18 (2006)
43. D.L. Donoho, I. Johnstone, A. Montanari, Accurate prediction of phase transitions in compressed sensing via a connection to minimax denoising. IEEE Trans. Inf. Theory 59, 3396–3433 (2013)
44. D.L. Donoho, A. Maleki, A. Montanari, Message passing algorithms for compressed sensing. Proc. Natl. Acad. Sci. U.S.A. 106(45), 18914–18919 (2009)
45. D.L. Donoho, J. Tanner, Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing. Philos. Trans. R. Soc. A 367(1906), 4273–4293 (2009)
46. M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing (Springer, New York, 2010)
47. Y.C. Eldar, G. Kutyniok (eds.), Compressed Sensing: Theory and Applications (Cambridge University Press, Cambridge, 2012)
48. E. Elhamifar, R. Vidal, Sparse subspace clustering, in 2009 IEEE Conference on Computer Vision and Pattern Recognition (June 2009), pp. 2790–2797
49. H.G. Feichtinger, T. Strohmer, Gabor Analysis and Algorithms: Theory and Applications (Springer Science & Business Media, 2012)
50. M. Fornasier, S. Peter, An overview on algorithms for sparse recovery, in Sparse Reconstruction and Compressive Sensing in Remote Sensing, ed. by X. Zhu, R. Bamler (Springer, June 2015), p. 76
51. M. Fornasier, H. Rauhut, Compressive sensing, in Handbook of Mathematical Methods in Imaging, ed. by O. Scherzer (Springer, New York, 2011), pp. 187–228. https://doi.org/10.1007/978-0-387-92920-0_6
52. S. Foucart, Flavors of compressive sensing, in Approximation Theory XV: San Antonio 2016, ed. by G.E. Fasshauer, L.L. Schumaker (Springer International Publishing, Cham, 2017), pp. 61–104
53. S. Foucart, A. Pajor, H. Rauhut, T. Ullrich, The Gelfand widths of \(\ell_p\)-balls for \(0<p\le 1\). J. Complex. 26(6), 629–640 (2010)
54. S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing (Birkhäuser, Basel, 2013)
55. R. Foygel, L.W. Mackey, Corrupted sensing: novel guarantees for separating structured signals. IEEE Trans. Inf. Theory 60, 1223–1247 (2014)
56. D. Goldberg, D. Nichols, B.M. Oki, D. Terry, Using collaborative filtering to weave an information tapestry. Commun. ACM 35(12), 61–70 (1992)
57. Y. Gordon, On Milman's inequality and random subspaces which escape through a mesh in \(\mathbb{R}^n\), in Geometric Aspects of Functional Analysis, ed. by J. Lindenstrauss, V.D. Milman (Springer, Berlin, 1988), pp. 84–106
58. J. Gouveia, P.A. Parrilo, R.R. Thomas, Theta bodies for polynomial ideals. SIAM J. Optim. 20, 2097–2118 (2010)
59. M. Grant, S. Boyd, Y. Ye, CVX: Matlab software for disciplined convex programming (2008)
60. Z. Han, H. Li, W. Yin, Compressive Sensing for Wireless Networks (Cambridge University Press, 2013)
61. I. Haviv, O. Regev, The restricted isometry property of subsampled Fourier matrices, in Geometric Aspects of Functional Analysis, Lecture Notes in Mathematics (Springer, Cham, 2017), pp. 163–179
62. W.B. Johnson, J. Lindenstrauss, Extensions of Lipschitz mappings into a Hilbert space. Contemp. Math. 26, 189–206 (1984)
63. V. Koltchinskii, Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems: École d'Été de Probabilités de Saint-Flour XXXVIII-2008. Lecture Notes in Mathematics, vol. 2033 (Springer, Berlin, 2011)
64. F. Krahmer, S. Mendelson, H. Rauhut, Suprema of chaos processes and the restricted isometry property. Commun. Pure Appl. Math. 67(11), 1877–1904 (2014)
65. G. Kutyniok, D. Labate (eds.), Shearlets: Multiscale Analysis for Multivariate Data. Applied and Numerical Harmonic Analysis (Birkhäuser, New York, 2012)
66. C. Liaw, A. Mehrabian, Y. Plan, R. Vershynin, A simple tool for bounding the deviation of random matrices on geometric sets (2016). arXiv:1603.00897
67. G.G. Lorentz, M.V. Golitschek, Y. Makovoz, Constructive Approximation: Advanced Problems (Springer, Berlin, 2005)
68. S.G. Mallat, A Wavelet Tour of Signal Processing: The Sparse Way, 3rd edn. (Elsevier/Academic Press, Amsterdam, 2009)
69. C.A. Metzler, A. Maleki, R.G. Baraniuk, From denoising to compressed sensing. IEEE Trans. Inf. Theory 62, 5117–5144 (2016)
70. M. Mishali, Y.C. Eldar, Blind multiband signal reconstruction: compressed sensing for analog signals. IEEE Trans. Signal Process. 57(3), 993–1009 (2009)
71. Q. Mo, A sharp restricted isometry constant bound of orthogonal matching pursuit (2015). arXiv:1501.01708
72. B.K. Natarajan, Sparse approximate solutions to linear systems. SIAM J. Comput. 24(2), 227–234 (1995)
73. N. Srebro, A. Shraibman, Rank, trace-norm and max-norm, in COLT (2005)
74. J. Nelson, E. Price, M. Wootters, New constructions of RIP matrices with fast multiplication and fewer rows, in Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms (Society for Industrial and Applied Mathematics, 2014), pp. 1515–1528
75. Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, 1st edn. (Springer, 2014)
76. S. Oymak, B. Hassibi, New null space results and recovery thresholds for matrix rank minimization (Nov. 2010). arXiv:1011.6326
77. N. Parikh, S.P. Boyd, Proximal algorithms. Found. Trends Optim. 1, 127–239 (2014)
78. F. Parvaresh, H. Vikalo, S. Misra, B. Hassibi, Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays. IEEE J. Sel. Top. Signal Process. 2(3), 275–285 (2008)
79. Y. Plan, R. Vershynin, Robust 1-bit compressed sensing and sparse logistic regression: a convex programming approach. IEEE Trans. Inf. Theory 59(1), 482–494 (2013)
80. Y. Plan, R. Vershynin, The generalized Lasso with non-linear observations. IEEE Trans. Inf. Theory 62(3), 1528–1537 (2016)
81. Y.L. Polo, Y. Wang, A. Pandharipande, G. Leus, Compressive wide-band spectrum sensing, in 2009 IEEE International Conference on Acoustics, Speech and Signal Processing (Apr. 2009), pp. 2337–2340
82. S. Rangan, Generalized approximate message passing for estimation with random linear mixing, in 2011 IEEE International Symposium on Information Theory Proceedings (2011), pp. 2168–2172
83. S. Rangan, P. Schniter, A.K. Fletcher, Vector approximate message passing, in 2017 IEEE International Symposium on Information Theory (ISIT) (2017), pp. 1588–1592
84. N.S. Rao, B. Recht, R.D. Nowak, Universal measurement bounds for structured sparse signal recovery, in AISTATS (2012)
85. H. Rauhut, Circulant and Toeplitz matrices in compressed sensing, in SPARS'09, Signal Processing with Adaptive Sparse Structured Representations (Saint-Malo, France, Apr. 2009), p. 7
86. H. Rauhut, K. Schnass, P. Vandergheynst, Compressed sensing and redundant dictionaries. IEEE Trans. Inf. Theory 54(5), 2210–2219 (2008)
87. H. Rauhut, R. Ward, Sparse recovery for spherical harmonic expansions, in Proceedings of SampTA 2011 (2011)
88. R.T. Rockafellar, Convex Analysis (Princeton University Press, 2015)
89. M. Rudelson, R. Vershynin, On sparse reconstruction from Fourier and Gaussian measurements. Commun. Pure Appl. Math. 61(8), 1025–1045 (2008)
90. S. Sarvotham, D. Baron, R.G. Baraniuk, Measurements vs. bits: compressed sensing meets information theory, in Allerton Conference on Communication, Control and Computing (2006)
91. M. Stojnic, \(\ell_1\) optimization and its various thresholds in compressed sensing, in 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (2010), pp. 3910–3913
92. G. Tang, B.N. Bhaskar, P. Shah, B. Recht, Compressed sensing off the grid. IEEE Trans. Inf. Theory 59(11), 7465–7490 (2013)
93. R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, K. Knight, Sparsity and smoothness via the fused lasso. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 67(1), 91–108 (2005)
94. R.J. Tibshirani, The lasso problem and uniqueness (2012)
95. A.M. Tillmann, M.E. Pfetsch, The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inf. Theory 60, 1248–1259 (2014)
96. J.A. Tropp, Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 50(10), 2231–2242 (2004)
97. E. van den Berg, M.P. Friedlander, SPGL1: a solver for large-scale sparse reconstruction (2007)
98. E. van den Berg, M.P. Friedlander, Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31(2), 890–912 (2008)
99. R. Vershynin, Introduction to the non-asymptotic analysis of random matrices, in Compressed Sensing, Theory and Applications (Cambridge University Press, Cambridge, 2012), pp. 210–268
100. R. Vershynin, Estimation in High Dimensions: A Geometric Perspective (Springer International Publishing, Cham, 2015), pp. 3–66
101. L. Welch, Lower bounds on the maximum cross correlation of signals (Corresp.). IEEE Trans. Inf. Theory 20(3), 397–399 (1974)
102. J. Wright, A.Y. Yang, A. Ganesh, S.S. Sastry, Y. Ma, Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 31(2), 210–227 (2009)
103. S.J. Wright, R.D. Nowak, M.A.T. Figueiredo, Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 57, 2479–2493 (2008)
104. H. Zhang, W. Yin, L. Cheng, Necessary and sufficient conditions of solution uniqueness in 1-norm minimization. J. Optim. Theory Appl. 164, 109–122 (2015)
105. Y. Zhang, J. Yang, W. Yin, YALL1: your algorithms for L1 (2011). http://yall1.blogs.rice.edu


Acknowledgements

We would like to thank the anonymous reviewers and contributors to this book for their invaluable comments regarding this introduction.

Author information

Corresponding author

Correspondence to Arash Behboodi.



Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Koep, N., Behboodi, A., Mathar, R. (2019). An Introduction to Compressed Sensing. In: Boche, H., Caire, G., Calderbank, R., Kutyniok, G., Mathar, R., Petersen, P. (eds) Compressed Sensing and Its Applications. Applied and Numerical Harmonic Analysis. Birkhäuser, Cham. https://doi.org/10.1007/978-3-319-73074-5_1
