Compressed Sensing and Sparse Recovery

  • Robert Qiu
  • Michael Wicks
Chapter

Abstract

The central mathematical tool for algorithm analysis and development is the concentration of measure for random matrices. This chapter provides application examples for the theory developed in Part I and emphasizes the central role of random matrices.

Compressed sensing is a recent revolution built on the observation that sparsity plays a central role in the structure of a vector. The unexpected message is that a sparse signal carries far less relevant “information” than was previously thought. As a result, the number of samples needed to recover a sparse signal is far smaller than the number required by the classical Shannon sampling theorem.
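To make this claim concrete, here is a minimal, hypothetical sketch (not taken from the chapter): it takes m ≪ n random Gaussian measurements of a k-sparse signal and recovers it with orthogonal matching pursuit, one standard sparse-recovery algorithm chosen purely for illustration; NumPy is the only assumed dependency.

    # Illustrative sketch (assumptions: NumPy available; OMP chosen as one
    # representative recovery algorithm, not the chapter's specific method).
    import numpy as np

    rng = np.random.default_rng(0)

    n, m, k = 256, 64, 8          # ambient dimension, measurements (m << n), sparsity

    # k-sparse ground-truth signal: k random positions with Gaussian amplitudes
    x = np.zeros(n)
    true_support = rng.choice(n, size=k, replace=False)
    x[true_support] = rng.standard_normal(k)

    # Random Gaussian sensing matrix with normalized columns
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x                     # m compressed measurements

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily pick the column most
        correlated with the residual, then least-squares re-fit on the
        selected support."""
        residual = y.copy()
        support = []
        for _ in range(k):
            idx = int(np.argmax(np.abs(A.T @ residual)))
            support.append(idx)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x_hat = np.zeros(A.shape[1])
        x_hat[support] = coef
        return x_hat

    x_hat = omp(A, y, k)
    print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))

With m = 64 Gaussian measurements of a length-256, 8-sparse signal, recovery is typically exact up to numerical precision, illustrating how far below the classical sampling requirement one can go when sparsity is exploited.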

Keywords

Entropy, Covariance, Radar, Hull, Autocorrelation


Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Robert Qiu (1)
  • Michael Wicks (2)
  1. Tennessee Technological University, Cookeville, USA
  2. Utica, USA
