
Foundations of Computational Mathematics

Volume 16, Issue 5, pp 1263–1323

Generalized Sampling and Infinite-Dimensional Compressed Sensing

  • Ben Adcock
  • Anders C. Hansen

Abstract

We introduce and analyze a framework and corresponding method for compressed sensing in infinite dimensions. This extends the existing theory from finite-dimensional vector spaces to the case of separable Hilbert spaces. We explain why such a new theory is necessary by demonstrating that existing finite-dimensional techniques are ill suited for solving a number of key problems. This work stems from recent developments in generalized sampling theorems for classical (Nyquist rate) sampling that allow for reconstructions in arbitrary bases. A conclusion of this paper is that one can extend these ideas to allow for significant subsampling of sparse or compressible signals. Central to this work is the introduction of two novel concepts in sampling theory, the stable sampling rate and the balancing property, which specify how to appropriately discretize an infinite-dimensional problem.
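
To make the discretization idea concrete, here is a minimal, self-contained sketch, not the method developed in the paper: the infinite-dimensional problem is truncated to its first N coefficients, M ≪ N rows of a unitary DFT matrix serve as subsampled measurements, and a sparse vector is recovered by ℓ1-regularized least squares solved with a basic iterative soft-thresholding (ISTA) loop. The sizes N, M, the sparsity s, the regularization weight and the solver are all illustrative assumptions.

```python
# Illustrative sketch only: a finite, discretized stand-in for the
# infinite-dimensional problem. We keep the first N coefficients, sample
# M << N rows of the unitary N-point DFT matrix, and recover an s-sparse
# vector by l1-regularized least squares (LASSO), solved with plain ISTA.
# N, M, s, lam and the solver are assumptions made for this demo.
import numpy as np

rng = np.random.default_rng(0)
N, M, s = 256, 64, 8                       # truncation size, samples, sparsity

# Ground truth: s-sparse complex vector supported on random indices.
support = rng.choice(N, size=s, replace=False)
x_true = np.zeros(N, dtype=complex)
x_true[support] = rng.standard_normal(s) + 1j * rng.standard_normal(s)

# Subsampled measurement operator: M rows of the unitary DFT matrix.
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
rows = rng.choice(N, size=M, replace=False)
A = F[rows, :]
y = A @ x_true                             # the M measurements

def soft(z, t):
    """Complex soft-thresholding, the proximal map of t*||.||_1."""
    return np.exp(1j * np.angle(z)) * np.maximum(np.abs(z) - t, 0.0)

lam = 1e-3                                 # l1 regularization weight
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L with L = ||A||_2^2
x = np.zeros(N, dtype=complex)
for _ in range(500):                       # ISTA iterations
    x = soft(x - step * (A.conj().T @ (A @ x - y)), step * lam)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```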

Keywords

Compressed sensing · Hilbert spaces · Generalized sampling · Uneven sections

Mathematics Subject Classification

94A20 · 47A99 · 42C40 · 15B52

Acknowledgments

The authors would like to thank Akram Aldroubi, Emmanuel Candès, Ron DeVore, David Donoho, Karlheinz Gröchenig, Gerd Teschke, Joel Tropp, Martin Vetterli, Christopher White, Pengchong Yan and Özgür Yilmaz for useful discussions and comments. They would also like to thank Clarice Poon for helping to improve several of the arguments in the proofs, Bogdan Roman for producing the example in Sect. 7.2 and the anonymous referees for their many useful comments and suggestions.

Copyright information

© SFoCM 2015

Authors and Affiliations

  1. Department of Mathematics, Simon Fraser University, Burnaby, Canada
  2. DAMTP, Centre for Mathematical Sciences, University of Cambridge, Cambridge, UK