
Foundations of Computational Mathematics, Volume 18, Issue 3, pp 661–701

Infinite-Dimensional Compressed Sensing and Function Interpolation

Ben Adcock

Abstract

We introduce and analyse a framework for function interpolation using compressed sensing. This framework, which is based on weighted \(\ell ^1\) minimization, does not require a priori bounds on the expansion tail in either its implementation or its theoretical guarantees, and in the absence of noise it leads to genuinely interpolatory approximations. We also establish a new recovery guarantee for compressed sensing with weighted \(\ell ^1\) minimization based on this framework. This guarantee has several benefits. First, unlike existing results, it is sharp (up to constants and log factors) for large classes of functions regardless of the choice of weights. Second, by examining the measurement condition in the recovery guarantee, we are able to suggest a good overall strategy for selecting the weights. In particular, when applied to the important case of multivariate approximation with orthogonal polynomials, this weighting strategy leads to provably optimal estimates on the number of measurements required, whenever the support set of the significant coefficients is a so-called lower set. Finally, this guarantee can also be used to theoretically confirm the benefits of alternative weighting strategies where the weights are chosen based on prior support information. This provides a theoretical basis for a number of recent numerical studies showing the effectiveness of such approaches.
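To make the setup concrete, the following is a minimal sketch, not the paper's implementation, of noiseless weighted \(\ell ^1\) minimization for a one-dimensional Legendre expansion. The solver choice (cvxpy), the test function and the parameters N and m are illustrative assumptions; the weights \(w_j = \sqrt{2j+1}\) are the uniform norms of the orthonormal Legendre polynomials, one natural choice of intrinsic weights for this basis.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

N, m = 64, 20                            # dictionary size and number of samples, m << N
f = lambda x: 1.0 / (1.0 + 8.0 * x**2)   # illustrative test function (an assumption)

# Sample points drawn i.i.d. from the uniform measure on [-1, 1]
x = rng.uniform(-1.0, 1.0, m)
y = f(x)

# Measurement matrix: orthonormal Legendre polynomials at the sample points.
# legvander returns the classical P_j; multiplying column j by sqrt(2j+1)
# orthonormalizes them with respect to the uniform probability measure.
j = np.arange(N)
A = np.polynomial.legendre.legvander(x, N - 1) * np.sqrt(2 * j + 1)

# Intrinsic weights w_j = ||phi_j||_inf = sqrt(2j+1) for orthonormal Legendre
w = np.sqrt(2 * j + 1)

# Noiseless weighted l1 minimization:  min ||diag(w) c||_1  s.t.  A c = y.
# The equality constraints force the computed expansion to interpolate f at
# the sample points, consistent with the "genuinely interpolatory" property.
c = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, c))), [A @ c == y]).solve()

# Evaluate the recovered expansion on a fine grid to inspect the error
t = np.linspace(-1.0, 1.0, 1000)
At = np.polynomial.legendre.legvander(t, N - 1) * np.sqrt(2 * j + 1)
print("max error on [-1,1]:", np.abs(At @ c.value - f(t)).max())
```

In the noisy case one would relax the equality constraint to a tolerance, e.g. \(\Vert A c - y\Vert _2 \le \eta \), at the cost of exact interpolation.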

Keywords

High-dimensional approximation · Interpolation · Compressed sensing · Structured sparsity · Orthogonal polynomials

Mathematics Subject Classification

41A05 · 41A10 · 41A63 · 65N12 · 65N15


Acknowledgements

This work was supported in part by the Natural Sciences and Engineering Research Council of Canada through Grant 611675 and an Alfred P. Sloan Research Fellowship. The author would particularly like to thank Abdellah Chkifa, Clayton Webster, Hoang Tran and Guannan Zhang for introducing him to the concept of lower sets; the results of Sect. 7.3 are due to their insight. He would also like to thank Rick Archibald, Nilima Nigam, Clarice Poon and Tao Zhou for useful discussions and comments.


Copyright information

© SFoCM 2017

Authors and Affiliations

Department of Mathematics, Simon Fraser University, Burnaby, Canada
