
Compressed Sensing Approaches for Polynomial Approximation of High-Dimensional Functions

Chapter in: Compressed Sensing and its Applications

Abstract

In recent years, the use of sparse recovery techniques in the approximation of high-dimensional functions has garnered increasing interest. In this work we present a survey of recent progress in this emerging topic. Our main focus is on the computation of polynomial approximations of high-dimensional functions on d-dimensional hypercubes. We show that smooth, multivariate functions possess expansions in orthogonal polynomial bases that are not only approximately sparse but possess a particular type of structured sparsity defined by so-called lower sets. This structure can be exploited via the use of weighted ℓ1 minimization techniques, and, as we demonstrate, doing so leads to sample complexity estimates that are at most logarithmically dependent on the dimension d. Hence the curse of dimensionality – the bane of high-dimensional approximation – is mitigated to a significant extent. We also discuss several practical issues, including unknown noise (due to truncation or numerical error), and highlight a number of open problems and challenges.
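
To make the above concrete, the following is a minimal sketch (in Python, assuming NumPy and CVXPY are available) of the kind of procedure surveyed in the chapter: a smooth function on [-1,1]^d is sampled at random points, and its coefficients in a tensor Legendre basis indexed by a lower (downward closed) set are estimated by weighted ℓ1 minimization, with weights equal to the uniform norms of the basis functions. The target function f, the total-degree index set, the sample size m, and the regularization parameter lam are illustrative assumptions, not taken from the chapter.

```python
# Illustrative sketch: weighted l1 recovery of a sparse Legendre expansion
# from random pointwise samples (all parameter choices below are assumptions).
import itertools
import numpy as np
from numpy.polynomial import legendre
import cvxpy as cp

d, n_max, m = 4, 4, 40                     # dimension, max total degree, number of samples
rng = np.random.default_rng(0)

def f(x):                                  # toy smooth target on [-1, 1]^d
    return np.exp(-np.sum(x**2, axis=-1) / d)

# Total-degree multi-index set (a lower / downward closed set).
Lambda = [nu for nu in itertools.product(range(n_max + 1), repeat=d) if sum(nu) <= n_max]

def leg_norm(k, t):                        # Legendre polynomial of degree k, normalized in L2
    c = np.zeros(k + 1); c[k] = 1.0        # w.r.t. the uniform probability measure on [-1, 1]
    return np.sqrt(2 * k + 1) * legendre.legval(t, c)

# Measurement matrix: rows = random sample points, columns = tensor Legendre basis on Lambda.
X = rng.uniform(-1.0, 1.0, size=(m, d))
A = np.array([[np.prod([leg_norm(nu[j], X[i, j]) for j in range(d)]) for nu in Lambda]
              for i in range(m)]) / np.sqrt(m)
y = f(X) / np.sqrt(m)

# Weights u_nu = prod_j sqrt(2 nu_j + 1), the sup norms of the basis functions.
u = np.array([np.prod([np.sqrt(2 * k + 1) for k in nu]) for nu in Lambda])

# Weighted square-root-LASSO-type objective, so that the data-fit term does not
# require knowing the truncation error level in advance.
lam = 10.0                                 # illustrative regularization parameter
c_var = cp.Variable(len(Lambda))
objective = cp.Minimize(cp.norm1(cp.multiply(u, c_var)) + lam * cp.norm(A @ c_var - y, 2))
cp.Problem(objective).solve()
coeffs = c_var.value                       # estimated Legendre coefficients indexed by Lambda
```

The unconstrained square-root-LASSO-type formulation is used here only so that the data-fit term does not require knowledge of the (unknown) truncation error; the chapter discusses this issue of unknown noise in more detail.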


Notes

  1. By nonuniform recovery, we mean results that guarantee recovery of a fixed vector c_Λ from a single realization of the random matrix A. Conversely, uniform recovery results consider recovery of all sparse (or structured sparse) vectors from a single realization of A. See, for example, [35] for further discussion.

References

  1. B. Adcock, Infinite-dimensional compressed sensing and function interpolation. Found. Comput. Math., 1–41 (2017). https://doi.org/10.1007/s10208-017-9350-3
  2. B. Adcock, Infinite-dimensional ℓ1 minimization and function approximation from pointwise data. Constr. Approx. 45(3), 345–390 (2017)
  3. B. Adcock, A. Bao, S. Brugiapaglia, Correcting for unknown errors in sparse high-dimensional function approximation (2017). arXiv:1711.07622
  4. B. Adcock, A.C. Hansen, Generalized sampling and infinite-dimensional compressed sensing. Found. Comput. Math. 16(5), 1263–1323 (2016)
  5. R.G. Baraniuk, V. Cevher, M.F. Duarte, C. Hegde, Model-based compressive sensing. IEEE Trans. Inform. Theory 56(4), 1982–2001 (2010)
  6. J. Beck, F. Nobile, L. Tamellini, R. Tempone, Convergence of quasi-optimal Stochastic Galerkin methods for a class of PDEs with random coefficients. Comput. Math. Appl. 67(4), 732–751 (2014)
  7. R.E. Bellman, Adaptive Control Processes: A Guided Tour (Princeton University Press, Princeton, 1961)
  8. J. Bigot, C. Boyer, P. Weiss, An analysis of block sampling strategies in compressed sensing. IEEE Trans. Inform. Theory 64(4), 2125–2139 (2016)
  9. T. Blumensath, Sampling theorems for signals from the union of finite-dimensional linear subspaces. IEEE Trans. Inform. Theory 55(4), 1872–1882 (2009)
  10. J.-L. Bouchot, H. Rauhut, C. Schwab, Multi-level Compressed Sensing Petrov-Galerkin discretization of high-dimensional parametric PDEs (2017). arXiv:1701.01671
  11. S. Brugiapaglia, COmpRessed SolvING: sparse approximation of PDEs based on compressed sensing, Ph.D. thesis, Politecnico di Milano, Milano, 2016
  12. S. Brugiapaglia, B. Adcock, Robustness to unknown error in sparse regularization (2017). arXiv:1705.10299
  13. S. Brugiapaglia, F. Nobile, S. Micheletti, S. Perotto, A theoretical study of compressed solving for advection-diffusion-reaction problems. Math. Comput. 87(309), 1–38 (2018)
  14. H.-J. Bungartz, M. Griebel, Sparse grids. Acta Numer. 13, 147–269 (2004)
  15. E.J. Candès, Y. Plan, A probabilistic and RIPless theory of compressed sensing. IEEE Trans. Inform. Theory 57(11), 7235–7254 (2011)
  16. E.J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 52(2), 489–509 (2006)
  17. A. Chernov, D. Dũng, New explicit-in-dimension estimates for the cardinality of high-dimensional hyperbolic crosses and approximation of functions having mixed smoothness. J. Complexity 32, 92–121 (2016)
  18. A. Chkifa, A. Cohen, R. DeVore, C. Schwab, Sparse adaptive Taylor approximation algorithms for parametric and stochastic elliptic PDEs. ESAIM Math. Model. Numer. Anal. 47(1), 253–280 (2013)
  19. A. Chkifa, A. Cohen, G. Migliorati, F. Nobile, R. Tempone, Discrete least squares polynomial approximation with random evaluations – application to parametric and stochastic elliptic PDEs. ESAIM Math. Model. Numer. Anal. 49(3), 815–837 (2015)
  20. A. Chkifa, A. Cohen, C. Schwab, High-dimensional adaptive sparse polynomial interpolation and applications to parametric PDEs. Found. Comput. Math. 14(4), 601–633 (2014)
  21. A. Chkifa, A. Cohen, C. Schwab, Breaking the curse of dimensionality in sparse polynomial approximation of parametric PDEs. J. Math. Pures Appl. 103, 400–428 (2015)
  22. A. Chkifa, N. Dexter, H. Tran, C.G. Webster, Polynomial approximation via compressed sensing of high-dimensional functions on lower sets. Math. Comput. (2016, to appear). arXiv:1602.05823
  23. I.-Y. Chun, B. Adcock, Compressed sensing and parallel acquisition. IEEE Trans. Inform. Theory 63(8), 4760–4882 (2017). arXiv:1601.06214
  24. A. Cohen, M.A. Davenport, D. Leviatan, On the stability and accuracy of least squares approximations. Found. Comput. Math. 13, 819–834 (2013)
  25. A. Cohen, R. DeVore, Approximation of high-dimensional parametric PDEs. Acta Numer. 24, 1–159 (2015)
  26. A. Cohen, R.A. DeVore, C. Schwab, Convergence rates of best N-term Galerkin approximations for a class of elliptic sPDEs. Found. Comput. Math. 10, 615–646 (2010)
  27. A. Cohen, R.A. DeVore, C. Schwab, Analytic regularity and polynomial approximation of parametric and stochastic elliptic PDEs. Anal. Appl. 9(1), 11–47 (2011)
  28. A. Cohen, G. Migliorati, Optimal weighted least-squares methods (2016). arXiv:1608.00512
  29. A. Cohen, G. Migliorati, F. Nobile, Discrete least-squares approximations over optimized downward closed polynomial spaces in arbitrary dimension. Constr. Approx. 45(3), 497–519 (2017)
  30. M.A. Davenport, M.F. Duarte, Y.C. Eldar, G. Kutyniok, Introduction to compressed sensing, in Compressed Sensing: Theory and Applications (Cambridge University Press, Cambridge, 2011)
  31. D.L. Donoho, Compressed sensing. IEEE Trans. Inform. Theory 52(4), 1289–1306 (2006)
  32. A. Doostan, H. Owhadi, A non-adapted sparse approximation of PDEs with stochastic inputs. J. Comput. Phys. 230(8), 3015–3034 (2011)
  33. M.F. Duarte, Y.C. Eldar, Structured compressed sensing: from theory to applications. IEEE Trans. Signal Process. 59(9), 4053–4085 (2011)
  34. S. Foucart, Stability and robustness of ℓ1-minimizations with Weibull matrices and redundant dictionaries. Linear Algebra Appl. 441, 4–21 (2014)
  35. S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing (Birkhäuser, Basel, 2013)
  36. D. Gross, Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inform. Theory 57(3), 1548–1566 (2011)
  37. M. Gunzburger, C.G. Webster, G. Zhang, Stochastic finite element methods for partial differential equations with random input data. Acta Numer. 23, 521–650 (2014)
  38. M. Gunzburger, C.G. Webster, G. Zhang, Sparse collocation methods for stochastic interpolation and quadrature, in Handbook of Uncertainty Quantification (Springer, New York, 2016), pp. 1–46
  39. L. Guo, A. Narayan, T. Zhou, Y. Chen, Stochastic collocation methods via L1 minimization using randomized quadratures. SIAM J. Sci. Comput. 39(1), A333–A359 (2017). arXiv:1602.00995
  40. J. Hampton, A. Doostan, Coherence motivated sampling and convergence analysis of least squares polynomial Chaos regression. Comput. Methods Appl. Mech. Eng. 290, 73–97 (2015)
  41. J. Hampton, A. Doostan, Compressive sampling of polynomial chaos expansions: convergence analysis and sampling strategies. J. Comput. Phys. 280, 363–386 (2015)
  42. V.H. Hoang, C. Schwab, Regularity and generalized polynomial chaos approximation of parametric and random 2nd order hyperbolic partial differential equations. Anal. Appl. 10(3), 295–326 (2012)
  43. J.D. Jakeman, M.S. Eldred, K. Sargsyan, Enhancing ℓ1-minimization estimates of polynomial chaos expansions using basis selection. J. Comput. Phys. 289, 18–34 (2015). arXiv:1407.8093
  44. J.D. Jakeman, A. Narayan, T. Zhou, A generalized sampling and preconditioning scheme for sparse approximation of polynomial chaos expansions. SIAM J. Sci. Comput. 39(3), A1114–A1144 (2017). arXiv:1602.06879
  45. T. Kühn, W. Sickel, T. Ullrich, Approximation of mixed order Sobolev functions on the d-torus: asymptotics, preasymptotics, and d-dependence. Constr. Approx. 42(3), 353–398 (2015)
  46. O.P. Le Maître, O.M. Knio, Spectral Methods for Uncertainty Quantification (Springer, New York, 2010)
  47. L. Mathelin, K.A. Gallivan, A compressed sensing approach for partial differential equations with random input data. Commun. Comput. Phys. 12(4), 919–954 (2012)
  48. G. Migliorati, Polynomial approximation by means of the random discrete L2 projection and application to inverse problems for PDEs with stochastic data, Ph.D. thesis, Politecnico di Milano, Milano, 2013
  49. G. Migliorati, Multivariate Markov-type and Nikolskii-type inequalities for polynomials associated with downward closed multi-index sets. J. Approx. Theory 189, 137–159 (2015)
  50. G. Migliorati, F. Nobile, Analysis of discrete least squares on multivariate polynomial spaces with evaluations at low-discrepancy point sets. J. Complexity 31(4), 517–542 (2015)
  51. G. Migliorati, F. Nobile, E. von Schwerin, R. Tempone, Analysis of the discrete L2 projection on polynomial spaces with random evaluations. Found. Comput. Math. 14, 419–456 (2014)
  52. A. Narayan, T. Zhou, Stochastic collocation on unstructured multivariate meshes. Commun. Comput. Phys. 18(1), 1–36 (2015)
  53. A. Narayan, J.D. Jakeman, T. Zhou, A Christoffel function weighted least squares algorithm for collocation approximations. Math. Comput. 86(306), 1913–1947 (2017). arXiv:1412.4305
  54. F. Nobile, R. Tempone, C.G. Webster, An anisotropic sparse grid stochastic collocation method for partial differential equations with random input data. SIAM J. Numer. Anal. 46(5), 2411–2442 (2008)
  55. F. Nobile, R. Tempone, C.G. Webster, A sparse grid stochastic collocation method for partial differential equations with random input data. SIAM J. Numer. Anal. 46(5), 2309–2345 (2008)
  56. J. Peng, J. Hampton, A. Doostan, A weighted ℓ1-minimization approach for sparse polynomial chaos expansions. J. Comput. Phys. 267, 92–111 (2014)
  57. J. Peng, J. Hampton, A. Doostan, On polynomial chaos expansion via gradient-enhanced ℓ1-minimization. J. Comput. Phys. 310, 440–458 (2016)
  58. H. Rauhut, Random sampling of sparse trigonometric polynomials. Appl. Comput. Harmon. Anal. 22(1), 16–42 (2007)
  59. H. Rauhut, C. Schwab, Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations. Math. Comput. 86, 661–700 (2017)
  60. H. Rauhut, R. Ward, Sparse Legendre expansions via ℓ1-minimization. J. Approx. Theory 164(5), 517–533 (2012)
  61. H. Rauhut, R. Ward, Interpolation via weighted ℓ1 minimization. Appl. Comput. Harmon. Anal. 40(2), 321–351 (2016)
  62. M.K. Stoyanov, C.G. Webster, A dynamically adaptive sparse grid method for quasi-optimal interpolation of multidimensional functions. Comput. Math. Appl. 71(11), 2449–2465 (2016)
  63. G. Szegö, Orthogonal Polynomials (American Mathematical Society, Providence, RI, 1975)
  64. G. Tang, G. Iaccarino, Subsampled Gauss quadrature nodes for estimating polynomial chaos expansions. SIAM/ASA J. Uncertain. Quantif. 2(1), 423–443 (2014)
  65. H. Tran, C.G. Webster, G. Zhang, Analysis of quasi-optimal polynomial approximations for parameterized PDEs with deterministic and stochastic coefficients. Numer. Math. 137(2), 451–493 (2017). arXiv:1508.01821
  66. Y. Traonmilin, R. Gribonval, Stable recovery of low-dimensional cones in Hilbert spaces: one RIP to rule them all. Appl. Comput. Harmon. Anal. (2017). https://doi.org/10.1016/j.acha.2016.08.004
  67. E. van den Berg, M.P. Friedlander, SPGL1: a solver for large-scale sparse reconstruction (June 2007). http://www.cs.ubc.ca/labs/scl/spgl1
  68. E. van den Berg, M.P. Friedlander, Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31(2), 890–912 (2008)
  69. C.G. Webster, Sparse grid stochastic collocation techniques for the numerical solution of partial differential equations with random input data, Ph.D. thesis, Florida State University, Tallahassee, 2007
  70. P. Wojtaszczyk, Stability and instance optimality for Gaussian measurements in compressed sensing. Found. Comput. Math. 10(1), 1–13 (2010)
  71. Z. Xu, T. Zhou, On sparse interpolation and the design of deterministic interpolation points. SIAM J. Sci. Comput. 36(4), 1752–1769 (2014)
  72. L. Yan, L. Guo, D. Xiu, Stochastic collocation algorithms using ℓ1-minimization. Int. J. Uncertain. Quantif. 2(3), 279–293 (2012)
  73. X. Yang, G.E. Karniadakis, Reweighted ℓ1 minimization method for stochastic elliptic differential equations. J. Comput. Phys. 248, 87–108 (2013)
  74. X. Yang, H. Lei, N.A. Baker, G. Lin, Enhancing sparsity of Hermite polynomial expansions by iterative rotations. J. Comput. Phys. 307, 94–109 (2016). arXiv:1506.04344


Acknowledgements

The first and second authors acknowledge the support of the Alfred P. Sloan Foundation and the Natural Sciences and Engineering Research Council of Canada through grant 611675. The second author acknowledges the support of the Postdoctoral Training Center in Stochastics of the Pacific Institute for the Mathematical Sciences. The third author acknowledges support by the US Defense Advanced Research Projects Agency, Defense Sciences Office, under contract and award numbers HR0011619523 and 1868-A017-15; the US Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics program, under contract numbers ERKJ259 and ERKJ314; and the Laboratory Directed Research and Development program at the Oak Ridge National Laboratory, which is operated by UT-Battelle, LLC, for the US Department of Energy under Contract DE-AC05-00OR22725.

Author information


Correspondence to Ben Adcock.



Copyright information

© 2017 Springer International Publishing AG

About this chapter


Cite this chapter

Adcock, B., Brugiapaglia, S., Webster, C.G. (2017). Compressed Sensing Approaches for Polynomial Approximation of High-Dimensional Functions. In: Boche, H., Caire, G., Calderbank, R., März, M., Kutyniok, G., Mathar, R. (eds) Compressed Sensing and its Applications. Applied and Numerical Harmonic Analysis. Birkhäuser, Cham. https://doi.org/10.1007/978-3-319-69802-1_3

