Abstract
In recent years, the use of sparse recovery techniques in the approximation of high-dimensional functions has garnered increasing interest. In this work we present a survey of recent progress in this emerging topic. Our main focus is on the computation of polynomial approximations of high-dimensional functions on d-dimensional hypercubes. We show that smooth, multivariate functions possess expansions in orthogonal polynomial bases that are not only approximately sparse but possess a particular type of structured sparsity defined by so-called lower sets. This structure can be exploited via the use of weighted ℓ1 minimization techniques, and, as we demonstrate, doing so leads to sample complexity estimates that are at most logarithmically dependent on the dimension d. Hence the curse of dimensionality – the bane of high-dimensional approximation – is mitigated to a significant extent. We also discuss several practical issues, including unknown noise (due to truncation or numerical error), and highlight a number of open problems and challenges.
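The core computational step the abstract describes – recovering a sparse coefficient vector from pointwise samples via weighted ℓ1 minimization – can be illustrated with a small sketch. The example below is a univariate simplification (the chapter treats multivariate expansions on lower sets): it samples a sparse Legendre expansion at random points and solves the weighted ℓ1 problem as a linear program. The specific weights w_j = √(2j+1) follow the common choice of the sup-norm of the normalized basis functions, as in the weighted ℓ1 literature cited below; the problem sizes and support are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 40   # number of Legendre coefficients (max degree N - 1)
m = 30   # number of random sample points

# Ground-truth coefficient vector: sparse, concentrated at low degrees.
c_true = np.zeros(N)
c_true[[0, 2, 5]] = [1.0, -0.5, 0.25]

# Sample the polynomial at m points drawn uniformly from [-1, 1].
x_pts = rng.uniform(-1.0, 1.0, m)
A = np.polynomial.legendre.legvander(x_pts, N - 1)   # m x N measurement matrix
b = A @ c_true

# Weights growing with the degree, proportional to the sup-norm of the
# normalized Legendre polynomials (an assumed but standard choice).
w = np.sqrt(2.0 * np.arange(N) + 1.0)

# Weighted l1 minimization as a linear program: write c = u - v with
# u, v >= 0 and minimize w.(u + v) subject to A(u - v) = b.
cost = np.concatenate([w, w])
A_eq = np.hstack([A, -A])
res = linprog(cost, A_eq=A_eq, b_eq=b, method="highs")
c_hat = res.x[:N] - res.x[N:]
```

In the multivariate setting surveyed in the chapter, A would instead contain tensorized orthogonal polynomials indexed by a (candidate) lower set, and in the presence of noise the equality constraint is relaxed to a norm bound, typically solved with specialized software such as SPGL1 [76, 77] rather than a generic LP solver.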
Notes
1. By nonuniform recovery, we mean results that guarantee recovery of a fixed vector c_Λ from a single realization of the random matrix A. Conversely, uniform recovery results consider recovery of all sparse (or structured sparse) vectors from a single realization of A. See, for example, [35] for further discussion.
References
B. Adcock, Infinite-dimensional compressed sensing and function interpolation. Found. Comput. Math., 1–41 (2017). https://doi.org/10.1007/s10208-017-9350-3
B. Adcock, Infinite-dimensional ℓ1 minimization and function approximation from pointwise data. Constr. Approx. 45(3), 345–390 (2017)
B. Adcock, A. Bao, S. Brugiapaglia, Correcting for unknown errors in sparse high-dimensional function approximation (2017). arXiv:1711.07622
B. Adcock, A.C. Hansen, Generalized sampling and infinite-dimensional compressed sensing. Found. Comput. Math. 16(5), 1263–1323 (2016)
R.G. Baraniuk, V. Cevher, M.F. Duarte, C. Hegde, Model-based compressive sensing. IEEE Trans. Inform. Theory 56(4), 1982–2001 (2010)
J. Beck, F. Nobile, L. Tamellini, R. Tempone, Convergence of quasi-optimal Stochastic Galerkin methods for a class of PDEs with random coefficients. Comput. Math. Appl. 67(4), 732–751 (2014)
R.E. Bellman, Adaptive Control Processes: A Guided Tour (Princeton University Press, Princeton, 1961)
J. Bigot, C. Boyer, P. Weiss, An analysis of block sampling strategies in compressed sensing. IEEE Trans. Inform. Theory 64(4), 2125–2139 (2016)
T. Blumensath, Sampling theorems for signals from the union of finite-dimensional linear subspaces. IEEE Trans. Inform. Theory 55(4), 1872–1882 (2009)
J.-L. Bouchot, H. Rauhut, C. Schwab, Multi-level Compressed Sensing Petrov-Galerkin discretization of high-dimensional parametric PDEs (2017). arXiv:1701.01671
S. Brugiapaglia, COmpRessed SolvING: sparse approximation of PDEs based on compressed sensing, Ph.D. thesis, Politecnico di Milano, Milano, 2016
S. Brugiapaglia, B. Adcock, Robustness to unknown error in sparse regularization (2017). arXiv:1705.10299
S. Brugiapaglia, F. Nobile, S. Micheletti, S. Perotto, A theoretical study of compressed solving for advection-diffusion-reaction problems. Math. Comput. 87(309), 1–38 (2018)
H.-J. Bungartz, M. Griebel, Sparse grids. Acta Numer. 13, 147–269 (2004)
E.J. Candès, Y. Plan, A probabilistic and RIPless theory of compressed sensing. IEEE Trans. Inform. Theory 57(11), 7235–7254 (2011)
E.J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 52(1), 489–509 (2006)
A. Chernov, D. Dũng, New explicit-in-dimension estimates for the cardinality of high-dimensional hyperbolic crosses and approximation of functions having mixed smoothness. J. Complexity 32, 92–121 (2016)
A. Chkifa, A. Cohen, R. DeVore, C. Schwab, Sparse adaptive Taylor approximation algorithms for parametric and stochastic elliptic PDEs. Modél. Math. Anal. Numér. 47(1), 253–280 (2013)
A. Chkifa, A. Cohen, G. Migliorati, F. Nobile, R. Tempone, Discrete least squares polynomial approximation with random evaluations – application to parametric and stochastic elliptic PDEs. ESAIM Math. Model. Numer. Anal. 49(3), 815–837 (2015)
A. Chkifa, A. Cohen, C. Schwab, High-dimensional adaptive sparse polynomial interpolation and applications to parametric PDEs. Found. Comput. Math. 14(4), 601–633 (2014)
A. Chkifa, A. Cohen, C. Schwab, Breaking the curse of dimensionality in sparse polynomial approximation of parametric PDEs. J. Math. Pures Appl. 103, 400–428 (2015)
A. Chkifa, N. Dexter, H. Tran, C.G. Webster, Polynomial approximation via compressed sensing of high-dimensional functions on lower sets. Math. Comput. arXiv:1602.05823 (2016, to appear)
I.-Y. Chun, B. Adcock, Compressed sensing and parallel acquisition. IEEE Trans. Inform. Theory 63(8), 4760–4882 (2017). arXiv:1601.06214
A. Cohen, M.A. Davenport, D. Leviatan, On the stability and accuracy of least squares approximations. Found. Comput. Math. 13, 819–834 (2013)
A. Cohen, R.A. DeVore, Approximation of high-dimensional parametric PDEs. Acta Numer. 24, 1–159 (2015)
A. Cohen, R.A. DeVore, C. Schwab, Convergence rates of best N-term Galerkin approximations for a class of elliptic sPDEs. Found. Comput. Math. 10, 615–646 (2010)
A. Cohen, R.A. DeVore, C. Schwab, Analytic regularity and polynomial approximation of parametric and stochastic elliptic PDEs. Anal. Appl. 9(1), 11–47 (2011)
A. Cohen, G. Migliorati, Optimal weighted least-squares methods (2016). arXiv:1608.00512
A. Cohen, G. Migliorati, F. Nobile, Discrete least-squares approximations over optimized downward closed polynomial spaces in arbitrary dimension. Constr. Approx. 45(3), 497–519 (2017)
M.A. Davenport, M.F. Duarte, Y.C. Eldar, G. Kutyniok, Introduction to compressed sensing, in Compressed Sensing: Theory and Applications (Cambridge University Press, Cambridge, 2011)
D.L. Donoho, Compressed sensing. IEEE Trans. Inform. Theory 52(4), 1289–1306 (2006)
A. Doostan, H. Owhadi, A non-adapted sparse approximation of PDEs with stochastic inputs. J. Comput. Phys. 230(8), 3015–3034 (2011)
M.F. Duarte, Y.C. Eldar, Structured compressed sensing: from theory to applications. IEEE Trans. Signal Process. 59(9), 4053–4085 (2011)
S. Foucart, Stability and robustness of ℓ1-minimizations with Weibull matrices and redundant dictionaries. Linear Algebra Appl. 441, 4–21 (2014)
S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing (Birkhauser, Basel, 2013)
D. Gross, Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inform. Theory 57(3), 1548–1566 (2011)
M. Gunzburger, C.G. Webster, G. Zhang, Stochastic finite element methods for partial differential equations with random input data. Acta Numer. 23, 521–650 (2014)
M. Gunzburger, C.G. Webster, G. Zhang, Sparse collocation methods for stochastic interpolation and quadrature, in Handbook of Uncertainty Quantification (Springer, New York, 2016), pp. 1–46
L. Guo, A. Narayan, T. Zhou, Y. Chen, Stochastic collocation methods via L1 minimization using randomized quadratures. SIAM J. Sci. Comput. 39(1), A333–A359 (2017). arXiv:1602.00995
J. Hampton, A. Doostan, Coherence motivated sampling and convergence analysis of least squares polynomial Chaos regression. Comput. Methods Appl. Mech. Eng. 290, 73–97 (2015)
J. Hampton, A. Doostan, Compressive sampling of polynomial chaos expansions: convergence analysis and sampling strategies. J. Comput. Phys. 280, 363–386 (2015)
V.H. Hoang, C. Schwab, Regularity and generalized polynomial chaos approximation of parametric and random 2nd order hyperbolic partial differential equations. Anal. Appl. 10(3), 295–326 (2012)
J.D. Jakeman, M.S. Eldred, K. Sargsyan, Enhancing ℓ1-minimization estimates of polynomial chaos expansions using basis selection. J. Comput. Phys. 289, 18–34 (2015). arXiv:1407.8093
J.D. Jakeman, A. Narayan, T. Zhou, A generalized sampling and preconditioning scheme for sparse approximation of polynomial chaos expansions. SIAM J. Sci. Comput. 39(3), A1114–A1144 (2017). arXiv:1602.06879
T. Kühn, W. Sickel, T. Ullrich, Approximation of mixed order Sobolev functions on the d-torus: asymptotics, preasymptotics, and d-dependence. Constr. Approx. 42(3), 353–398 (2015)
O.P. Le Maître, O.M. Knio, Spectral Methods for Uncertainty Quantification (Springer, New York, 2010)
L. Mathelin, K.A. Gallivan, A compressed sensing approach for partial differential equations with random input data. Commun. Comput. Phys. 12(4), 919–954 (2012)
G. Migliorati, Polynomial approximation by means of the random discrete L2 projection and application to inverse problems for PDEs with stochastic data, Ph.D. thesis, Politecnico di Milano, Milano, 2013
G. Migliorati, Multivariate Markov-type and Nikolskii-type inequalities for polynomials associated with downward closed multi-index sets. J. Approx. Theory 189, 137–159 (2015)
G. Migliorati, F. Nobile, Analysis of discrete least squares on multivariate polynomial spaces with evaluations at low-discrepancy point sets. J. Complexity 31(4), 517–542 (2015)
G. Migliorati, F. Nobile, E. von Schwerin, R. Tempone, Analysis of the discrete L2 projection on polynomial spaces with random evaluations. Found. Comput. Math. 14, 419–456 (2014)
A. Narayan, T. Zhou, Stochastic collocation on unstructured multivariate meshes. Commun. Comput. Phys. 18(1), 1–36 (2015)
A. Narayan, J.D. Jakeman, T. Zhou, A Christoffel function weighted least squares algorithm for collocation approximations. Math. Comput. 86(306), 1913–1947 (2014). arXiv:1412.4305
F. Nobile, R. Tempone, C.G. Webster, An anisotropic sparse grid stochastic collocation method for partial differential equations with random input data. SIAM J. Numer. Anal. 46(5), 2411–2442 (2008)
F. Nobile, R. Tempone, C.G. Webster, A sparse grid stochastic collocation method for partial differential equations with random input data. SIAM J. Numer. Anal. 46(5), 2309–2345 (2008)
J. Peng, J. Hampton, A. Doostan, A weighted ℓ1-minimization approach for sparse polynomial chaos expansions. J. Comput. Phys. 267, 92–111 (2014)
J. Peng, J. Hampton, A. Doostan, On polynomial chaos expansion via gradient-enhanced ℓ1-minimization. J. Comput. Phys. 310, 440–458 (2016)
H. Rauhut, Random sampling of sparse trigonometric polynomials. Appl. Comput. Harmon. Anal. 22(1), 16–42 (2007)
H. Rauhut, C. Schwab, Compressive sensing Petrov-Galerkin approximation of high dimensional parametric operator equations. Math. Comput. 86, 661–700 (2017)
H. Rauhut, R. Ward, Sparse Legendre expansions via ℓ1-minimization. J. Approx. Theory 164(5), 517–533 (2012)
H. Rauhut, R. Ward, Interpolation via weighted ℓ1 minimization. Appl. Comput. Harmon. Anal. 40(2), 321–351 (2016)
M.K. Stoyanov, C.G. Webster, A dynamically adaptive sparse grid method for quasi-optimal interpolation of multidimensional functions. Comput. Math. Appl. 71(11), 2449–2465 (2016)
G. Szegö, Orthogonal Polynomials (American Mathematical Society, Providence, RI, 1975)
G. Tang, G. Iaccarino, Subsampled Gauss quadrature nodes for estimating polynomial chaos expansions. SIAM/ASA J. Uncertain. Quantif. 2(1), 423–443 (2014)
H. Tran, C.G. Webster, G. Zhang, Analysis of quasi-optimal polynomial approximations for parameterized PDEs with deterministic and stochastic coefficients. Numer. Math. 137(2), 451–493 (2017). arXiv:1508.01821
Y. Traonmilin, R. Gribonval, Stable recovery of low-dimensional cones in Hilbert spaces: one RIP to rule them all. Appl. Comput. Harmon. Anal. (2017). https://doi.org/10.1016/j.acha.2016.08.004
E. van den Berg, M.P. Friedlander, SPGL1: a solver for large-scale sparse reconstruction (June 2007), http://www.cs.ubc.ca/labs/scl/spgl1
E. van den Berg, M.P. Friedlander, Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31(2), 890–912 (2008)
C.G. Webster, Sparse grid stochastic collocation techniques for the numerical solution of partial differential equations with random input data, Ph.D. thesis, Florida State University, Tallahassee, 2007
P. Wojtaszczyk, Stability and instance optimality for Gaussian measurements in compressed sensing. Found. Comput. Math. 10(1), 1–13 (2010)
Z. Xu, T. Zhou, On sparse interpolation and the design of deterministic interpolation points. SIAM J. Sci. Comput. 36(4), 1752–1769 (2014)
L. Yan, L. Guo, D. Xiu, Stochastic collocation algorithms using ℓ1-minimization. Int. J. Uncertain. Quantif. 2(3), 279–293 (2012)
X. Yang, G.E. Karniadakis, Reweighted ℓ1 minimization method for stochastic elliptic differential equations. J. Comput. Phys. 248, 87–108 (2013)
X. Yang, H. Lei, N.A. Baker, G. Lin, Enhancing sparsity of Hermite polynomial expansions by iterative rotations. J. Comput. Phys. 307, 94–109 (2016). arXiv:1506.04344
Acknowledgements
The first and second authors acknowledge the support of the Alfred P. Sloan Foundation and the Natural Sciences and Engineering Research Council of Canada through grant 611675. The second author acknowledges the Postdoctoral Training Center in Stochastics of the Pacific Institute for the Mathematical Sciences for its support. The third author acknowledges support by the US Defense Advanced Research Projects Agency, Defense Sciences Office under contract and award numbers HR0011619523 and 1868-A017-15; the US Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics program under contract numbers ERKJ259 and ERKJ314; and the Laboratory Directed Research and Development program at the Oak Ridge National Laboratory, which is operated by UT-Battelle, LLC., for the US Department of Energy under Contract DE-AC05-00OR22725.
Copyright information
© 2017 Springer International Publishing AG
Cite this chapter
Adcock, B., Brugiapaglia, S., Webster, C.G. (2017). Compressed Sensing Approaches for Polynomial Approximation of High-Dimensional Functions. In: Boche, H., Caire, G., Calderbank, R., März, M., Kutyniok, G., Mathar, R. (eds) Compressed Sensing and its Applications. Applied and Numerical Harmonic Analysis. Birkhäuser, Cham. https://doi.org/10.1007/978-3-319-69802-1_3
DOI: https://doi.org/10.1007/978-3-319-69802-1_3
Publisher Name: Birkhäuser, Cham
Print ISBN: 978-3-319-69801-4
Online ISBN: 978-3-319-69802-1