Constructive Approximation

Volume 50, Issue 1, pp 167–207

Compressive Hermite Interpolation: Sparse, High-Dimensional Approximation from Gradient-Augmented Measurements

  • Ben Adcock
  • Yi Sui

Abstract

We consider the sparse polynomial approximation of a multivariate function on a tensor product domain from samples of both the function and its gradient. When only function samples are prescribed, weighted \(\ell ^1\) minimization has recently been shown to be an effective procedure for computing such approximations. We extend this work to the gradient-augmented case. Our main results show that, for the same asymptotic sample complexity, gradient-augmented measurements achieve an approximation error bound in a stronger Sobolev norm, as opposed to the \(L^2\)-norm in the unaugmented case. For Chebyshev and Legendre polynomial approximations, this sample complexity estimate is algebraic in the sparsity s and at most logarithmic in the dimension d, thus mitigating the curse of dimensionality to a substantial extent. We also present several numerical experiments illustrating the benefits of gradient information over an equivalent number of function samples alone.
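The recovery procedure sketched in the abstract can be illustrated in one dimension. The following is a minimal toy example, not the paper's algorithm: it recovers a function that is exactly sparse in the Chebyshev basis from a small number of function and derivative samples by solving a weighted \(\ell ^1\) minimization problem, reformulated as a linear program. The sparse test function, sampling density, weights, and problem sizes are all illustrative assumptions chosen for this sketch.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Ground truth: a 3-sparse coefficient vector in the Chebyshev basis T_0,...,T_{N-1}
# (illustrative choice; the paper treats general compressible multivariate functions).
N = 30
true_c = np.zeros(N)
true_c[[1, 2, 7]] = [-0.3, 1.0, 0.5]

# m sample points drawn from the Chebyshev (arcsine) density on [-1, 1].
m = 12
x = np.cos(rng.uniform(0.0, np.pi, m))

# Gradient-augmented measurement matrix: rows for T_j(x_i) and rows for T_j'(x_i).
V = C.chebvander(x, N - 1)                                    # V[i, j]  = T_j(x_i)
dV = np.column_stack(
    [C.chebval(x, C.chebder(np.eye(N)[j])) for j in range(N)]  # dV[i, j] = T_j'(x_i)
)
A = np.vstack([V, dV])          # 2m measurements: function values + derivative values
b = A @ true_c                  # noiseless data for this sketch

# Illustrative weights (sup-norms of the L^2-normalized Chebyshev polynomials).
w = np.ones(N)
w[1:] = np.sqrt(2.0)

# Weighted l1 minimization:  min sum_j w_j |c_j|  s.t.  A c = b,
# posed as an LP in variables (c, t) with -t <= c <= t, t >= 0.
cost = np.concatenate([np.zeros(N), w])
A_eq = np.hstack([A, np.zeros((2 * m, N))])
I = np.eye(N)
A_ub = np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])])
b_ub = np.zeros(2 * N)
bounds = [(None, None)] * N + [(0, None)] * N
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=bounds, method="highs")
c_hat = res.x[:N]
```

With 2m = 24 gradient-augmented measurements against N = 30 unknowns, the program is underdetermined, yet the 3-sparse coefficient vector is recovered to solver precision; halving the data by discarding the derivative rows (keeping only `V`) degrades or destroys recovery, which is the trade-off the paper quantifies.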

Keywords

Multivariate approximation · Orthogonal polynomials · Nonlinear approximation · Compressed sensing

Mathematics Subject Classification

41A25 · 41A05 · 41A10 · 65N15

Acknowledgements

This work is supported in part by the NSERC Grant 611675 and an Alfred P. Sloan Research Fellowship. Yi Sui also acknowledges support from an NSERC PGSD scholarship.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Mathematics, Simon Fraser University, Burnaby, Canada