
Statistics and Computing, Volume 29, Issue 6, pp. 1215–1229

Fast automatic Bayesian cubature using lattice sampling

  • R. Jagadeeswaran
  • Fred J. Hickernell

Abstract

Automatic cubatures approximate integrals to user-specified error tolerances. For high-dimensional problems, it is difficult to adaptively change the sampling pattern, but one can automatically determine the sample size, n, given a reasonable, fixed sampling pattern. We take this approach here using a Bayesian perspective. We postulate that the integrand is an instance of a Gaussian stochastic process parameterized by a constant mean and a covariance kernel defined by a scale parameter times a parameterized function specifying how the integrand values at two different points in the domain are related. These hyperparameters are inferred or integrated out using integrand values via one of three techniques: empirical Bayes, full Bayes, or generalized cross-validation. The sample size, n, is increased until the half-width of the credible interval for the Bayesian posterior mean is no greater than the error tolerance. The process outlined above typically requires a computational cost of \(O(N_{\text {opt}}n^3)\), where \(N_{\text {opt}}\) is the number of optimization steps required to identify the hyperparameters. Our innovation is to pair low discrepancy nodes with matching covariance kernels to lower the computational cost to \(O(N_{\text {opt}} n \log n)\). This approach is demonstrated explicitly with rank-1 lattice sequences and shift-invariant kernels. Our algorithm is implemented in the Guaranteed Automatic Integration Library (GAIL).
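The structural idea behind the \(O(N_{\text {opt}} n \log n)\) cost can be illustrated with a short sketch: on a rank-1 lattice, a shift-invariant kernel yields a circulant Gram matrix, whose eigendecomposition and linear solves reduce to FFTs. The sketch below is not the paper's algorithm or GAIL's MATLAB implementation; it is a minimal Python illustration assuming a toy generating vector `z` and a product kernel built from the degree-2 Bernoulli polynomial, a common shift-invariant choice.

```python
import numpy as np

def rank1_lattice(n, z):
    """Rank-1 lattice nodes x_i = {i * z / n} in [0, 1)^d."""
    i = np.arange(n)[:, None]
    return (i * np.asarray(z)[None, :] / n) % 1.0

def shift_invariant_kernel(delta, eta=1.0):
    """Product kernel prod_j (1 + eta * B2({delta_j})), with
    B2(t) = t^2 - t + 1/6 the degree-2 Bernoulli polynomial."""
    b2 = delta**2 - delta + 1.0 / 6.0
    return np.prod(1.0 + eta * b2, axis=-1)

n, d = 8, 3
z = np.array([1, 3, 5])          # toy generating vector, not from the paper
x = rank1_lattice(n, z)

# Gram matrix K[i, j] = k({x_i - x_j}); on a lattice the difference
# {x_i - x_j} depends only on (i - j) mod n, so K is circulant:
K = shift_invariant_kernel((x[:, None, :] - x[None, :, :]) % 1.0)
assert np.allclose(K, np.array([np.roll(K[0], i) for i in range(n)]))

# Circulant => eigenvalues are the FFT of the first column, and a solve
# K w = y costs O(n log n) instead of the O(n^3) dense factorization:
lam = np.fft.fft(K[:, 0])
y = np.sin(2 * np.pi * x[:, 0])  # stand-in for integrand values
w_fft = np.real(np.fft.ifft(np.fft.fft(y) / lam))

w_direct = np.linalg.solve(K, y)  # O(n^3) reference solve
assert np.allclose(w_fft, w_direct)
```

Since every quantity in the credible-interval stopping rule (posterior mean, scale estimate, hyperparameter likelihood) is built from such solves and eigenvalues, the per-iteration cost drops to O(n log n) once the kernel matches the node set.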

Keywords

Bayesian cubature · Fast automatic cubature · GAIL · Probabilistic numeric methods

Acknowledgements

This research was supported in part by the National Science Foundation Grants DMS-1522687 and DMS-1638521 (SAMSI). The authors would like to thank the organizers of the SAMSI-Lloyds-Turing Workshop on Probabilistic Numerical Methods, where a preliminary version of this work was discussed. The authors also thank Chris Oates and Sou-Cheng Choi for valuable comments.

References

  1. Briol, F.-X., Oates, C.J., Girolami, M., Osborne, M.A., Sejdinovic, D.: Probabilistic integration: a role in statistical computation? Stat. Sci. 34, 1–22 (2019)
  2. Choi, S.-C.T., Ding, Y., Hickernell, F.J., Jiang, L., Jiménez Rugama, L.A., Li, D., Jagadeeswaran, R., Tong, X., Zhang, K., Zhang, Y., Zhou, X.: GAIL: guaranteed automatic integration library (versions 1.0–2.3). MATLAB Software. http://gailgithub.github.io/GAIL_Dev/ (2019). Accessed 3 Sept 2019
  3. Cools, R., Nuyens, D. (eds.): Monte Carlo and Quasi-Monte Carlo Methods: MCQMC 2014, Leuven, Belgium, April 2014. Springer Proceedings in Mathematics and Statistics, vol. 163. Springer, Berlin (2016)
  4. Craven, P., Wahba, G.: Smoothing noisy data with spline functions: estimating the correct degree of smoothing by the method of generalized cross-validation. Numer. Math. 31, 377–403 (1979)
  5. Diaconis, P.: Bayesian numerical analysis. In: Gupta, S.S., Berger, J.O. (eds.) Statistical Decision Theory and Related Topics IV, Papers from the 4th Purdue Symposium, West Lafayette, Indiana 1986, vol. 1, pp. 163–175. Springer, New York (1988)
  6. Dick, J., Pillichshammer, F.: Digital Nets and Sequences: Discrepancy Theory and Quasi-Monte Carlo Integration. Cambridge University Press, Cambridge (2010)
  7. Dick, J., Kuo, F., Sloan, I.H.: High dimensional integration—the quasi-Monte Carlo way. Acta Numer. 22, 133–288 (2013). https://doi.org/10.1017/S0962492913000044
  8. Genz, A.: Comparison of methods for the computation of multivariate normal probabilities. Comput. Sci. Stat. 25, 400–405 (1993)
  9. Golub, G.H., Heath, M., Wahba, G.: Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics 21, 215–223 (1979)
  10. Hickernell, F.J.: Quadrature error bounds with applications to lattice rules. SIAM J. Numer. Anal. 33, 1995–2016 (1996). Corrected printing of sections 3–6 in ibid., 34, 853–866 (1997)
  11. Hickernell, F.J.: The trio identity for quasi-Monte Carlo error analysis. In: Glynn, P., Owen, A. (eds.) Monte Carlo and Quasi-Monte Carlo Methods: MCQMC, Stanford, USA, August 2016. Springer Proceedings in Mathematics and Statistics, pp. 3–27. Springer, Berlin (2018). https://doi.org/10.1007/978-3-319-91436-7
  12. Hickernell, F.J., Jiménez Rugama, L.A.: Reliable adaptive cubature using digital sequences. In: Cools and Nuyens, pp. 367–383 (2016). arXiv:1410.8615 [math.NA]
  13. Hickernell, F.J., Niederreiter, H.: The existence of good extensible rank-1 lattices. J. Complex. 19, 286–300 (2003)
  14. Hickernell, F.J., Jiménez Rugama, L.A., Li, D.: Adaptive quasi-Monte Carlo methods for cubature. In: Dick, J., Kuo, F.Y., Woźniakowski, H. (eds.) Contemporary Computational Mathematics—A Celebration of the 80th Birthday of Ian Sloan, pp. 597–619. Springer, Berlin (2018). https://doi.org/10.1007/978-3-319-72456-0
  15. Jiménez Rugama, L.A., Hickernell, F.J.: Adaptive multidimensional integration based on rank-1 lattices. In: Cools and Nuyens, pp. 407–422 (2016). arXiv:1411.1966
  16. Keister, B.D.: Multidimensional quadrature algorithms. Comput. Phys. 10, 119–122 (1996). https://doi.org/10.1063/1.168565
  17. O’Hagan, A.: Bayes–Hermite quadrature. J. Stat. Plan. Inference 29, 245–260 (1991). https://doi.org/10.1016/0378-3758(91)90002-V
  18. Olver, F.W.J., Lozier, D.W., Boisvert, R.F., Clark, C.W., Dalhuis, A.B.O.: Digital Library of Mathematical Functions. http://dlmf.nist.gov/ (2018). Accessed 3 Sept 2019
  19. Rasmussen, C.E., Ghahramani, Z.: Bayesian Monte Carlo. In: Thrun, S., Saul, L.K., Obermayer, K. (eds.) Advances in Neural Information Processing Systems, vol. 15, pp. 489–496. MIT Press, Cambridge (2003)
  20. Rasmussen, C.E., Williams, C.: Gaussian Processes for Machine Learning. MIT Press, Cambridge (2006)
  21. Ritter, K.: Average-Case Analysis of Numerical Problems. Lecture Notes in Mathematics, vol. 1733. Springer, Berlin (2000)
  22. Sidi, A.: Further extension of a class of periodizing variable transformations for numerical integration. J. Comput. Appl. Math. 221, 132–149 (2008)
  23. Sloan, I.H., Joe, S.: Lattice Methods for Multiple Integration. Oxford University Press, Oxford (1994)
  24. Wahba, G.: Spline Models for Observational Data. CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 59. SIAM, Philadelphia (1990)

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Applied Mathematics, Illinois Institute of Technology, Chicago, USA
  2. Department of Applied Mathematics, Center for Interdisciplinary Scientific Computation, Illinois Institute of Technology, Chicago, USA
