Robust Gaussian Process-Based Global Optimization Using a Fully Bayesian Expected Improvement Criterion

  • Conference paper
Learning and Intelligent Optimization (LION 2011)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6683)

Abstract

We consider the problem of optimizing a real-valued continuous function f, which is assumed to be expensive to evaluate and, consequently, can only be evaluated a limited number of times. This article focuses on the Bayesian approach to this problem, which combines evaluation results and prior information about f in order to select new evaluation points efficiently, as long as the budget for evaluations is not exhausted.

The algorithm called efficient global optimization (EGO), proposed by Jones, Schonlau and Welch (J. Global Optim., 13(4):455–492, 1998), is one of the most popular Bayesian optimization algorithms. It is based on a sampling criterion called the expected improvement (EI), which assumes a Gaussian process prior on f. In the EGO algorithm, the parameters of the covariance of the Gaussian process are estimated from the evaluation results by maximum likelihood, and these parameters are then plugged into the EI sampling criterion. However, it is well known that this plug-in strategy can lead to very disappointing results when the evaluation results do not carry enough information about f to estimate the parameters satisfactorily.
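To make the plug-in strategy concrete, the sketch below implements the closed-form Gaussian EI for minimization, EI(x) = σ(x)[u Φ(u) + φ(u)] with u = (m − μ(x))/σ(x), inside a minimal EGO-style loop. This is an illustrative reconstruction, not the authors' code: the toy objective, the scikit-learn Gaussian process (whose fit performs the maximum-likelihood estimation of the covariance parameters), and all variable names are our own assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, m_best):
    """Closed-form EI for minimization under a Gaussian predictive
    distribution with mean mu and standard deviation sigma."""
    sigma = np.maximum(sigma, 1e-12)          # guard against zero predictive variance
    u = (m_best - mu) / sigma                 # standardized improvement
    return sigma * (u * norm.cdf(u) + norm.pdf(u))

def f(x):                                     # stand-in "expensive" objective
    return np.sin(3 * x) + 0.5 * x

# Minimal EGO-style loop on a toy 1-d problem (hypothetical setup).
X = np.array([[0.2], [1.0], [2.5]])           # initial design
y = f(X).ravel()
candidates = np.linspace(0.0, 3.0, 200).reshape(-1, 1)

for _ in range(10):                           # evaluation budget
    # ML estimation of the covariance parameters: the plug-in step
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next))

print("best value found:", y.min())
```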

We advocate a fully Bayesian approach to this problem, and derive an analytical expression for the EI criterion in the case of Student predictive distributions. Numerical experiments show that the fully Bayesian approach makes EI-based optimization more robust while maintaining an average loss similar to that of the EGO algorithm.
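By way of illustration, the Student analogue of the Gaussian EI formula above has the following well-known closed form, valid for ν > 1 degrees of freedom: EI(x) = σ(x)[u F_ν(u) + (ν + u²)/(ν − 1) f_ν(u)], where f_ν and F_ν are the standard Student pdf and cdf and u = (m − μ(x))/σ(x). We give the sketch below as an example of the kind of criterion the paper derives, not as its exact expression; the notation is ours.

```python
import numpy as np
from scipy.stats import t as student_t

def expected_improvement_student(mu, sigma, nu, m_best):
    """EI for minimization when the predictive distribution is
    Y = mu + sigma * Z, with Z standard Student with nu > 1
    degrees of freedom:

        EI = sigma * (u * F_nu(u) + (nu + u**2) / (nu - 1) * f_nu(u)),
        u  = (m_best - mu) / sigma.
    """
    assert nu > 1, "the Student EI is finite only for nu > 1"
    sigma = np.maximum(sigma, 1e-12)          # guard against zero predictive variance
    u = (m_best - mu) / sigma
    return sigma * (u * student_t.cdf(u, df=nu)
                    + (nu + u ** 2) / (nu - 1) * student_t.pdf(u, df=nu))
```

In a fully Bayesian treatment, a criterion of this kind is then averaged over the posterior distribution of any remaining covariance parameters, typically by Monte Carlo; see the paper for the precise construction.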


References

  1. Törn, A., Zilinskas, A.: Global Optimization. Springer, Berlin (1989)

  2. Pintér, J.D.: Global optimization. Continuous and Lipschitz optimization: algorithms, implementations and applications. Springer, Heidelberg (1996)

  3. Zhigljavsky, A., Zilinskas, A.: Stochastic global optimization. Springer, Heidelberg (2007)

  4. Conn, A.R., Scheinberg, K., Vicente, L.N.: Introduction to derivative-free optimization. SIAM, Philadelphia (2009)

  5. Tenne, Y., Goh, C.K.: Computational intelligence in optimization: applications and implementations. Springer, Heidelberg (2010)

  6. Mockus, J., Tiesis, V., Zilinskas, A.: The application of Bayesian methods for seeking the extremum. In: Dixon, L., Szego, G. (eds.) Towards Global Optimization, vol. 2, pp. 117–129. Elsevier, Amsterdam (1978)

  7. Mockus, J.: Bayesian approach to Global Optimization: Theory and Applications. Kluwer Acad. Publ., Dordrecht (1989)

  8. Betrò, B.: Bayesian methods in global optimization. Journal of Global Optimization 1, 1–14 (1991)

  9. Locatelli, M., Schoen, F.: An adaptive stochastic global optimization algorithm for one-dimensional functions. Annals of Operations Research 58(4), 261–278 (1995)

  10. Auger, A., Teytaud, O.: Continuous lunches are free plus the design of optimal optimization algorithms. Algorithmica 57(1), 121–146 (2008)

  11. Ginsbourger, D., Le Riche, R.: Towards Gaussian process-based optimization with finite time horizon. In: mODa 9 Advances in Model-Oriented Design and Analysis. Contributions to Statistics, pp. 89–96. Springer, Heidelberg (2010)

  12. Grünewälder, S., Audibert, J.-Y., Opper, M., Shawe-Taylor, J.: Regret bounds for Gaussian process bandit problems. In: Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS 2010). JMLR W&CP, vol. 9, pp. 273–280 (2010)

  13. Bertsekas, D.P.: Dynamic programming and optimal control. Athena Scientific, Belmont (1995)

  14. Jones, D.R., Schonlau, M., Welch, W.J.: Efficient global optimization of expensive black-box functions. Journal of Global Optimization 13(4), 455–492 (1998)

  15. Forrester, A.I.J., Jones, D.R.: Global optimization of deceptive functions with sparse sampling. In: 12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, September 10-12 (2008)

  16. Locatelli, M.: Bayesian algorithms for one-dimensional global optimization. Journal of Global Optimization 10(1), 57–76 (1997)

  17. Osborne, M.A.: Bayesian Gaussian Processes for Sequential Prediction, Optimisation and Quadrature. PhD thesis, University of Oxford (2010)

  18. Osborne, M.A., Garnett, R., Roberts, S.J.: Gaussian processes for global optimization. In: 3rd International Conference on Learning and Intelligent Optimization (LION3), Online Proceedings, Trento, Italy (2009)

  19. Osborne, M.A., Roberts, S.J., Rogers, A., Ramchurn, S.D., Jennings, N.R.: Towards real-time information processing of sensor network data using computationally efficient multi-output Gaussian processes. In: Proceedings of the 7th International Conference on Information Processing in Sensor Networks, pp. 109–120. IEEE Computer Society, Los Alamitos (2008)

  20. Williams, B., Santner, T., Notz, W.: Sequential Design of Computer Experiments to Minimize Integrated Response Functions. Statistica Sinica 10(4), 1133–1152 (2000)

  21. Schonlau, M.: Computer experiments and global optimization. PhD thesis, University of Waterloo, Waterloo, Ontario, Canada (1997)

  22. Schonlau, M., Welch, W.J.: Global optimization with nonparametric function fitting. In: Proceedings of the ASA, Section on Physical and Engineering Sciences, pp. 183–186. Amer. Statist. Assoc. (1996)

  23. Schonlau, M., Welch, W.J., Jones, D.R.: A data analytic approach to Bayesian global optimization. In: Proceedings of the ASA, Section on Physical and Engineering Sciences, pp. 186–191. Amer. Statist. Assoc. (1997)

  24. Forrester, A.I.J., Keane, A.J.: Recent advances in surrogate-based optimization. Progress in Aerospace Sciences 45(1-3), 50–79 (2009)

  25. Jones, D.R.: A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization 21(4), 345–383 (2001)

  26. Robert, C.P., Casella, G.: Monte Carlo statistical methods. Springer, Heidelberg (2004)

  27. Del Moral, P., Doucet, A., Jasra, A.: Sequential Monte Carlo samplers. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 68(3), 411–436 (2006)

  28. Liu, J.S.: Monte Carlo strategies in scientific computing. Springer, Heidelberg (2008)

  29. O’Hagan, A.: Bayes-Hermite quadrature. Journal of Statistical Planning and Inference 29(3), 245–260 (1991)

  30. O’Hagan, A.: Curve Fitting and Optimal Design for Prediction. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 40(1), 1–42 (1978)

  31. Handcock, M.S., Stein, M.L.: A Bayesian analysis of Kriging. Technometrics 35(4), 403–410 (1993)

  32. Ginsbourger, D., Helbert, C., Carraro, L.: Discrete mixtures of kernels for kriging-based optimization. Quality and Reliability Engineering International 24, 681–691 (2008)

  33. O’Hagan, A.: Some Bayesian numerical analysis. In: Bayesian Statistics 4: Proceedings of the Fourth Valencia International Meeting, April 15-20, 1991. Oxford University Press, Oxford (1992)

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Benassi, R., Bect, J., Vazquez, E. (2011). Robust Gaussian Process-Based Global Optimization Using a Fully Bayesian Expected Improvement Criterion. In: Coello, C.A.C. (ed.) Learning and Intelligent Optimization. LION 2011. Lecture Notes in Computer Science, vol. 6683. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25566-3_13

  • DOI: https://doi.org/10.1007/978-3-642-25566-3_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-25565-6

  • Online ISBN: 978-3-642-25566-3

  • eBook Packages: Computer Science, Computer Science (R0)
