Expected improvement for expensive optimization: a review

Published in: Journal of Global Optimization

Abstract

The expected improvement (EI) algorithm is a very popular method for expensive optimization problems. Over the past twenty years, the EI criterion has been extended to handle a wide range of expensive optimization problems. This paper gives a comprehensive review of the EI extensions designed for parallel optimization, multiobjective optimization, constrained optimization, noisy optimization, multi-fidelity optimization and high-dimensional optimization. The main challenges of extending the EI approach to these complex optimization problems are pointed out, and the ideas proposed in the literature to tackle them are highlighted. For each reviewed algorithm, the surrogate modeling method, the computation of the infill criterion and the internal optimization of the infill criterion are carefully studied and compared. In addition, the monotonicity properties of the multiobjective EI criteria and constrained EI criteria are analyzed in detail. Through this review, we give an organized summary of the EI developments of the past twenty years and a clear picture of how the EI approach has advanced. At the end of this paper, several interesting open problems and future research topics on EI are given.
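For concreteness, the classical EI criterion for minimization has a closed form under a Gaussian predictive distribution: with surrogate mean μ(x), standard deviation σ(x), and current best observed value f_min, EI(x) = (f_min − μ(x))Φ(z) + σ(x)φ(z), where z = (f_min − μ(x))/σ(x). The sketch below is an illustrative implementation of this formula (the function name `expected_improvement` is our own, not from the paper); a full EI algorithm would combine it with a fitted kriging/Gaussian-process surrogate.

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Closed-form EI for minimization, assuming a Gaussian predictive
    distribution with mean `mu` and standard deviation `sigma`;
    `f_min` is the best objective value observed so far."""
    if sigma <= 0.0:
        # Degenerate case: the prediction is treated as certain.
        return max(f_min - mu, 0.0)
    z = (f_min - mu) / sigma
    # Standard normal CDF and PDF via the error function (stdlib only).
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (f_min - mu) * cdf + sigma * pdf

# At a point whose prediction equals the current best (z = 0),
# EI reduces to sigma * phi(0) ≈ 0.3989 * sigma.
print(expected_improvement(0.0, 1.0, 0.0))
```

EI is always nonnegative, grows with predictive uncertainty σ, and vanishes where the surrogate confidently predicts values worse than f_min — the exploration/exploitation balance that the extensions reviewed here build on.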


References

  1. Amine Bouhlel, M., Bartoli, N., Regis, R.G., Otsmane, A., Morlier, J.: Efficient global optimization for high-dimensional constrained problems by using the kriging models combined with the partial least squares method. Eng. Optim. 50(12), 2038–2053 (2018)

    MathSciNet  Google Scholar 

  2. Bartoli, N., Lefebvre, T., Dubreuil, S., Olivanti, R., Bons, N., Martins, J.R.R.A., Bouhlel, M.A., Morlier, J.: An adaptive optimization strategy based on mixture of experts for wing aerodynamic design optimization. In: 18th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference. American Institute of Aeronautics and Astronautics Inc, AIAA (2017)

  3. Bartoli, N., Lefebvre, T., Dubreuil, S., Olivanti, R., Priem, R., Bons, N., Martins, J.R.R.A., Morlier, J.: Adaptive modeling strategy for constrained global optimization with application to aerodynamic wing design. Aerosp. Sci. Technol. 90, 85–102 (2019)

    Google Scholar 

  4. Bartz-Beielstein, T., Lasarczyk, C.W.G., Preuss, M.: Sequential parameter optimization. IEEE Cong. Evolut. Comput. 1, 773–780 (2005)

    MATH  Google Scholar 

  5. Basudhar, A., Dribusch, C., Lacaze, S., Missoum, S.: Constrained efficient global optimization with support vector machines. Struct. Multidiscip. Optim. 46(2), 201–221 (2012)

    MATH  Google Scholar 

  6. Bautista, D.C.: A sequential design for approximating the pareto front using the expected pareto improvement function. Ph.D. thesis, The Ohio State University (2009)

  7. Beaucaire, P., Beauthier, C., Sainvitu, C.: Multi-point infill sampling strategies exploiting multiple surrogate models. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 1559–1567. ACM (2019)

  8. Bect, J., Bachoc, F., Ginsbourger, D.: A supermartingale approach to gaussian process based sequential design of experiments. Bernoulli 25(4A), 2883–2919 (2019)

    MathSciNet  MATH  Google Scholar 

  9. Bect, J., Ginsbourger, D., Li, L., Picheny, V., Vazquez, E.: Sequential design of computer experiments for the estimation of a probability of failure. Stat. Comput. 22(3), 773–793 (2012)

    MathSciNet  MATH  Google Scholar 

  10. Benassi, R., Bect, J., Vazquez, E.: Robust gaussian process-based global optimization using a fully bayesian expected improvement criterion. In: Learning and Intelligent Optimization. Lecture Notes in Computer Science, pp. 176–190. Springer, Berlin (2011)

  11. Bergstra, J.S., Bardenet, R., Bengio, Y., Kégl, B.: Algorithms for hyper-parameter optimization. In: Neural Information Processing Systems, pp. 2546–2554 (2011)

  12. Berk, J., Nguyen, V., Gupta, S., Rana, S., Venkatesh, S.: Exploration enhanced expected improvement for bayesian optimization. Machine Learning and Knowledge Discovery in Databases. Lecture Notes in Computer Science, pp. 621–637. Springer, Cham (2019)

    Google Scholar 

  13. Binois, M., Ginsbourger, D., Roustant, O.: Quantifying uncertainty on pareto fronts with gaussian process conditional simulations. Eur. J. Oper. Res. 243(2), 386–394 (2015)

    MathSciNet  MATH  Google Scholar 

  14. Binois, M., Ginsbourger, D., Roustant, O.: A warped kernel improving robustness in Bayesian optimization via random embeddings. In: International Conference on Learning and Intelligent Optimization, pp. 281–286 (2015)

  15. Binois, M., Ginsbourger, D., Roustant, O.: On the choice of the low-dimensional domain for global optimization via random embeddings. J. Global Optim. 76(1), 69–90 (2020)

    MathSciNet  MATH  Google Scholar 

  16. Binois, M., Picheny, V.: GPareto: An R package for gaussian-process-based multi-objective optimization and analysis. J. Stat. Softw. 89(8), 30 (2019)

    Google Scholar 

  17. Bischl, B., Wessing, S., Bauer, N., Friedrichs, K., Weihs, C.: MOI-MBO: multiobjective infill for parallel model-based optimization. In: Pardalos, P.M., Resende, M.G.C., Vogiatzis, C., Walteros, J.L. (eds.) Learning and Intelligent Optimization. Lecture Notes in Computer Science, vol. 8426, pp. 173–186. Springer, Berlin (2014)

    Google Scholar 

  18. Bull, A.D.: Convergence rates of efficient global optimization algorithms. J. Mach. Learn. Res. 12, 2879–2904 (2011)

    MathSciNet  MATH  Google Scholar 

  19. Cai, X., Qiu, H., Gao, L., Yang, P., Shao, X.: A multi-point sampling method based on kriging for global optimization. Struct. Multidiscip. Optim. 56(1), 71–88 (2017)

    MathSciNet  Google Scholar 

  20. Chaudhuri, A., Haftka, R., Watson, L.: How to decide whether to run one more cycle in efficient global optimization. In: 12th AIAA Aviation Technology, Integration, and Operations (ATIO) Conference (2012)

  21. Chaudhuri, A., Haftka, R.T.: A stopping criterion for surrogate based optimization using ego. In: 10th World Congress on Structural and Multidisciplinary Optimization (2013)

  22. Chevalier, C., Ginsbourger, D.: Fast computation of the multi-points expected improvement with applications in batch selection. In: Nicosia, G., Pardalos, P. (eds.) Learning and Intelligent Optimization. Lecture Notes in Computer Science, vol. 7997, pp. 59–69. Springer, Berlin (2013)

    Google Scholar 

  23. Couckuyt, I., Deschrijver, D., Dhaene, T.: Fast calculation of multiobjective probability of improvement and expected improvement criteria for pareto optimization. J. Global Optim. 60(3), 575–594 (2014)

    MathSciNet  MATH  Google Scholar 

  24. Cox, D.D., John, S.: SDO: a statistical method for global optimization. In: Alexandrov, N., Hussaini, M. (eds.) Multidisciplinary Design Optimization: State of the Art, pp. 315–329. SIAM, Philadelphia (1997)

    Google Scholar 

  25. Cressie, N.: Statistics for Spatial Data. Wiley, Hoboken (1993)

    MATH  Google Scholar 

  26. Deng, X., Lin, C.D., Liu, K.W., Rowe, R.K.: Additive gaussian process for computer models with qualitative and quantitative factors. Technometrics 59(3), 283–292 (2017)

    MathSciNet  Google Scholar 

  27. Durantin, C., Marzat, J., Balesdent, M.: Analysis of multi-objective kriging-based methods for constrained global optimization. Comput. Optim. Appl. 63(3), 903–926 (2016)

    MathSciNet  MATH  Google Scholar 

  28. Emmerich, M., Beume, N., Naujoks, B.: An EMO algorithm using the hypervolume measure as selection criterion. In: Coello Coello, C.A., Hernández Aguirre, A., Zitzler, E. (eds.) Evolutionary Multi-criterion Optimization. Lecture Notes in Computer Science, pp. 62–76. Springer, Berlin (2005)

    MATH  Google Scholar 

  29. Emmerich, M., Yang, K., Deutz, A., Wang, H., Fonseca, C.M.: A multicriteria generalization of bayesian global optimization. In: Pardalos, P.M., Zhigljavsky, A., Žilinskas, J. (eds.) Advances in Stochastic and Deterministic Global Optimization, pp. 229–242. Springer, Cham (2016)

    MATH  Google Scholar 

  30. Emmerich, M.T., Deutz, A.H., Klinkenberg, J.W.: Hypervolume-based expected improvement: Monotonicity properties and exact computation. In: IEEE Congress on Evolutionary Computation, pp. 2147–2154 (2011)

  31. Emmerich, M.T.M., Giannakoglou, K.C., Naujoks, B.: Single- and multiobjective evolutionary optimization assisted by gaussian random field metamodels. IEEE Trans. Evol. Comput. 10(4), 421–439 (2006)

    Google Scholar 

  32. Eriksson, D., Pearce, M., Gardner, J., Turner, R.D., Poloczek, M.: Scalable global optimization via local bayesian optimization. In: Advances in Neural Information Processing Systems, pp. 5497–5508 (2019)

  33. Fang, K.T., Li, R., Sudjianto, A.: Design and Modeling for Computer Experiments. Chapman and Hall/CRC, London (2005)

    MATH  Google Scholar 

  34. Feliot, P., Bect, J., Vazquez, E.: A bayesian approach to constrained single- and multi-objective optimization. J. Global Optim. 67(1), 97–133 (2017)

    MathSciNet  MATH  Google Scholar 

  35. Feng, Z.W., Zhang, Q.B., Zhang, Q.F., Tang, Q.G., Yang, T., Ma, Y.: A multiobjective optimization based framework to balance the global exploration and local exploitation in expensive optimization. J. Global Optim. 61(4), 677–694 (2015)

    MathSciNet  MATH  Google Scholar 

  36. Forrester, A.I.J., Keane, A.J.: Recent advances in surrogate-based optimization. Prog. Aerosp. Sci. 45(1), 50–79 (2009)

    Google Scholar 

  37. Forrester, A.I.J., Keane, A.J., Bressloff, N.W.: Design and analysis of “noisy” computer experiments. AIAA J. 44(10), 2331 (2006)

    Google Scholar 

  38. Forrester, A.I.J., Sóbester, A., Keane, A.J.: Multi-fidelity optimization via surrogate modelling. Proc. R. Soc. A 463(2088), 3251–3269 (2007)

    MathSciNet  MATH  Google Scholar 

  39. Forrester, A.I.J., Sóbester, A., Keane, A.J.: Engineering Design Via Surrogate Modelling: A Practical Guide. Wiley, Hoboken (2008)

    Google Scholar 

  40. Franey, M., Ranjan, P., Chipman, H.: Branch and bound algorithms for maximizing expected improvement functions. J. Stat. Plan. Inference 141(1), 42–55 (2011)

    MathSciNet  MATH  Google Scholar 

  41. Frazier, P., Powell, W., Dayanik, S.: The knowledge-gradient policy for correlated normal beliefs. INFORMS J. Comput. 21(4), 599–613 (2009)

    MathSciNet  MATH  Google Scholar 

  42. Frazier, P.I.: A tutorial on Bayesian optimization. arXiv (2018). arXiv: 1807.02811

  43. Frazier, P.I., Powell, W.B., Dayanik, S.: A knowledge-gradient policy for sequential information collection. SIAM J. Control Optim. 47(5), 2410–2439 (2008)

    MathSciNet  MATH  Google Scholar 

  44. Gardner, J.R., Kusner, M.J., Xu, Z.E., Weinberger, K.Q., Cunningham, J.P.: Bayesian optimization with inequality constraints. In: Proceedings of the 31st International Conference on Machine Learning, pp. 937–945 (2014)

  45. Garnett, R., Osborne, M.A., Hennig, P.: Active learning of linear embeddings for Gaussian processes. In: Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, pp. 230–239 (2014)

  46. Gelbart, M.A., Snoek, J., Adams, R.P.: Bayesian optimization with unknown constraints. In: Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, pp. 250–259 (2014)

  47. Ginsbourger, D., Baccou, J., Chevalier, C., Perales, F., Garland, N., Monerie, Y.: Bayesian adaptive reconstruction of profile optima and optimizers. SIAM/ASA J. Uncertain. Quant. 2(1), 490–510 (2014)

    MathSciNet  MATH  Google Scholar 

  48. Ginsbourger, D., Helbert, C., Carraro, L.: Discrete mixtures of kernels for kriging-based optimization. Qual. Reliab. Eng. Int. 24(6), 681–691 (2008)

    Google Scholar 

  49. Ginsbourger, D., Le Riche, R., Carraro, L.: Kriging is well-suited to parallelize optimization. In: Tenne, Y., Goh, C.K. (eds.) Computational Intelligence in Expensive Optimization Problems, Adaptation Learning and Optimization, chap. 6, vol. 2, pp. 131–162. Springer, Berlin (2010)

    Google Scholar 

  50. Ginsbourger, D., Rosspopoff, B., Pirot, G., Durrande, N., Renard, P.: Distance-based kriging relying on proxy simulations for inverse conditioning. Adv. Water Resour. 52, 275–291 (2013)

    Google Scholar 

  51. Gneiting, T.: Compactly supported correlation functions. J. Multiva. Anal. 83(2), 493–508 (2002)

    MathSciNet  MATH  Google Scholar 

  52. Gonzalez, J., Dai, Z., Hennig, P., Lawrence, N.: Batch bayesian optimization via local penalization. In: International Conference on Artificial Intelligence and Statistics, pp. 648–657 (2016)

  53. Gramacy, R.B., Gray, G.A., Le Digabel, S., Lee, H.K.H., Ranjan, P., Wells, G., Wild, S.M.: Modeling an augmented lagrangian for blackbox constrained optimization. Technometrics 58(1), 1–11 (2016)

    MathSciNet  Google Scholar 

  54. Grobler, C., Kok, S., Wilke, D.N.: Simple intuitive multi-objective parallelization of efficient global optimization: SIMPLE-EGO. In: Schumacher, A., Vietor, T., Fiebig, S., Bletzinger, K.U., Maute, K. (eds.) Advances in Structural and Multidisciplinary Optimization: Proceedings of the 12th World Congress of Structural and Multidisciplinary Optimization, pp. 205–220. Springer, Cham (2018)

    Google Scholar 

  55. Gutmann, H.M.: A radial basis function method for global optimization. J. Global Optim. 19(3), 201–227 (2001)

    MathSciNet  MATH  Google Scholar 

  56. Haftka, R.T., Villanueva, D., Chaudhuri, A.: Parallel surrogate-assisted global optimization with expensive functions—a survey. Struct. Multidiscip. Optim. 54(1), 3–13 (2016)

    MathSciNet  Google Scholar 

  57. Hamza, K., Shalaby, M.: A framework for parallelized efficient global optimization with application to vehicle crashworthiness optimization. Eng. Optim. 46(9), 1200–1221 (2014)

    Google Scholar 

  58. Han, Z.H., Görtz, S.: Hierarchical kriging model for variable-fidelity surrogate modeling. AIAA J. 50(9), 1885–1896 (2012)

    Google Scholar 

  59. He, X., Tuo, R., Wu, C.F.J.: Optimization of multi-fidelity computer experiments via the eqie criterion. Technometrics 59(1), 58–68 (2017)

    MathSciNet  Google Scholar 

  60. Henkenjohann, N., Kunert, J.: An efficient sequential optimization approach based on the multivariate expected improvement criterion. Qual. Eng. 19(4), 267–280 (2007)

    Google Scholar 

  61. Hernández-Lobato, J.M., Hoffman, M.W., Ghahramani, Z.: Predictive entropy search for efficient global optimization of black-box functions. In: Neural Information Processing Systems, pp. 918–926 (2014)

  62. Holmström, K.: An adaptive radial basis algorithm (ARBF) for expensive black-box global optimization. J. Global Optim. 41(3), 447–464 (2008)

    MathSciNet  MATH  Google Scholar 

  63. Horn, D., Wagner, T., Biermann, D., Weihs, C., Bischl, B.: Model-based multi-objective optimization: taxonomy, multi-point proposal, toolbox and benchmark. In: Gaspar-Cunha, A., Henggeler Antunes, C., Coello, C.C. (eds.) Evolutionary Multi-criterion Optimization. Lecture Notes in Computer Science, chap. 5, vol. 9018, pp. 64–78. Springer, Berlin (2015)

    Google Scholar 

  64. Horowitz, B., Guimaraes, L.J.D., Dantas, V., Afonso, S.M.B.: A concurrent efficient global optimization algorithm applied to polymer injection strategies. J. Petrol. Sci. Eng. 71(3–4), 195–204 (2010)

    Google Scholar 

  65. Hu, W., Li, G.: Min-Median-Max metamodel-based unconstrained nonlinear optimization problems. Struct. Multidiscip. Optim. 45(3), 401–415 (2012)

    MathSciNet  MATH  Google Scholar 

  66. Huang, C., Li, Y., Yao, X.: A survey of automatic parameter tuning methods for metaheuristics. IEEE Trans. Evol. Comput. 24(2), 201–216 (2020)

    Google Scholar 

  67. Huang, D., Allen, T.T., Notz, W.I., Miller, R.A.: Sequential kriging optimization using multiple-fidelity evaluations. Struct. Multidiscip. Optim. 32(5), 369–382 (2006)

    Google Scholar 

  68. Huang, D., Allen, T.T., Notz, W.I., Zeng, N.: Global optimization of stochastic black-box systems via sequential kriging meta-models. J. Global Optim. 34(3), 441–466 (2006)

    MathSciNet  MATH  Google Scholar 

  69. Hupkens, I., Deutz, A., Yang, K., Emmerich, M.: Faster exact algorithms for computing expected hypervolume improvement. In: Gaspar-Cunha, A., Henggeler Antunes, C., Coello, C.C. (eds.) Evolutionary Multi-Criterion Optimization. Lecture Notes in Computer Science, chap. 5, vol. 9019, pp. 65–79. Springer, Berlin (2015)

    Google Scholar 

  70. Hutter, F., Hoos, H.H., Leyton-Brown, K.: Sequential model-based optimization for general algorithm configuration. In: Learning and Intelligent Optimization. Lecture Notes in Computer Science, pp. 507–523. Springer, Berlin (2011)

  71. Hutter, F., Hoos, H.H., Leyton-Brown, K., Murphy, K.: Time-bounded sequential parameter optimization. In: Learning and Intelligent Optimization. Lecture Notes in Computer Science, pp. 281–298. Springer, Berlin (2010)

  72. Hutter, F., Hoos, H.H., Leyton-Brown, K., Murphy, K.P.: An experimental investigation of model-based parameter optimisation: SPO and beyond. In: 11th Annual conference on Genetic and Evolutionary Computation, pp. 271–278 (2009)

  73. Jalali, H., Van Nieuwenhuyse, I., Picheny, V.: Comparison of kriging-based algorithms for simulation optimization with heterogeneous noise. Eur. J. Oper. Res. 261(1), 279–301 (2017)

    MathSciNet  MATH  Google Scholar 

  74. Janusevskis, J., Le Riche, R., Ginsbourger, D., Girdziusas, R.: Expected improvements for the asynchronous parallel global optimization of expensive functions: potentials and challenges. In: Hamadi, Y., Schoenauer, M. (eds.) Learning and Intelligent Optimization. Lecture Notes in Computer Science, chap. 37, vol. 7219, pp. 413–418. Springer, Berlin (2012)

    Google Scholar 

  75. Jie, H.X., Wu, Y.Z., Ding, J.W.: An adaptive metamodel-based global optimization algorithm for black-box type problems. Eng. Optim. 47(11), 1459–1480 (2015)

    MathSciNet  Google Scholar 

  76. Jones, D.R.: A taxonomy of global optimization methods based on response surfaces. J. Global Optim. 21(4), 345–383 (2001)

    MathSciNet  MATH  Google Scholar 

  77. Jones, D.R., Schonlau, M., Welch, W.J.: Efficient global optimization of expensive black-box functions. J. Global Optim. 13(4), 455–492 (1998)

    MathSciNet  MATH  Google Scholar 

  78. Kanazaki, M., Tanaka, K., Jeong, S., Yamamoto, K.: Multi-objective aerodynamic optimization of elements’ setting for high-lift airfoil using Kriging model. In: 44th AIAA Aerospace Sciences Meeting, vol. 23, pp. 17627–17637. American Institute of Aeronautics and Astronautics Inc. (2006)

  79. Keane, A.J.: Statistical improvement criteria for use in multiobjective design optimization. AIAA J. 44(4), 879–891 (2006)

    Google Scholar 

  80. Kennedy, M.C., O‘Hagan, A.: Predicting the output from a complex computer code when fast approximations are available. Biometrika 87(1), 1–13 (2000)

    MathSciNet  MATH  Google Scholar 

  81. Kleijnen, J.P.C.: Kriging metamodeling in simulation: a review. Eur. J. Oper. Res. 192(3), 707–716 (2009)

    MathSciNet  MATH  Google Scholar 

  82. Kleijnen, J.P.C., van Beers, W., van Nieuwenhuyse, I.: Expected improvement in efficient global optimization through bootstrapped kriging. J. Global Optim. 54(1), 59–73 (2012)

    MathSciNet  MATH  Google Scholar 

  83. Knowles, J.: ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. IEEE Trans. Evol. Comput. 10(1), 50–66 (2006)

    Google Scholar 

  84. Koch, P., Wagner, T., Emmerich, M.T.M., Back, T., Konen, W.: Efficient multi-criteria optimization on noisy machine learning problems. Appl. Soft Comput. 29, 357–370 (2015)

    Google Scholar 

  85. Koullias, S., Mavris, D.N.: Methodology for global optimization of computationally expensive design problems. J. Mech. Des. 136(8) (2014)

  86. Krityakierne, T., Ginsbourger, D.: Global optimization with sparse and local Gaussian process models. In: International Workshop on Machine Learning, Optimization and Big Data, pp. 185–196 (2015)

  87. Łaniewski-Wołłk, L., Obayashi, S., Jeong, S.: Development of expected improvement for multi-objective problem. In: Proceedings of 42nd Fluid Dynamics Conference/Aerospace Numerical Simulation Symposium (2010)

  88. Leary, S.J., Bhaskar, A., Keane, A.J.: A knowledge-based approach to response surface modelling in multifidelity optimization. J. Global Optim. 26(3), 297–319 (2003)

    MathSciNet  MATH  Google Scholar 

  89. Li, C., Gupta, S., Rana, S., Nguyen, T.V., Venkatesh, S., Shilton, A.: High dimensional Bayesian optimization using dropout. In: Proceedings of the 26th International Joint Conference on Artificial Intelligence, pp. 2096–2102 (2017)

  90. Li, Z., Ruan, S., Gu, J., Wang, X., Shen, C.: Investigation on parallel algorithms in efficient global optimization based on multiple points infill criterion and domain decomposition. Struct. Multidiscip. Optim. 54(4), 747–773 (2016)

    MathSciNet  Google Scholar 

  91. Li, Z., Wang, X.: A black box method for gate location optimization in plastic injection molding. Adv. Polym. Technol. 32(S1), E793–E808 (2013)

    Google Scholar 

  92. Li, Z., Wang, X., Ruan, S., Li, Z., Shen, C., Zeng, Y.: A modified hypervolume based expected improvement for multi-objective efficient global optimization method. Struct. Multidiscip. Optim. 58(5), 1961–1979 (2018)

    MathSciNet  Google Scholar 

  93. Liu, J., Song, W.P., Han, Z.H., Zhang, Y.: Efficient aerodynamic shape optimization of transonic wings using a parallel infilling strategy and surrogate models. Struct. Multidiscip. Optim. 55(3), 925–943 (2017)

    Google Scholar 

  94. Liu, Y., Chen, S., Wang, F., Xiong, F.: Sequential optimization using multi-level cokriging and extended expected improvement criterion. Struct. Multidiscip. Optim. 58(3), 1155–1173 (2018)

    MathSciNet  Google Scholar 

  95. Lizotte, D., Wang, T., Bowling, M., Schuurmans, D.: Automatic gait optimization with Gaussian process regression. In: Proceedings of the 20th International Joint Conference on Artifical Intelligence, pp. 944–949 (2007)

  96. Luo, C., Shimoyama, K., Obayashi, S.: Kriging model based many-objective optimization with efficient calculation of expected hypervolume improvement. In: IEEE Congress on Evolutionary Computation, pp. 1187–1194 (2014)

  97. Lyu, W., Yang, F., Yan, C., Zhou, D., Zeng, X.: Batch Bayesian optimization via multi-objective acquisition ensemble for automated analog circuit design. In: International Conference on Machine Learning, pp. 3312–3320 (2018)

  98. Marmin, S., Chevalier, C., Ginsbourger, D.: Differentiating the multipoint expected improvement for optimal batch design. In: Pardalos, P., Pavone, M., Farinella, G.M., Cutello, V. (eds.) Machine Learning, Optimization, and Big Data. Lecture Notes in Computer Science, vol. 9432, pp. 37–48. Springer, Berlin (2015)

    Google Scholar 

  99. Martínez-Frutos, J., Herrero-Pérez, D.: Kriging-based infill sampling criterion for constraint handling in multi-objective optimization. J. Global Optim. 64(1), 97–115 (2016)

    MathSciNet  MATH  Google Scholar 

  100. Marzat, J., Walter, E., Piet-Lahanier, H.: A new expected-improvement algorithm for continuous minimax optimization. J. Global Optim. 64(4), 785–802 (2016)

    MathSciNet  MATH  Google Scholar 

  101. Mockus, J., Tiesis, V., Zilinskas, A.: The application of Bayesian methods for seeking the extremum. In: Dixon, L.C.W., Szego, G.P. (eds.) Towards Global Optimization, vol. 2, pp. 117–129. North Holland, Amsterdam (1978)

    Google Scholar 

  102. Müller, J., Shoemaker, C.A.: Influence of ensemble surrogate models and sampling strategy on the solution quality of algorithms for computationally expensive black-box global optimization problems. J. Global Optim. 60(2), 123–144 (2014)

    MathSciNet  MATH  Google Scholar 

  103. Namura, N., Shimoyama, K., Obayashi, S.: Expected improvement of penalty-based boundary intersection for expensive multiobjective optimization. IEEE Trans. Evol. Comput. 21(6), 898–913 (2017)

    Google Scholar 

  104. Nayebi, A., Munteanu, A., Poloczek, M.: A framework for Bayesian optimization in embedded subspaces. In: International Conference on Machine Learning, pp. 4752–4761 (2019)

  105. Oh, C., Gavves, E., Welling, M.: Bock: Bayesian optimization with cylindrical kernels. In: International Conference on Machine Learning, pp. 3868–3877 (2018)

  106. Park, C., Haftka, R.T., Kim, N.H.: Remarks on multi-fidelity surrogates. Struct. Multidiscip. Optim. 55(3), 1029–1050 (2017)

    MathSciNet  Google Scholar 

  107. Parr, J.: Improvement criteria for constraint handling and multiobjective optimization. Ph.D. thesis, University of Southampton (2013)

  108. Parr, J., Holden, C.M., Forrester, A.I., Keane, A.J.: Review of efficient surrogate infill sampling criteria with constraint handling. In: 2nd International Conference on Engineering Optimization (2010)

  109. Parr, J.M., Keane, A.J., Forrester, A.I.J., Holden, C.M.E.: Infill sampling criteria for surrogate-based optimization with constraint handling. Eng. Optim. 44(10), 1147–1166 (2012)

    MATH  Google Scholar 

  110. Pelamatti, J., Brevault, L., Balesdent, M., Talbi, E.G., Guerin, Y.: Efficient global optimization of constrained mixed variable problems. J. Global Optim. 73(3), 583–613 (2019)

    MathSciNet  MATH  Google Scholar 

  111. Perdikaris, P., Karniadakis, G.E.: Model inversion via multi-fidelity bayesian optimization: a new paradigm for parameter estimation in haemodynamics, and beyond. J. R. Soc. Interface 13(118), 20151107 (2016)

    Google Scholar 

  112. Picheny, V., Ginsbourger, D., Richet, Y., Caplin, G.: Quantile-based optimization of noisy computer experiments with tunable precision. Technometrics 55(1), 2–13 (2013)

    MathSciNet  Google Scholar 

  113. Picheny, V., Wagner, T., Ginsbourger, D.: A benchmark of kriging-based infill criteria for noisy optimization. Struct. Multidiscip. Optim. 48(3), 607–626 (2013)

    Google Scholar 

  114. Ponweiser, W., Wagner, T., Biermann, D., Vincze, M.: Multiobjective optimization on a limited budget of evaluations using model-assisted s-metric selection. In: Rudolph, G., Jansen, T., Beume, N., Lucas, S., Poloni, C. (eds.) Parallel Problem Solving from Nature—PPSN X. Lecture Notes in Computer Science, chap. 78, vol. 5199, pp. 784–794. Springer, Berlin (2008)

    Google Scholar 

  115. Qian, H., Hu, Y.Q., Yu, Y.: Derivative-free optimization of high-dimensional non-convex functions by sequential random embeddings. In: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pp. 1946–1952. AAAI Press (2016)

  116. Qian, P.Z.G., Wu, C.F.J.: Bayesian hierarchical modeling for integrating low-accuracy and high-accuracy experiments. Technometrics 50(2), 192–204 (2008)

    MathSciNet  Google Scholar 

  117. Qian, P.Z.G., Wu, H.Q., Wu, C.F.J.: Gaussian process models for computer experiments with qualitative and quantitative factors. Technometrics 50(3), 383–396 (2008)

    MathSciNet  Google Scholar 

  118. Qin, C., Klabjan, D., Russo, D.: Improving the expected improvement algorithm. In: Neural Information Processing Systems, pp. 5381–5391 (2017)

  119. Queipo, N.V., Haftka, R.T., Shyy, W., Goel, T., Vaidyanathan, R., Kevin Tucker, P.: Surrogate-based analysis and optimization. Prog. Aerosp. Sci. 41(1), 1–28 (2005)

    MATH  Google Scholar 

  120. Rasmussen, C.E., Williams, C.K.: Gaussian Processes for Machine Learning. The MIT Press, Cambridge (2006)

    MATH  Google Scholar 

  121. Regis, R.G.: Trust regions in kriging-based optimization with expected improvement. Eng. Optim. 48, 1–23 (2015)

    MathSciNet  Google Scholar 

  122. Regis, R.G., Shoemaker, C.A.: Constrained global optimization of expensive black box functions using radial basis functions. J. Global Optim. 31(1), 153–171 (2005)

    MathSciNet  MATH  Google Scholar 

  123. Reisenthel, P.H., Allen, T.T.: Application of multifidelity expected improvement algorithms to aeroelastic design optimization. In: 10th AIAA Multidisciplinary Design Optimization Conference (2014)

  124. Ryzhov, I.O.: On the convergence rates of expected improvement methods. Oper. Res. 64(6), 1515–1528 (2016)

    MathSciNet  MATH  Google Scholar 

  125. Sacks, J., Welch, W.J., Mitchell, T.J., Wynn, H.P.: Design and analysis of computer experiments. Stat. Sci. 4(4), 409–423 (1989)

    MathSciNet  MATH  Google Scholar 

  126. Santner, T.J., Williams, B.J., Notz, W.: The Design and Analysis of Computer Experiments. Springer, Berlin (2018)

    MATH  Google Scholar 

  127. Sasena, M.J.: Flexibility and efficiency enhancements for constrained global design optimization with Kriging approximations. Ph.D. thesis, University of Michigan (2002)

  128. Sasena, M.J., Papalambros, P., Goovaerts, P.: Exploration of metamodeling sampling criteria for constrained global optimization. Eng. Optim. 34(3), 263–278 (2002)

    Google Scholar 

  129. Sasena, M.J., Papalambros, P.Y., Goovaerts, P.: The use of surrogate modeling algorithms to exploit disparities in function computation time within simulation-based optimization. In: The 4th World Congress of Structural and Multidisciplinary Optimization. Citeseer (2001)

  130. Schonlau, M.: Computer experiments and global optimization. Ph.D. thesis, University of Waterloo (1997)

  131. Shahriari, B., Swersky, K., Wang, Z., Adams, R.P., Freitas, Nd: Taking the human out of the loop: a review of Bayesian optimization. Proc. IEEE 104(1), 148–175 (2016)

    Google Scholar 

  132. Shimoyama, K., Jeong, S., Obayashi, S.: Kriging-surrogate-based optimization considering expected hypervolume improvement in non-constrained many-objective test problems. In: IEEE Congress on Evolutionary Computation, pp. 658–665 (2013)

  133. Shimoyama, K., Sato, K., Jeong, S., Obayashi, S.: Comparison of the criteria for updating kriging response surface models in multi-objective optimization. In: IEEE Congress on Evolutionary Computation, pp. 1–8 (2012)

  134. Shinkyu, J., Obayashi, S.: Efficient global optimization (EGO) for multi-objective problem and data mining. In: 2005 IEEE Congress on Evolutionary Computation, pp. 2138–2145 (2005)

  135. Simpson, T.W., Booker, A.J., Ghosh, D., Giunta, A.A., Koch, P.N., Yang, R.J.: Approximation methods in multidisciplinary analysis and optimization: a panel discussion. Struct. Multidiscip. Optim. 27(5), 302–313 (2004)

    Google Scholar 

  136. Snoek, J., Larochelle, H., Adams, R.P.: Practical Bayesian optimization of machine learning algorithms. In: Neural Information Processing Systems, pp. 2951–2959 (2012)

  137. Sóbester, A., Leary, S.J., Keane, A.J.: A parallel updating scheme for approximating and optimizing high fidelity computer simulations. Struct. Multidiscip. Optim. 27(5), 371–383 (2004)

    Google Scholar 

  138. Sóbester, A., Leary, S.J., Keane, A.J.: On the design of optimization strategies based on global response surface approximation models. J. Global Optim. 33(1), 31–59 (2005)

    MathSciNet  MATH  Google Scholar 

  139. Springenberg, J.T., Klein, A., Falkner, S., Hutter, F.: Bayesian optimization with robust bayesian neural networks. In: Neural Information Processing Systems, pp. 4134–4142 (2016)

  140. Suprayitno, Yu J.C.: Evolutionary reliable regional kriging surrogate for expensive optimization. Eng. Optim. 51(2), 247–264 (2018)

  141. Svenson, J., Santner, T.: Multiobjective optimization of expensive-to-evaluate deterministic computer simulator models. Comput. Stat. Data Anal. 94, 250–264 (2016)

  142. Svenson, J.D.: Computer experiments: multiobjective optimization and sensitivity analysis. Ph.D. thesis, The Ohio State University (2011)

  143. Tuo, R., Wu, C.F.J., Yu, D.: Surrogate modeling of computer experiments with different mesh densities. Technometrics 56(3), 372–380 (2014)

  144. Tutum, C.C., Deb, K., Baran, I.: Constrained efficient global optimization for pultrusion process. Mater. Manuf. Processes 30(4), 538–551 (2015)

  145. Ulmasov, D., Baroukh, C., Chachuat, B., Deisenroth, M.P., Misener, R.: Bayesian optimization with dimension scheduling: application to biological systems. In: Kravanja, Z., Bogataj, M. (eds.) Computer Aided Chemical Engineering, vol. 38, pp. 1051–1056. Elsevier, Amsterdam (2016)

  146. Vazquez, E., Bect, J.: Convergence properties of the expected improvement algorithm with fixed mean and covariance functions. J. Stat. Plan. Inference 140(11), 3088–3095 (2010)

  147. Vazquez, E., Villemonteix, J., Sidorkiewicz, M., Walter, E.: Global optimization based on noisy evaluations: an empirical study of two statistical approaches. J. Phys. Conf. Ser. 135, 012100 (2008)

  148. Venturelli, G., Benini, E., Łaniewski-Wołłk, L.: A kriging-assisted multiobjective evolutionary algorithm. Appl. Soft Comput. 58, 155–175 (2017)

  149. Viana, F.A.C., Haftka, R.T., Watson, L.T.: Efficient global optimization algorithm assisted by multiple surrogate techniques. J. Global Optim. 56(2), 669–689 (2013)

  150. Viana, F.A.C., Simpson, T.W., Balabanov, V., Toropov, V.: Metamodeling in multidisciplinary design optimization: how far have we really come? AIAA J. 52(4), 670–690 (2014)

  151. Villarreal-Marroquín, M.G., Svenson, J.D., Sun, F., Santner, T.J., Dean, A., Castro, J.M.: A comparison of two metamodel-based methodologies for multiple criteria simulation optimization using an injection molding case study. J. Polym. Eng. 33(3), 193–209 (2013)

  152. Villemonteix, J., Vazquez, E., Sidorkiewicz, M., Walter, E.: Global optimization of expensive-to-evaluate functions: an empirical comparison of two sampling criteria. J. Global Optim. 43(2–3), 373–389 (2009)

  153. Villemonteix, J., Vazquez, E., Walter, E.: An informational approach to the global optimization of expensive-to-evaluate functions. J. Global Optim. 44(4), 509–534 (2009)

  154. Wagner, T., Emmerich, M., Deutz, A., Ponweiser, W.: On expected-improvement criteria for model-based multi-objective optimization. In: Schaefer, R., Cotta, C., Kołodziej, J., Rudolph, G. (eds.) Parallel Problem Solving from Nature, PPSN XI. Lecture Notes in Computer Science, chap. 72, vol. 6238, pp. 718–727. Springer, Berlin (2010)

  155. Wang, G.G., Shan, S.: Review of metamodeling techniques in support of engineering design optimization. J. Mech. Des. 129(2), 370–380 (2007)

  156. Wang, H., Ye, F., Li, E., Li, G.: A comparative study of expected improvement-assisted global optimization with different surrogates. Eng. Optim. 48(8), 1432–1458 (2016)

  157. Wang, Y., Han, Z.H., Zhang, Y., Song, W.P.: Efficient global optimization using multiple infill sampling criteria and surrogate models. In: 2018 AIAA Aerospace Sciences Meeting, AIAA SciTech Forum. American Institute of Aeronautics and Astronautics (2018). https://doi.org/10.2514/6.2018-0555

  158. Wang, Z., Hutter, F., Zoghi, M., Matheson, D., de Freitas, N.: Bayesian optimization in a billion dimensions via random embeddings. J. Artif. Intell. Res. 55, 361–387 (2016)

  159. Wang, Z., Li, C., Jegelka, S., Kohli, P.: Batched high-dimensional Bayesian optimization via structural kernel learning. In: Proceedings of the 34th International Conference on Machine Learning, pp. 3656–3664 (2017)

  160. Wang, Z., Zoghi, M., Hutter, F., Matheson, D., De Freitas, N.: Bayesian optimization in high dimensions via random embeddings. In: 23rd International Joint Conference on Artificial Intelligence (2013)

  161. Williams, B.J., Santner, T.J., Notz, W.I.: Sequential design of computer experiments to minimize integrated response functions. Stat. Sin. 10(4), 1133–1152 (2000)

  162. Xiao, M., Zhang, G., Breitkopf, P., Villon, P., Zhang, W.: Extended co-kriging interpolation method based on multi-fidelity data. Appl. Math. Comput. 323, 120–131 (2018)

  163. Xu, S., Chen, H.: Nash game based efficient global optimization for large-scale design problems. J. Global Optim. 71(2), 361–381 (2018)

  164. Xu, S., Chen, H., Zhang, J.: A study of Nash-EGO algorithm for aerodynamic shape design optimizations. Struct. Multidiscip. Optim. 59(4), 1241–1254 (2019)

  165. Yang, K., Deutz, A., Yang, Z., Back, T., Emmerich, M.: Truncated expected hypervolume improvement: exact computation and application. In: IEEE Congress on Evolutionary Computation, pp. 4350–4357 (2016)

  166. Yang, K., Emmerich, M., Deutz, A., Fonseca, C.M.: Computing 3-D expected hypervolume improvement and related integrals in asymptotically optimal time. In: Evolutionary Multi-Criterion Optimization. Lecture Notes in Computer Science, pp. 685–700. Springer, Cham (2017)

  167. Yang, K., Gaida, D., Back, T., Emmerich, M.: Expected hypervolume improvement algorithm for PID controller tuning and the multiobjective dynamical control of a biogas plant. In: IEEE Congress on Evolutionary Computation, pp. 1934–1942 (2015)

  168. Yang, Z., Qiu, H., Gao, L., Jiang, C., Zhang, J.: Two-layer adaptive surrogate-assisted evolutionary algorithm for high-dimensional computationally expensive problems. J. Global Optim. 74(2), 327–359 (2019)

  169. Yarotsky, D.: Examples of inconsistency in optimization by expected improvement. J. Global Optim. 56(4), 1773–1790 (2013)

  170. Yuan, B., Liu, L., Long, T., Shi, R.: Efficient global optimization strategy considering expensive constraints. In: Schumacher, A., Vietor, T., Fiebig, S., Bletzinger, K.U., Maute, K. (eds.) Advances in Structural and Multidisciplinary Optimization: Proceedings of the 12th World Congress of Structural and Multidisciplinary Optimization (WCSMO12), pp. 133–142. Springer, Cham (2018)

  171. Yuan, Y., Xu, H., Wang, B., Yao, X.: A new dominance relation-based evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 20(1), 16–37 (2016)

  172. Zhan, D., Cheng, Y., Liu, J.: Expected improvement matrix-based infill criteria for expensive multiobjective optimization. IEEE Trans. Evol. Comput. 21(6), 956–975 (2017)

  173. Zhan, D., Qian, J., Cheng, Y.: Balancing global and local search in parallel efficient global optimization algorithms. J. Global Optim. 67(4), 873–892 (2017)

  174. Zhan, D., Qian, J., Cheng, Y.: Pseudo expected improvement criterion for parallel EGO algorithm. J. Global Optim. 68(3), 641–662 (2017)

  175. Zhang, Q., Li, H.: MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 11(6), 712–731 (2007)

  176. Zhang, Q., Liu, W., Tsang, E., Virginas, B.: Expensive multiobjective optimization by MOEA/D with Gaussian process model. IEEE Trans. Evol. Comput. 14(3), 456–474 (2010)

  177. Zhang, S., Lyu, W., Yang, F., Yan, C., Zhou, D., Zeng, X., Hu, X.: An efficient multi-fidelity Bayesian optimization approach for analog circuit synthesis. In: Proceedings of the 56th Annual Design Automation Conference (2019)

  178. Zhang, Y., Han, Z.H., Zhang, K.S.: Variable-fidelity expected improvement method for efficient global optimization of expensive functions. Struct. Multidiscip. Optim. 58(4), 1431–1451 (2018)

  179. Zhou, Q., Qian, P.Z.G., Zhou, S.Y.: A simple approach to emulation for computer models with qualitative and quantitative factors. Technometrics 53(3), 266–273 (2011)

  180. Zhu, C., Xu, L., Goodman, E.D.: Generalization of Pareto-optimality for many-objective evolutionary optimization. IEEE Trans. Evol. Comput. 20(2), 299–315 (2016)

  181. Zuhal, L.R., Palar, P.S., Shimoyama, K.: A comparative study of multi-objective expected improvement for aerodynamic design. Aerosp. Sci. Technol. 91, 548–560 (2019)

Acknowledgements

We would like to thank the anonymous reviewers for their helpful comments.

Author information

Corresponding author

Correspondence to Dawei Zhan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Zhan, D., Xing, H. Expected improvement for expensive optimization: a review. J Glob Optim 78, 507–544 (2020). https://doi.org/10.1007/s10898-020-00923-x
