Pseudo expected improvement criterion for parallel EGO algorithm
The efficient global optimization (EGO) algorithm is well known for its efficiency in solving computationally expensive optimization problems. However, the expected improvement (EI) criterion used to select candidate points in the EGO process produces only one design point per optimization cycle, which wastes time when parallel computing is available. In this work, a new criterion called pseudo expected improvement (PEI) is proposed for developing parallel EGO algorithms. In each cycle, the first updating point is selected by the initial EI function. After that, the PEI function is built to approximate the real updated EI function by multiplying the initial EI function by an influence function of the updating point. The influence function is designed to simulate the impact that the updating point will have on the EI function, and depends only on the position of the updating point, not on its function value. Therefore, the next updating point can be identified by maximizing the PEI function without evaluating the first updating point. As the sequential process continues, a desired number of updating points can be selected by the PEI criterion within one optimization cycle. The efficiency of the proposed PEI criterion is validated on six benchmark problems with dimensions from 2 to 6. The results show that the proposed PEI algorithm performs significantly better than the standard EGO algorithm, and outperforms a state-of-the-art parallel EGO algorithm on five of the six test problems. Furthermore, additional experiments show that the convergence of the proposed algorithm is significantly degraded when the global maximum of the PEI function is not found; it is therefore recommended to use as many evaluations as one can afford to locate the global maximum of the PEI function.
Keywords: Efficient global optimization · Expected improvement · Parallel computing · Pseudo expected improvement · Influence function
We would like to thank the anonymous reviewers for their helpful comments.
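The batch-selection scheme described in the abstract can be sketched in a few lines. The sketch below is illustrative, not the paper's implementation: the EI landscape is a hand-made toy function standing in for the EI of a fitted kriging model, and the influence function is assumed to be one minus a Gaussian correlation with an assumed width `theta`, so that the pseudo EI is damped to zero near each already-selected updating point without evaluating it.

```python
import numpy as np

# Toy stand-in for the EI function of a fitted kriging model (assumed shape):
# a two-peaked landscape on [0, 1], purely illustrative.
def ei(x):
    return np.exp(-60 * (x - 0.2) ** 2) + 0.8 * np.exp(-60 * (x - 0.75) ** 2)

# Influence function of an updating point xi: one minus an assumed Gaussian
# correlation, so it is 0 at xi and approaches 1 far from xi.
def influence(x, xi, theta=50.0):
    return 1.0 - np.exp(-theta * (x - xi) ** 2)

def select_batch(q, grid):
    """Pick q updating points in one cycle by repeatedly maximizing the PEI."""
    pei = ei(grid)                         # cycle starts from the initial EI
    batch = []
    for _ in range(q):
        xi = grid[np.argmax(pei)]          # next updating point
        batch.append(xi)
        pei = pei * influence(grid, xi)    # damp EI around xi; no evaluation of xi needed
    return batch

grid = np.linspace(0.0, 1.0, 2001)
points = select_batch(3, grid)             # three points from one optimization cycle
```

Because the influence function vanishes at each selected point, the pseudo EI there drops to zero and the next maximization is pushed elsewhere, which is how one cycle yields several distinct points for parallel evaluation. The grid search here is only for brevity; as the abstract stresses, the PEI maximization in practice should use a global optimizer with as large an evaluation budget as affordable.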