Expected Improvements for the Asynchronous Parallel Global Optimization of Expensive Functions: Potentials and Challenges

  • Janis Janusevskis
  • Rodolphe Le Riche
  • David Ginsbourger
  • Ramunas Girdziusas
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7219)


Sequential sampling strategies based on Gaussian processes are now widely used for the optimization of problems involving costly simulations. But Gaussian processes can also generate parallel optimization strategies. We focus here on a new, parameter-free, parallel expected improvement criterion for asynchronous optimization. An estimation of the criterion, which mixes Monte Carlo sampling and analytical bounds, is proposed. Logarithmic speed-ups are measured on 1- and 9-dimensional functions.
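The criterion summarized above builds on the classical one-point expected improvement and estimates its multi-point (parallel) generalization by Monte Carlo sampling of the joint Gaussian posterior. The sketch below illustrates both ingredients in a minimal, self-contained way; it is not the authors' implementation, and the function names, the lower-triangular Cholesky-factor parameterization, and the sample counts are illustrative assumptions.

```python
import math
import random

def norm_pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, f_min):
    """One-point EI under a Gaussian posterior N(mu, sigma^2),
    for minimization with current best observed value f_min."""
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)
    u = (f_min - mu) / sigma
    return (f_min - mu) * norm_cdf(u) + sigma * norm_pdf(u)

def mc_multipoint_ei(mus, chol, f_min, n_samples=20000, seed=0):
    """Monte Carlo estimate of a multi-point ("q-EI") criterion,
    E[max(0, f_min - min_i Y_i)], where Y ~ N(mus, L L^T) and
    `chol` is the lower-triangular factor L given row by row."""
    rng = random.Random(seed)
    q = len(mus)
    total = 0.0
    for _ in range(n_samples):
        z = [rng.gauss(0.0, 1.0) for _ in range(q)]
        # Correlated posterior sample y = mu + L z at the q candidate points.
        y = [mus[i] + sum(chol[i][j] * z[j] for j in range(i + 1))
             for i in range(q)]
        total += max(0.0, f_min - min(y))
    return total / n_samples
```

As a sanity check, with a single candidate point the Monte Carlo estimate converges to the analytical one-point EI, e.g. `mc_multipoint_ei([0.0], [[1.0]], 0.0)` approaches `expected_improvement(0.0, 1.0, 0.0)`.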


Keywords: Gaussian process, kriging model, worker node, parallel optimization, expensive function





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Janis Janusevskis (1)
  • Rodolphe Le Riche (1, 2)
  • David Ginsbourger (3)
  • Ramunas Girdziusas (1)
  1. Ecole des Mines de Saint-Etienne, Saint-Etienne, France
  2. CNRS, UMR 5146 Cl. Goux, France
  3. Department of Mathematics and Statistics, University of Bern, Switzerland
