Differentiating the Multipoint Expected Improvement for Optimal Batch Design

  • Sébastien Marmin
  • Clément Chevalier
  • David Ginsbourger
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9432)


This work deals with parallel optimization of expensive objective functions that are modelled as sample realizations of Gaussian processes. The study is formalized as a Bayesian optimization problem, or continuous multi-armed bandit problem, where a batch of \(q > 0\) arms is pulled in parallel at each iteration. Several algorithms have been developed for choosing batches by trading off exploitation and exploration. As of today, the maximum Expected Improvement (EI) and Upper Confidence Bound (UCB) selection rules appear as the most prominent approaches for batch selection. Here, we build upon recent work on the multipoint Expected Improvement criterion, for which an analytic expansion relying on Tallis' formula was recently established. As the computational burden of this selection rule remains an issue in applications, we derive a closed-form expression for the gradient of the multipoint Expected Improvement, which facilitates its maximization using gradient-based ascent algorithms. Substantial computational savings are demonstrated in applications. In addition, our algorithms are tested numerically and compared to state-of-the-art UCB-based batch-sequential algorithms. Combining starting designs relying on UCB with gradient-based EI local optimization finally appears as a sound option for batch design in distributed Gaussian process optimization.
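To fix ideas, the multipoint (batch) Expected Improvement the abstract refers to is \(\mathrm{EI}(x_1,\dots,x_q) = \mathbb{E}\big[\max(T - \min_i Y(x_i),\, 0)\big]\), where \(T\) is the current best observed value and \((Y(x_1),\dots,Y(x_q))\) follows the Gaussian posterior at the batch points. The sketch below estimates this quantity by plain Monte Carlo over the posterior; it is an illustrative baseline only, not the paper's closed-form expansion based on Tallis' formula (implemented in DiceOptim), and the function name `qEI_mc` is hypothetical.

```python
import numpy as np

def qEI_mc(mean, cov, best, n_samples=100_000, rng=None):
    """Monte Carlo estimate of the multipoint Expected Improvement (minimization).

    mean : posterior mean vector of the GP at the q batch points, shape (q,)
    cov  : posterior covariance matrix at those points, shape (q, q)
    best : current best (minimum) observed objective value T

    Illustrative helper, not part of DiceOptim.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Draw joint posterior samples of the GP at the q candidate points.
    y = rng.multivariate_normal(mean, cov, size=n_samples)  # shape (n_samples, q)
    # Improvement of the batch: how far the best of the q draws falls below T.
    improvement = np.maximum(best - y.min(axis=1), 0.0)
    return improvement.mean()
```

For a single point with standard-normal posterior and \(T = 0\), this recovers the classical one-point EI value \(\varphi(0) = 1/\sqrt{2\pi} \approx 0.399\). The Monte Carlo estimator is noisy and not differentiable sample-wise, which is precisely why a closed-form criterion and gradient, as derived in the paper, pay off when maximizing over batch designs.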


Keywords: Bayesian optimization · Batch-sequential design · GP · UCB



Part of this work has been conducted within the frame of the ReDice Consortium, gathering industrial (CEA, EDF, IFPEN, IRSN, Renault) and academic (École des Mines de Saint-Étienne, INRIA, and the University of Bern) partners around advanced methods for Computer Experiments.


References

  1. Auer, P., Cesa-Bianchi, N., Fischer, P.: Finite-time analysis of the multiarmed bandit problem. Mach. Learn. 47(2–3), 235–256 (2002)
  2. Azzalini, A., Genz, A.: The R package mnormt: the multivariate normal and \(t\) distributions (version 1.5-1) (2014)
  3. Bect, J., Ginsbourger, D., Li, L., Picheny, V., Vazquez, E.: Sequential design of computer experiments for the estimation of a probability of failure. Stat. Comput. 22(3), 773–793 (2011)
  4. Berman, S.M.: An extension of Plackett's differential equation for the multivariate normal density. SIAM J. Algebr. Discrete Methods 8(2), 196–197 (1987)
  5. Brochu, E., Cora, V.M., de Freitas, N.: A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning, December 2010. eprint arXiv:1012.2599
  6. Bull, A.: Convergence rates of efficient global optimization algorithms. J. Mach. Learn. Res. 12, 2879–2904 (2011)
  7. Chevalier, C.: Fast uncertainty reduction strategies relying on Gaussian process models. Ph.D. thesis, University of Bern (2013)
  8. Chevalier, C., Ginsbourger, D.: Fast computation of the multipoint expected improvement with applications in batch selection. In: Nicosia, G., Pardalos, P. (eds.) Learning and Intelligent Optimization. Springer, Heidelberg (2014)
  9. Ginsbourger, D., Picheny, V., Roustant, O., with contributions by Chevalier, C., Marmin, S., Wagner, T.: DiceOptim: Kriging-based optimization for computer experiments. R package version 1.5 (2015)
  10. Desautels, T., Krause, A., Burdick, J.: Parallelizing exploration-exploitation tradeoffs with Gaussian process bandit optimization. In: ICML (2012)
  11. Frazier, P.I.: Parallel global optimization using an improved multi-points expected improvement criterion. In: INFORMS Optimization Society Conference, Miami, FL (2012)
  12. Frazier, P.I., Powell, W.B., Dayanik, S.: A knowledge-gradient policy for sequential information collection. SIAM J. Control Optim. 47(5), 2410–2439 (2008)
  13. Genz, A.: Numerical computation of multivariate normal probabilities. J. Comput. Graph. Stat. 1, 141–149 (1992)
  14. Ginsbourger, D., Le Riche, R.: Towards Gaussian process-based optimization with finite time horizon. In: Giovagnoli, A., Atkinson, A.C., Torsney, B., May, C. (eds.) mODa 9 – Advances in Model-Oriented Design and Analysis, Contributions to Statistics, pp. 89–96. Physica-Verlag, Heidelberg (2010)
  15. Ginsbourger, D., Le Riche, R., Carraro, L.: Kriging is well-suited to parallelize optimization. In: Tenne, Y., Goh, C.-K. (eds.) Computational Intelligence in Expensive Optimization Problems. ALO, vol. 2, pp. 131–162. Springer, Heidelberg (2010)
  16. Jones, D.R., Schonlau, M., Welch, W.J.: Efficient global optimization of expensive black-box functions. J. Global Optim. 13(4), 455–492 (1998)
  17. Ye, K.Q., Li, W., Sudjianto, A.: Algorithmic construction of optimal symmetric Latin hypercube designs. J. Stat. Plann. Inf. 90(1), 145–159 (2000)
  18. Mebane, W., Sekhon, J.: Genetic optimization using derivatives: the rgenoud package for R. J. Stat. Softw. 42(11), 1–26 (2011)
  19. Mockus, J., Tiesis, V., Zilinskas, A.: The application of Bayesian methods for seeking the extremum. In: Dixon, L., Szego, G. (eds.) Towards Global Optimization, vol. 2, pp. 117–129. Elsevier, Amsterdam (1978)
  20. Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. MIT Press, Cambridge (2006)
  21. Roustant, O., Ginsbourger, D., Deville, Y.: DiceKriging, DiceOptim: two R packages for the analysis of computer experiments by kriging-based metamodelling and optimization. J. Stat. Softw. 51(1), 1–55 (2012)
  22. Schonlau, M.: Computer experiments and global optimization. Ph.D. thesis, University of Waterloo (1997)
  23. Srinivas, N., Krause, A., Kakade, S., Seeger, M.: Information-theoretic regret bounds for Gaussian process optimization in the bandit setting. IEEE Trans. Inf. Theory 58(5), 3250–3265 (2012)
  24. Vazquez, E., Bect, J.: Convergence properties of the expected improvement algorithm with fixed mean and covariance functions. J. Stat. Plan. Infer. 140(11), 3088–3095 (2010)
  25. Villemonteix, J., Vazquez, E., Walter, E.: An informational approach to the global optimization of expensive-to-evaluate functions. J. Global Optim. 44(4), 509–534 (2009)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Sébastien Marmin (1, 2, 3)
  • Clément Chevalier (4, 5)
  • David Ginsbourger (1, 6)

  1. Department of Mathematics and Statistics, IMSV, University of Bern, Bern, Switzerland
  2. Institut de Radioprotection et de Sûreté Nucléaire, Cadarache, France
  3. École Centrale de Marseille, Marseille, France
  4. Institute of Statistics, University of Neuchâtel, Neuchâtel, Switzerland
  5. Institute of Mathematics, University of Zurich, Zürich, Switzerland
  6. Idiap Research Institute, Martigny, Switzerland
