A Bayesian Approach to Constrained Multi-objective Optimization

  • Paul Feliot
  • Julien Bect
  • Emmanuel Vazquez
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8994)


This paper addresses the problem of derivative-free multi-objective optimization of real-valued functions under multiple inequality constraints. Both the objective and constraint functions are assumed to be smooth, nonlinear, and expensive to evaluate. As a consequence, the number of evaluations that can be used to carry out the optimization is very limited. The method we propose to overcome this difficulty has its roots in the Bayesian and multi-objective optimization literatures. More specifically, we make use of an extended domination rule that takes both constraints and objectives into account under a unified multi-objective framework, and we propose a generalization of the expected improvement sampling criterion adapted to the problem. The effectiveness of the method is illustrated with a proof of concept on a constrained multi-objective optimization test problem.
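To make the extended domination idea concrete, the following is a minimal sketch in Python. It is not the paper's exact formulation (which casts constraints and objectives in a unified multi-objective framework); it illustrates a common variant under stated assumptions: objectives are minimized, constraints have the form c(x) ≤ 0, feasible points are compared on their objectives, a feasible point dominates any infeasible one, and infeasible points are compared on their constraint violations. All function names here are hypothetical.

```python
import numpy as np

def pareto_dominates(a, b):
    """True if vector a Pareto-dominates vector b (componentwise, minimization)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return bool(np.all(a <= b) and np.any(a < b))

def extended_dominates(f_a, c_a, f_b, c_b):
    """Extended domination rule (illustrative sketch, not the paper's exact rule).

    f_a, f_b : objective vectors (to be minimized)
    c_a, c_b : constraint vectors; feasibility means c(x) <= 0 componentwise
    """
    # Constraint violations: only positive constraint values count.
    viol_a = np.maximum(np.asarray(c_a, dtype=float), 0.0)
    viol_b = np.maximum(np.asarray(c_b, dtype=float), 0.0)
    a_feasible = not viol_a.any()
    b_feasible = not viol_b.any()
    if a_feasible and b_feasible:
        # Both feasible: usual Pareto domination on the objectives.
        return pareto_dominates(f_a, f_b)
    if a_feasible != b_feasible:
        # A feasible point dominates any infeasible one.
        return a_feasible
    # Both infeasible: compare constraint-violation vectors instead.
    return pareto_dominates(viol_a, viol_b)
```

In a Bayesian setting, a generalized expected improvement criterion would then measure the expected gain, under the Gaussian-process posteriors, of a candidate point with respect to the set of points that are non-dominated under this extended rule.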


Keywords: Multiple inequality constraints · Expected improvement criterion · Constraint functions · Unconstrained multi-objective problems · Vector-valued Gaussian process



This research work has been carried out within the framework of the Technological Research Institute SystemX and was therefore supported by public funds within the scope of the French program Investissements d'Avenir.



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. IRT SystemX, Palaiseau, France
  2. SUPELEC, Gif-sur-Yvette, France
