
A Bayesian Approach to Constrained Multi-objective Optimization

  • Conference paper
  • First Online:
Learning and Intelligent Optimization (LION 2015)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 8994)

Included in the following conference series: Learning and Intelligent Optimization (LION)

Abstract

This paper addresses the problem of derivative-free multi-objective optimization of real-valued functions under multiple inequality constraints. Both the objective and constraint functions are assumed to be smooth, nonlinear, expensive-to-evaluate functions. As a consequence, the number of evaluations that can be used to carry out the optimization is very limited. The method we propose to overcome this difficulty has its roots in the Bayesian and multi-objective optimization literatures. More specifically, we make use of an extended domination rule taking both constraints and objectives into account under a unified multi-objective framework and propose a generalization of the expected improvement sampling criterion adapted to the problem. A proof of concept on a constrained multi-objective optimization test problem is given as an illustration of the effectiveness of the method.
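
The abstract only outlines the method, so the following minimal Python sketch is given purely as an illustration of the two ingredients it mentions. It assumes minimization of all objectives, inequality constraints written as c_j(x) <= 0, and a Fonseca-Fleming style extended domination rule (feasible points dominate infeasible ones, infeasible points are compared by their constraint violations, feasible points by Pareto dominance on the objectives). The Monte Carlo criterion shown is a simplified probability-of-improvement stand-in, not the expected-improvement generalization developed in the paper, and all function names are hypothetical.

    # Minimal sketch, NOT the authors' implementation.
    # Assumptions: minimization, constraints c_j(x) <= 0, and a
    # Fonseca-Fleming style extended domination rule.
    import numpy as np

    def extended_dominates(fa, ca, fb, cb, tol=0.0):
        """Return True if point a extended-dominates point b.

        fa, fb : arrays of objective values (to be minimized).
        ca, cb : arrays of constraint values (feasible iff all <= tol).
        """
        viol_a = np.maximum(ca - tol, 0.0)
        viol_b = np.maximum(cb - tol, 0.0)
        feas_a = not np.any(viol_a > 0.0)
        feas_b = not np.any(viol_b > 0.0)

        if feas_a and not feas_b:       # a feasible point dominates any infeasible one
            return True
        if feas_b and not feas_a:
            return False
        if not feas_a and not feas_b:   # both infeasible: compare constraint violations
            return np.all(viol_a <= viol_b) and np.any(viol_a < viol_b)
        # both feasible: usual Pareto dominance on the objectives
        return np.all(fa <= fb) and np.any(fa < fb)

    def mc_improvement_criterion(mu_f, sd_f, mu_c, sd_c, F_obs, C_obs,
                                 n_samples=1000, rng=None):
        """Monte Carlo probability that a candidate point is not
        extended-dominated by any evaluated point.

        mu_f, sd_f : posterior mean / std of the objectives at the candidate.
        mu_c, sd_c : posterior mean / std of the constraints at the candidate.
        F_obs, C_obs : objective and constraint values of the points evaluated so far.

        Simplified probability-of-improvement style criterion; the paper
        instead develops a generalization of expected improvement.
        """
        rng = np.random.default_rng() if rng is None else rng
        count = 0
        for _ in range(n_samples):
            f = rng.normal(mu_f, sd_f)  # draw objectives from the posterior
            c = rng.normal(mu_c, sd_c)  # draw constraints from the posterior
            dominated = any(extended_dominates(fo, co, f, c)
                            for fo, co in zip(F_obs, C_obs))
            if not dominated:
                count += 1
        return count / n_samples

A full implementation would replace this probability-of-improvement estimate by the expected-improvement generalization described in the paper and embed it in a standard Bayesian optimization loop: fit probabilistic models of the objectives and constraints, maximize the sampling criterion over the design space, evaluate the selected point, and repeat until the evaluation budget is exhausted.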

Acknowledgements

This research work was carried out within the framework of the Technological Research Institute SystemX and was therefore supported by public funds within the scope of the French Program Investissements d'Avenir.

Author information

Corresponding author

Correspondence to Paul Feliot.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Feliot, P., Bect, J., Vazquez, E. (2015). A Bayesian Approach to Constrained Multi-objective Optimization. In: Dhaenens, C., Jourdan, L., Marmion, ME. (eds) Learning and Intelligent Optimization. LION 2015. Lecture Notes in Computer Science, vol 8994. Springer, Cham. https://doi.org/10.1007/978-3-319-19084-6_24

  • DOI: https://doi.org/10.1007/978-3-319-19084-6_24

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-19083-9

  • Online ISBN: 978-3-319-19084-6

  • eBook Packages: Computer Science, Computer Science (R0)
