Journal of Global Optimization, Volume 67, Issue 1–2, pp 97–133

A Bayesian approach to constrained single- and multi-objective optimization

Abstract

This article addresses the problem of derivative-free (single- or multi-objective) optimization subject to multiple inequality constraints. Both the objective and constraint functions are assumed to be smooth, non-linear and expensive to evaluate. As a consequence, the number of evaluations that can be used to carry out the optimization is very limited, as in complex industrial design optimization problems. The method we propose to overcome this difficulty has its roots in both the Bayesian and the multi-objective optimization literatures. More specifically, an extended domination rule is used to handle objectives and constraints in a unified way, and a corresponding expected hypervolume improvement sampling criterion is proposed. This new criterion is naturally adapted to the search for a feasible point when none is available, and reduces to existing Bayesian sampling criteria—the classical Expected Improvement (EI) criterion and some of its constrained/multi-objective extensions—as soon as at least one feasible point is available. The calculation and optimization of the criterion are performed using Sequential Monte Carlo techniques. In particular, an algorithm similar to the subset simulation method, which is well known in the field of structural reliability, is used to estimate the criterion. The method, which we call BMOO (for Bayesian Multi-Objective Optimization), is compared to state-of-the-art algorithms for single- and multi-objective constrained optimization.
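The two key ingredients of the abstract can be illustrated with a short sketch. The Python snippet below shows one plausible extended domination rule of the kind described above (an illustrative construction, with constraints c(x) <= 0 deemed feasible; not necessarily the exact rule of the paper), together with a naive Monte Carlo estimate of the expected hypervolume improvement. Plain uniform sampling over a bounding box stands in for the paper's sequential Monte Carlo / subset-simulation machinery, and independent Gaussian marginals stand in for the Gaussian process posterior; all function names are ours.

```python
import numpy as np

# Extended domination (illustrative): objectives are minimized and
# constraints are feasible when c(x) <= 0. Feasible points keep their
# objectives and get zero violation; infeasible points get +inf
# objectives and their positive violations. Pareto domination in this
# extended space then (i) reduces to ordinary domination among feasible
# points, (ii) ranks infeasible points by constraint violation alone,
# and (iii) makes every feasible point dominate every infeasible one.

def extend(y_obj, y_con):
    y_obj, y_con = np.asarray(y_obj, float), np.asarray(y_con, float)
    if np.all(y_con <= 0.0):                      # feasible point
        return np.concatenate([y_obj, np.zeros_like(y_con)])
    return np.concatenate([np.full(y_obj.shape, np.inf),
                           np.maximum(y_con, 0.0)])

def dominates(z1, z2):
    """Pareto domination for minimization: z1 dominates z2."""
    return bool(np.all(z1 <= z2)) and bool(np.any(z1 < z2))

# Naive Monte Carlo estimate of the expected hypervolume improvement,
# shown in the objective space for simplicity (an extended-space version
# would apply the same recipe after the extend() mapping). Uniform
# sampling over the bounding box B is a crude stand-in for the
# subset-simulation-style SMC estimator used in the paper.

rng = np.random.default_rng(0)

def ehvi_estimate(post_mean, post_std, front, box_lo, box_hi,
                  n_post=64, n_box=4096):
    box_lo, box_hi = np.asarray(box_lo, float), np.asarray(box_hi, float)
    u = rng.uniform(box_lo, box_hi, size=(n_box, len(box_lo)))
    # Box points already dominated by the current front.
    dom_now = np.zeros(n_box, dtype=bool)
    for z in front:
        dom_now |= np.all(z <= u, axis=1)
    vol_box = np.prod(box_hi - box_lo)
    gain = 0.0
    for _ in range(n_post):
        y = rng.normal(post_mean, post_std)       # posterior draw at x
        gain += (np.all(y <= u, axis=1) & ~dom_now).mean() * vol_box
    return gain / n_post

# Example: two objectives, a two-point front, bounding box [0, 1]^2.
front = [np.array([0.2, 0.8]), np.array([0.6, 0.3])]
print(ehvi_estimate(np.array([0.4, 0.4]), np.array([0.1, 0.1]),
                    front, [0.0, 0.0], [1.0, 1.0]))
```

The accuracy of such a plain uniform-sampling estimator degrades quickly as the number of objectives and constraints grows, which is precisely the regime that motivates the subset-simulation-style SMC estimator described in the abstract.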

Keywords

Bayesian optimization · Expected improvement · Kriging · Gaussian process · Multi-objective · Sequential Monte Carlo · Subset simulation


Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  1. Institut de Recherche Technologique SystemX, Palaiseau, France
  2. Laboratoire des Signaux et Systèmes (L2S), CentraleSupélec, CNRS, Univ. Paris-Sud, Université Paris-Saclay, Gif-sur-Yvette, France
