Memetic Computing, Volume 9, Issue 2, pp 153–180

Transfer weight functions for injecting problem information in the multi-objective CMA-ES

  • Olacir R. Castro Jr.
  • Aurora Pozo
  • Jose A. Lozano
  • Roberto Santana
Regular Research Paper

Abstract

The covariance matrix adaptation evolution strategy (CMA-ES) is one of the state-of-the-art evolutionary algorithms for optimization problems with continuous representation. It has been extensively applied to single-objective optimization problems, and different variants of CMA-ES have also been proposed for multi-objective optimization problems (MOPs). When applied to MOPs, the traditional steps of CMA-ES have to be modified to accommodate multiple objectives. This is particularly evident when the number of objectives exceeds three, since, with high probability, all the solutions produced become non-dominated. An open question is to what extent information about the objective values of the non-dominated solutions can be injected into the CMA-ES model for a more effective search. In this paper, we investigate this general question using several metrics that describe the quality of the solutions already evaluated, different transfer weight functions, and a set of difficult benchmark instances including many-objective problems. We introduce a number of new strategies that modify how the probabilistic model is learned in CMA-ES. Through an exhaustive empirical analysis on two difficult benchmarks of many-objective functions, we show that the proposed strategies for infusing quality-indicator information into the learned models achieve consistent improvements in the quality of the Pareto fronts obtained and enhance the convergence rate of the algorithm. Moreover, a comparison with a state-of-the-art algorithm from the literature shows that our approach achieves competitive results on problems with irregular Pareto fronts.
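To make the general idea concrete, the sketch below shows one way quality-indicator information could bias the Gaussian model that CMA-ES learns from the evaluated solutions. It is a minimal illustration, not the paper's actual method: the transfer_weights function (with illustrative "linear" and "exponential" shapes) and the choice of indicator values (e.g., hypervolume contributions) are assumptions, and the full (MO-)CMA-ES update additionally involves evolution paths, step-size control, and the previous covariance matrix, all omitted here.

    import numpy as np

    def transfer_weights(indicator_values, kind="linear"):
        # Map per-solution quality-indicator values (higher = better) to
        # positive weights summing to one. The "linear" and "exponential"
        # shapes are illustrative stand-ins, not the paper's functions.
        q = np.asarray(indicator_values, dtype=float)
        span = q.max() - q.min()
        q = (q - q.min()) / span if span > 0 else np.ones_like(q)
        if kind == "linear":
            w = q
        elif kind == "exponential":
            w = np.expm1(q)  # e^q - 1 emphasizes the best solutions
        else:
            raise ValueError(f"unknown transfer function: {kind}")
        w = w + 1e-12  # keep every weight strictly positive
        return w / w.sum()

    def weighted_model_update(X, indicator_values, kind="linear"):
        # Estimate the Gaussian search model (mean and covariance) from
        # the evaluated solutions X (one row per solution), weighting each
        # row by its quality indicator instead of uniformly. This is a
        # simplified rank-mu-style estimate of the sampling distribution.
        w = transfer_weights(indicator_values, kind)
        mean = w @ X                   # weighted mean of the solutions
        D = X - mean                   # deviations from the weighted mean
        cov = (w[:, None] * D).T @ D   # weighted sample covariance
        return mean, cov

For instance, passing the per-solution hypervolume contributions of the current non-dominated set as indicator_values would bias the sampling distribution of the next generation toward the regions that contribute most to the front, which is the kind of information injection the paper studies.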

Keywords

Many-objective · Covariance matrix adaptation · Optimization · Probabilistic modeling · Estimation of distribution algorithm

Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  1. Computer Science Department, Federal University of Paraná, Curitiba, Brazil
  2. Intelligent Systems Group, Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Donostia-San Sebastián, Spain