
Transfer weight functions for injecting problem information in the multi-objective CMA-ES

  • Regular Research Paper
  • Published in Memetic Computing

Abstract

The covariance matrix adaptation evolution strategy (CMA-ES) is one of the state-of-the-art evolutionary algorithms for optimization problems with continuous representation. It has been extensively applied to single-objective optimization problems, and different variants of CMA-ES have also been proposed for multi-objective optimization problems (MOPs). When applied to MOPs, the traditional steps of CMA-ES have to be modified to accommodate multiple objectives. This is particularly evident when the number of objectives is higher than three, since then, with high probability, all the solutions produced are non-dominated. An open question is to what extent information about the objective values of the non-dominated solutions can be injected into the CMA-ES model for a more effective search. In this paper, we investigate this general question using several metrics that describe the quality of the solutions already evaluated, different transfer weight functions, and a set of difficult benchmark instances including many-objective problems. We introduce a number of new strategies that modify how the probabilistic model is learned in CMA-ES. By conducting an exhaustive empirical analysis on two difficult benchmarks of many-objective functions, we show that the proposed strategies for infusing information about the quality indicators into the learned models achieve consistent improvements in the quality of the obtained Pareto fronts and enhance the convergence rate of the algorithm. Moreover, a comparison with a state-of-the-art algorithm from the literature yields competitive results on problems with irregular Pareto fronts.
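The abstract does not spell out the update equations, so the fragment below is only a rough illustration of the general idea, not the authors' exact method: a hypothetical transfer weight function rescales per-solution quality scores (for instance, hypervolume contributions) and blends them with the standard rank-based recombination weights before the mean and rank-mu covariance updates of CMA-ES. The function names, the 50/50 blend, and the learning rate are assumptions made for the sketch.

```python
import numpy as np

def transfer_weights(quality, kind="linear"):
    """Map per-solution quality scores (e.g. hypervolume contributions)
    to normalised, non-negative transfer weights (illustrative only)."""
    q = np.asarray(quality, dtype=float)
    q = (q - q.min()) / (q.max() - q.min() + 1e-12)   # rescale to [0, 1]
    w = q if kind == "linear" else np.expm1(q)         # two example shapes
    return w / (w.sum() + 1e-12)

def weighted_model_update(mean, cov, sigma, population, quality, lr=0.3):
    """One rank-mu style CMA-ES update in which the standard log-rank
    recombination weights are blended with indicator-based transfer weights."""
    pop = np.asarray(population, dtype=float)
    q = np.asarray(quality, dtype=float)
    order = np.argsort(-q)                 # best quality first
    mu = len(pop) // 2                     # recombine the better half
    sel = pop[order][:mu]
    # standard positive log-rank weights
    w_rank = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w_rank /= w_rank.sum()
    # blend (50/50 is an arbitrary choice here) with transfer weights
    w = 0.5 * w_rank + 0.5 * transfer_weights(q[order][:mu])
    w /= w.sum()
    new_mean = w @ sel
    steps = (sel - mean) / sigma           # normalised selected steps
    rank_mu = sum(wi * np.outer(s, s) for wi, s in zip(w, steps))
    new_cov = (1.0 - lr) * cov + lr * rank_mu
    return new_mean, new_cov
```

In a multi-objective setting, the quality scores would come from whatever indicator the strategy uses over the current non-dominated set, such as the contributing hypervolume or the crowding distance.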


Notes

  1. A newly generated solution is considered successful if the offspring is better than its parent according to the dominance ranks or to a secondary measure; the original paper used both the crowding distance and the contributing hypervolume as secondary measures.

  2. In MOEA/D and MOEA/D-CMA, a subproblem is considered a neighbor of itself; hence, when a subproblem updates the solutions of its neighbors, it also updates its own solution.

  3. An external archive (or repository) is used to store a predefined number of non-dominated solutions. When the archive is full and a new non-dominated solution is found, the new solution is temporarily added and the member with the smallest crowding distance is removed (a short sketch of this bookkeeping is given after these notes).

  4. When a Pareto set approximation dominates another, the indicator value of the former is greater than that of the latter (this property is stated formally after these notes).
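Stated compactly, the property in note 4 (strict Pareto compliance of a unary indicator $I$ that is to be maximised, with the hypervolume under a fixed reference point as the standard example) reads: $A \succ B \Rightarrow I(A) > I(B)$, where $A \succ B$ means that the Pareto set approximation $A$ dominates $B$.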
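As a minimal sketch of the archive bookkeeping described in note 3, assuming minimisation and the NSGA-II crowding distance as the pruning criterion (the function names are illustrative and do not come from the paper):

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation assumed)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return bool(np.all(a <= b) and np.any(a < b))

def crowding_distance(front):
    """NSGA-II crowding distance for a list of objective vectors."""
    front = np.asarray(front, dtype=float)
    n, m = front.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(front[:, j])
        dist[order[0]] = dist[order[-1]] = np.inf    # keep boundary points
        span = front[order[-1], j] - front[order[0], j]
        if span == 0.0:
            continue
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1], j] - front[order[k - 1], j]) / span
    return dist

def archive_insert(archive, new_obj, capacity):
    """Insert new_obj into a bounded non-dominated archive; on overflow,
    drop the member with the smallest crowding distance (cf. note 3)."""
    if any(dominates(a, new_obj) for a in archive):
        return archive                               # dominated: reject
    archive = [a for a in archive if not dominates(new_obj, a)]
    archive.append(np.asarray(new_obj, dtype=float))
    if len(archive) > capacity:
        worst = int(np.argmin(crowding_distance(archive)))
        archive.pop(worst)
    return archive
```

Only objective vectors are stored here to keep the sketch short; a real archive would keep the decision vectors as well.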


Author information


Corresponding author

Correspondence to Olacir R. Castro Jr.

Additional information

This work was supported by CNPq, the Brazilian National Council for Scientific and Technological Development (Productivity Grant No. 305986/2012-0 and Science Without Borders Program Grant No. 200040/2015-4), by the IT-609-13 program (Basque Government), and by grant TIN2013-41272P (Spanish Ministry of Science and Innovation).


About this article


Cite this article

Castro Jr., O.R., Pozo, A., Lozano, J.A. et al. Transfer weight functions for injecting problem information in the multi-objective CMA-ES. Memetic Comp. 9, 153–180 (2017). https://doi.org/10.1007/s12293-016-0202-5

