Multi-objective optimization with an adaptive resonance theory-based estimation of distribution algorithm

  • Luis Martí
  • Jesús García
  • Antonio Berlanga
  • José M. Molina

Abstract

The introduction of learning into the search mechanisms of optimization algorithms has been identified as a viable approach for dealing with complex optimization problems, in particular multi-objective ones. One way of carrying out this hybridization is through multi-objective optimization estimation of distribution algorithms (MOEDAs). However, it has been pointed out that current MOEDAs have an intrinsic shortcoming in their model-building algorithms that hampers their performance. In this work, we put forward the argument that error-based learning, the class of learning most commonly used in MOEDAs, is responsible for current MOEDA underachievement. We present adaptive resonance theory (ART) as a suitable alternative learning paradigm and introduce a novel algorithm, the multi-objective ART-based EDA (MARTEDA), which uses a Gaussian ART neural network for model-building and a hypervolume-based selector as described for the HypE algorithm. In order to assess the improvement obtained by combining these two cutting-edge approaches to optimization, an extensive set of experiments is carried out. These experiments also test the scalability of MARTEDA as the number of objective functions increases.
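As an illustrative aside, the hypervolume-based selection that MARTEDA borrows from HypE can be sketched in the two-objective case, where the indicator has a simple exact form. The code below is a minimal, hypothetical sketch (not the authors' implementation): it computes the hypervolume of a mutually non-dominated minimization front with respect to a reference point, and each point's exclusive hypervolume contribution, which is the quantity a hypervolume-based selector ranks candidates by.

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Exact hypervolume of a mutually non-dominated 2-D minimization
    front with respect to a reference point `ref` (worse in both objectives)."""
    pts = front[np.argsort(front[:, 0])]  # ascending in f1 => descending in f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        # each point adds the rectangle between it, the reference point,
        # and the previous (better-in-f2) point
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def contributions(front, ref):
    """Exclusive contribution of each point: the hypervolume lost when it
    is removed. A selector keeps the points with the largest values."""
    total = hypervolume_2d(front, ref)
    return np.array([total - hypervolume_2d(np.delete(front, i, axis=0), ref)
                     for i in range(len(front))])
```

Note that the full HypE selector generalizes this idea: it also accounts for contributions shared among several points and uses Monte Carlo sampling to approximate the indicator when the number of objectives grows, which is exactly the many-objective regime the experiments in this article target.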

Keywords

Multi-objective optimization · Estimation of distribution algorithms · Adaptive resonance theory

Mathematics Subject Classifications (2010)

65K10 · 68T20 · 68T05



Copyright information

© Springer Science+Business Media B.V. 2012

Authors and Affiliations

  • Luis Martí (1)
  • Jesús García (1)
  • Antonio Berlanga (1)
  • José M. Molina (1)

  1. Group of Applied Artificial Intelligence, Universidad Carlos III de Madrid, Madrid, Spain