Multi-Objective Optimization with an Adaptive Resonance Theory-Based Estimation of Distribution Algorithm: A Comparative Study

  • Luis Martí
  • Jesús García
  • Antonio Berlanga
  • José M. Molina
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6683)

Abstract

The introduction of learning into the search mechanisms of optimization algorithms has been nominated as one of the viable approaches when dealing with complex optimization problems, in particular with multi-objective ones. One of the forms of carrying out this hybridization process is by using multi-objective optimization estimation of distribution algorithms (MOEDAs). However, it has been pointed out that current MOEDAs have an intrinsic shortcoming in their model-building algorithms that hampers their performance.

In this work we argue that error-based learning, the class of learning most commonly used in MOEDAs, is responsible for current MOEDA underachievement. We present adaptive resonance theory (ART) as a suitable alternative learning paradigm and introduce a novel algorithm called multi-objective ART-based EDA (MARTEDA), which uses a Gaussian ART neural network for model-building and a hypervolume-based selector as described for the HypE algorithm. In order to assess the improvement obtained by combining these two cutting-edge approaches to optimization, an extensive set of experiments is carried out. These experiments also test the scalability of MARTEDA as the number of objective functions increases.
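The hypervolume-based selection mentioned above ranks candidate solutions by the objective-space volume they dominate with respect to a reference point. As an illustration only (this is not the authors' implementation, nor the Monte Carlo estimator HypE uses in higher dimensions), an exact computation for the two-objective minimization case can be sketched as:

```python
def hypervolume_2d(front, ref):
    """Exact hypervolume of a nondominated 2-objective front (minimization).

    `front` is a list of (f1, f2) points; `ref` is a reference point
    dominated by every front member. Sorting by f1 makes f2 strictly
    decrease along a nondominated front, so the dominated region can be
    accumulated as one horizontal strip per point.
    """
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in sorted(front):  # ascending f1 -> descending f2
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Three points forming a simple front, reference point (4, 4):
print(hypervolume_2d([(2, 2), (1, 3), (3, 1)], (4, 4)))  # -> 6.0
```

This sweep runs in O(n log n) for two objectives; the cost of exact hypervolume computation grows sharply with the number of objectives, which is what motivates sampling-based approximations such as the one used by HypE.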

Keywords

Multiobjective Optimization, Distribution Algorithm, Adaptive Resonance Theory, Adaptive Resonance Theory Network, Adaptive Resonance Theory Neural Network
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. Coello Coello, C.A., Lamont, G.B., Van Veldhuizen, D.A.: Evolutionary Algorithms for Solving Multi-Objective Problems, 2nd edn. Genetic and Evolutionary Computation. Springer, New York (2007)
  2. Miettinen, K.: Nonlinear Multiobjective Optimization. International Series in Operations Research & Management Science, vol. 12. Kluwer, Norwell (1999)
  3. Pareto, V.: Cours D’Économie Politique. F. Rouge, Lausanne (1896)
  4. Purshouse, R.C., Fleming, P.J.: On the evolutionary optimization of many conflicting objectives. IEEE Transactions on Evolutionary Computation 11(6), 770–784 (2007)
  5. Stewart, T., Bandte, O., Braun, H., Chakraborti, N., Ehrgott, M., Göbelt, M., Jin, Y., Nakayama, H., Poles, S., Di Stefano, D.: Real-world applications of multiobjective optimization. In: Branke, J., Deb, K., Miettinen, K., Słowiński, R. (eds.) Multiobjective Optimization. LNCS, vol. 5252, pp. 285–327. Springer, Heidelberg (2008)
  6. Wagner, T., Beume, N., Naujoks, B.: Pareto-, aggregation-, and indicator-based methods in many-objective optimization. In: Obayashi, S., Deb, K., Poloni, C., Hiroyasu, T., Murata, T. (eds.) EMO 2007. LNCS, vol. 4403, pp. 742–756. Springer, Heidelberg (2007)
  7. Bader, J., Deb, K., Zitzler, E.: Faster hypervolume-based search using Monte Carlo sampling. In: Beckmann, M., Künzi, H.P., Fandel, G., Trockel, W., Basile, A., Drexl, A., Dawid, H., Inderfurth, K., Kürsten, W., Schittko, U., Ehrgott, M., Naujoks, B., Stewart, T.J., Wallenius, J. (eds.) Multiple Criteria Decision Making for Sustainable Energy and Transportation Systems. LNEMS, vol. 634, pp. 313–326. Springer, Berlin (2010)
  8. Bader, J., Zitzler, E.: HypE: An Algorithm for Fast Hypervolume-Based Many-Objective Optimization. TIK Report 286, Computer Engineering and Networks Laboratory (TIK), ETH Zurich (2008)
  9. Deb, K., Saxena, D.K.: Searching for Pareto-optimal solutions through dimensionality reduction for certain large-dimensional multi-objective optimization problems. In: 2006 IEEE Conference on Evolutionary Computation (CEC 2006), pp. 3352–3360. IEEE Press, Piscataway (2006)
  10. Brockhoff, D., Zitzler, E.: Dimensionality reduction in multiobjective optimization: The minimum objective subset problem. In: Waldmann, K.H., Stocker, U.M. (eds.) Operations Research Proceedings 2006, pp. 423–429. Springer, Heidelberg (2007)
  11. Brockhoff, D., Saxena, D.K., Deb, K., Zitzler, E.: On handling a large number of objectives a posteriori and during optimization. In: Knowles, J., Corne, D., Deb, K. (eds.) Multi-Objective Problem Solving from Nature: From Concepts to Applications. Natural Computing Series, pp. 377–403. Springer, Heidelberg (2008)
  12. Corne, D.W.: Single objective = past, multiobjective = present, ??? = future. In: Michalewicz, Z. (ed.) 2008 IEEE Conference on Evolutionary Computation (CEC), Part of 2008 IEEE World Congress on Computational Intelligence (WCCI 2008). IEEE Press, Piscataway (2008)
  13. Michalski, R.S.: Learnable evolution model: Evolutionary processes guided by machine learning. Machine Learning 38, 9–40 (2000)
  14. Sheri, G., Corne, D.W.: The simplest evolution/learning hybrid: LEM with KNN. In: IEEE World Congress on Computational Intelligence, pp. 3244–3251. IEEE Press, Hong Kong (2008)
  15. Sheri, G., Corne, D.W.: Learning-assisted evolutionary search for scalable function optimization: LEM(ID3). In: IEEE World Congress on Computational Intelligence. IEEE Press, Barcelona (2010)
  16. Lozano, J.A., Larrañaga, P., Inza, I., Bengoetxea, E. (eds.): Towards a New Evolutionary Computation: Advances on Estimation of Distribution Algorithms. Springer, Heidelberg (2006)
  17. Pelikan, M., Sastry, K., Goldberg, D.E.: Multiobjective estimation of distribution algorithms. In: Pelikan, M., Sastry, K., Cantú-Paz, E. (eds.) Scalable Optimization via Probabilistic Modeling: From Algorithms to Applications. SCI, pp. 223–248. Springer, Heidelberg (2006)
  18. Martí, L., García, J., Berlanga, A., Coello Coello, C.A., Molina, J.M.: On current model-building methods for multi-objective estimation of distribution algorithms: Shortcomings and directions for improvement. Technical Report GIAA2010E001, Grupo de Inteligencia Artificial Aplicada, Universidad Carlos III de Madrid, Colmenarejo, Spain (2010)
  19. Grossberg, S.: Studies of Mind and Brain: Neural Principles of Learning, Perception, Development, Cognition, and Motor Control. Reidel, Boston (1982)
  20. Sarle, W.S.: Why statisticians should not FART. Technical report, SAS Institute, Cary, NC (1995)
  21. Williamson, J.R.: Gaussian ARTMAP: A neural network for fast incremental learning of noisy multidimensional maps. Neural Networks 9, 881–897 (1996)
  22. Martí, L., García, J., Berlanga, A., Molina, J.M.: Moving away from error-based learning in multi-objective estimation of distribution algorithms. In: Branke, J., Alba, E., Arnold, D., Bongard, J., Brabazon, A., Butz, M.V., Clune, J., Cohen, M., Deb, K., Engelbrecht, A., Krasnogor, N., Miller, J., O’Neill, M., Sastry, K., Thierens, D., Vanneschi, L., van Hemert, J., Witt, C. (eds.) GECCO 2010: Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation, pp. 545–546. ACM Press, New York (2010)
  23. Ahn, C.W., Ramakrishna, R.S.: Multiobjective real-coded Bayesian optimization algorithm revisited: Diversity preservation. In: GECCO 2007: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, pp. 593–600. ACM Press, New York (2007)
  24. Shapiro, J.: Diversity loss in general estimation of distribution algorithms. In: Runarsson, T.P., Beyer, H.-G., Burke, E.K., Merelo-Guervós, J.J., Whitley, L.D., Yao, X. (eds.) PPSN 2006. LNCS, vol. 4193, pp. 92–101. Springer, Heidelberg (2006)
  25. Yuan, B., Gallagher, M.: On the importance of diversity maintenance in estimation of distribution algorithms. In: GECCO 2005: Proceedings of the 2005 Conference on Genetic and Evolutionary Computation, pp. 719–726. ACM Press, New York (2005)
  26. Peña, J.M., Robles, V., Larrañaga, P., Herves, V., Rosales, F., Pérez, M.S.: GA-EDA: Hybrid evolutionary algorithm using genetic and estimation of distribution algorithms. In: Orchard, B., Yang, C., Ali, M. (eds.) IEA/AIE 2004. LNCS (LNAI), vol. 3029, pp. 361–371. Springer, Heidelberg (2004)
  27. Zhang, Q., Sun, J., Tsang, E.: An evolutionary algorithm with guided mutation for the maximum clique problem. IEEE Transactions on Evolutionary Computation 9(2), 192–200 (2005)
  28. Zitzler, E., Thiele, L., Laumanns, M., Fonseca, C.M., Grunert da Fonseca, V.: Performance assessment of multiobjective optimizers: An analysis and review. IEEE Transactions on Evolutionary Computation 7(2), 117–132 (2003)
  29. While, L., Hingston, P., Barone, L., Huband, S.: A faster algorithm for calculating hypervolume. IEEE Transactions on Evolutionary Computation 10(1), 29–38 (2006)
  30. Fonseca, C.M., Paquete, L., López-Ibáñez, M.: An improved dimension-sweep algorithm for the hypervolume indicator. In: 2006 IEEE Congress on Evolutionary Computation (CEC 2006), pp. 1157–1163 (2006)
  31. Beume, N., Rudolph, G.: Faster S-metric calculation by considering dominated hypervolume as Klee’s measure problem. In: Kovalerchuk, B. (ed.) Proceedings of the Second IASTED International Conference on Computational Intelligence, pp. 233–238. IASTED/ACTA Press (2006)
  32. Beume, N.: S-metric calculation by considering dominated hypervolume as Klee’s measure problem. Evolutionary Computation 17(4), 477–492 (2009)
  33. Bringmann, K., Friedrich, T.: Approximating the volume of unions and intersections of high-dimensional geometric objects. Computational Geometry 43(6-7), 601–610 (2010)
  34. Papadimitriou, C.H.: Computational Complexity. Addison-Wesley, Reading (1994)
  35. Deolalikar, V.: P≠NP. Technical report, Hewlett Packard Research Labs, Palo Alto, CA, USA (2010)
  36. Box, G.E.P., Muller, M.E.: A note on the generation of random normal deviates. Annals of Mathematical Statistics 29, 610–611 (1958)
  37. Huband, S., Hingston, P., Barone, L., While, L.: A review of multiobjective test problems and a scalable test problem toolkit. IEEE Transactions on Evolutionary Computation 10(5), 477–506 (2006)
  38. Martí, L., García, J., Berlanga, A., Molina, J.M.: Introducing MONEDA: Scalable multiobjective optimization with a neural estimation of distribution algorithm. In: Keizer, M., Antoniol, G., Congdon, C., Deb, K., Doerr, B., Hansen, N., Holmes, J., Hornby, G., Howard, D., Kennedy, J., Kumar, S., Lobo, F., Miller, J., Moore, J., Neumann, F., Pelikan, M., Pollack, J., Sastry, K., Stanley, K., Stoica, A., Talbi, E.G., Wegener, I. (eds.) GECCO 2008: 10th Annual Conference on Genetic and Evolutionary Computation, pp. 689–696. ACM Press, New York (2008); EMO Track “Best Paper” Nominee
  39. Bosman, P.A.N., Thierens, D.: The naive MIDEA: A baseline multi-objective EA. In: Coello Coello, C.A., Hernández Aguirre, A., Zitzler, E. (eds.) EMO 2005. LNCS, vol. 3410, pp. 428–442. Springer, Heidelberg (2005)
  40. Ahn, C.W.: Advances in Evolutionary Algorithms: Theory, Design and Practice. Springer, Heidelberg (2006) ISBN 3-540-31758-9
  41. Beume, N., Naujoks, B., Emmerich, M.: SMS-EMOA: Multiobjective selection based on dominated hypervolume. European Journal of Operational Research 181(3), 1653–1669 (2007)
  42. Knowles, J., Thiele, L., Zitzler, E.: A tutorial on the performance assessment of stochastic multiobjective optimizers. TIK Report 214, Computer Engineering and Networks Laboratory (TIK), ETH Zurich (2006)
  43. Chambers, J., Cleveland, W., Kleiner, B., Tukey, P.: Graphical Methods for Data Analysis. Wadsworth, Belmont (1983)
  44. Mann, H.B., Whitney, D.R.: On a test of whether one of two random variables is stochastically larger than the other. Annals of Mathematical Statistics 18, 50–60 (1947)
  45. Bader, J.: Hypervolume-Based Search for Multiobjective Optimization: Theory and Methods. PhD thesis, ETH Zurich, Switzerland (2010)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Luis Martí (1)
  • Jesús García (1)
  • Antonio Berlanga (1)
  • José M. Molina (1)

  1. Group of Applied Artificial Intelligence, Universidad Carlos III de Madrid, Madrid, Spain