Abstract
Neuroevolution, the process of creating artificial neural networks through simulated evolution, can become impractical for arbitrarily complex problems that require large or intricate neural network architectures. The modular feed-forward neural network (MFFN) architecture decomposes a problem among a number of independent, task-specific neural networks, and is suggested here as a means of managing neuroevolution for complex problems. We present an algorithm for evolving MFFN architectures based on the NeuroEvolution of Augmenting Topologies (NEAT) algorithm. The proposed algorithm, denoted MFF-NEAT, outlines a principled approach to automatically evolving the task-specific networks, attributing fitness values to them, and combining their outputs.
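The core architectural idea, combining the outputs of independent task-specific networks into a single modular network, can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the `TaskModule` and `MFFN` names are hypothetical, and each module is a stand-in single dense layer, whereas MFF-NEAT would evolve each module's topology and weights.

```python
import math
import random

def dense_layer(weights, biases, inputs):
    """One feed-forward layer with a tanh activation."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

class TaskModule:
    """An independent network responsible for one sub-task's outputs.

    Here a single randomly initialised layer; in MFF-NEAT each module's
    topology would be evolved via NEAT.
    """
    def __init__(self, n_in, n_out, rng):
        self.weights = [[rng.uniform(-1, 1) for _ in range(n_in)]
                        for _ in range(n_out)]
        self.biases = [rng.uniform(-1, 1) for _ in range(n_out)]

    def activate(self, inputs):
        return dense_layer(self.weights, self.biases, inputs)

class MFFN:
    """Modular feed-forward network: the problem's outputs are split
    among independent modules, and the MFFN concatenates their results."""
    def __init__(self, n_in, task_sizes, seed=0):
        rng = random.Random(seed)
        self.modules = [TaskModule(n_in, n_out, rng) for n_out in task_sizes]

    def activate(self, inputs):
        combined = []
        for module in self.modules:   # each module sees the full input
            combined.extend(module.activate(inputs))
        return combined

# Two task-specific modules producing 2 and 1 outputs respectively.
net = MFFN(n_in=3, task_sizes=[2, 1])
outputs = net.activate([0.5, -0.2, 0.9])
print(len(outputs))  # 3 combined outputs
```

Because each module is independent, it can in principle be evolved and assigned fitness separately, which is the management benefit the abstract attributes to the MFFN decomposition.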
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Manning, T., Walsh, P. (2012). Automatic Task Decomposition for the NeuroEvolution of Augmenting Topologies (NEAT) Algorithm. In: Giacobini, M., Vanneschi, L., Bush, W.S. (eds) Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics. EvoBIO 2012. Lecture Notes in Computer Science, vol 7246. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29066-4_1
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-29065-7
Online ISBN: 978-3-642-29066-4