Abstract
A novel approach to topology and weight evolving artificial neural networks (TWEANNs) is presented. Compared with previous TWEANNs, this method has two major characteristics. First, the set of genetic operations is designed without recombination, because recombination often generates offspring whose fitness is considerably worse than that of their parents. Instead, the operator set includes two topological mutations whose effect on fitness is assumed to be nearly neutral. Second, a new encoding technique is introduced that defines a genotype as a set of substrings called operons. To evaluate the approach, computer simulations were conducted on a standard reinforcement learning benchmark: double pole balancing without velocity information. The results were compared with those of NEAT, which is recognised as one of the most powerful TWEANN techniques. The proposed approach yields competitive results, especially when the problem is difficult.
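The two ideas in the abstract — a genotype built from operon substrings, and topological mutations chosen to be nearly neutral in fitness — can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation: the names `Operon`, `add_node`, and `add_link`, and the specific weight choices, are hypothetical. The near-neutrality here comes from splitting a link while roughly preserving its signal path, and from introducing new links with zero weight so the network's output is initially unchanged.

```python
import random

class Operon:
    """A substring of the genotype: a bundle of node and link genes."""
    def __init__(self, nodes, links):
        self.nodes = list(nodes)   # hidden-node ids owned by this operon
        self.links = list(links)   # link genes as [src, dst, weight]

class Genotype:
    """Genotype = set of operons; operon 0 holds the initial input-output links."""
    def __init__(self, n_in, n_out, rng):
        self.n_in, self.n_out = n_in, n_out
        self.next_node = n_in + n_out
        links = [[i, n_in + o, rng.uniform(-1.0, 1.0)]
                 for i in range(n_in) for o in range(n_out)]
        self.operons = [Operon([], links)]

    def all_links(self):
        return [l for op in self.operons for l in op.links]

    def add_node(self, rng):
        """Nearly neutral mutation (assumed form): split a random link
        (a, b, w) into (a, c, w) and (c, b, 1.0) through a new hidden
        node c, packaged as a new operon."""
        op = rng.choice([o for o in self.operons if o.links])
        src, dst, w = op.links.pop(rng.randrange(len(op.links)))
        c = self.next_node
        self.next_node += 1
        self.operons.append(Operon([c], [[src, c, w], [c, dst, 1.0]]))

    def add_link(self, rng):
        """Neutral mutation (assumed form): connect two existing nodes
        with weight 0.0, so behaviour is initially unchanged."""
        nodes = list(range(self.n_in + self.n_out)) + \
                [n for op in self.operons for n in op.nodes]
        src, dst = rng.choice(nodes), rng.choice(nodes)
        rng.choice(self.operons).links.append([src, dst, 0.0])

rng = random.Random(0)
g = Genotype(n_in=3, n_out=1, rng=rng)
g.add_node(rng)   # 3 links -> 2 remain in operon 0, plus 2 new = 4 total
g.add_link(rng)   # plus one zero-weight link = 5 total
```

Because each structural addition lives in its own operon, the genotype stays a well-defined set of substrings as the network grows, which is consistent with the encoding the abstract describes.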
References
Angeline, P.J., Saunders, G.M., Pollack, J.B.: An evolutionary algorithm that constructs recurrent neural networks. IEEE Trans. Neural Networks 5(1), 54–65 (1994)
Goldberg, D.E.: Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading (1989)
Gomez, F., Miikkulainen, R.: Solving non-Markovian control tasks with neuroevolution. In: Dean, T. (ed.) Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, pp. 1356–1361. Morgan Kaufmann, San Francisco (1999)
Gomez, F., Miikkulainen, R.: Learning robust nonlinear control with neuroevolution. Technical Report AI02-292, Department of Computer Science, University of Texas at Austin, Austin, Texas (2002)
Gruau, F., Whitley, D., Pyeatt, L.: A comparison between cellular encoding and direct encoding for genetic neural networks. In: Koza, J.R., et al. (eds.) Genetic Programming 1996: Proceedings of the First Annual Conference, pp. 81–89 (1996)
Kaelbling, L.P., Littman, M., Moore, A.W.: Reinforcement learning: A survey. Journal of Artificial Intelligence Research 4, 237–285 (1996)
Nolfi, S., Floreano, D.: Evolutionary Robotics. MIT Press, Cambridge (2000)
Stanley, K.O., Miikkulainen, R.: Evolving neural networks through augmenting topologies. Evolutionary Computation 10(2), 99–127 (2002)
Stanley, K.O., Miikkulainen, R.: Competitive coevolution through evolutionary complexification. Journal of Artificial Intelligence Research 21, 63–100 (2004)
Stanley, K.O.: http://www.cs.ucf.edu/~kstanley/
Yao, X.: Evolving artificial neural networks. Proceedings of the IEEE 87(9), 1423–1447 (1999)
Yao, X., Liu, Y.: A new evolutionary system for evolving artificial neural networks. IEEE Transactions on Neural Networks 8(3), 694–713 (1997)
© 2007 Springer-Verlag Berlin Heidelberg
Cite this paper
Ohkura, K., Yasuda, T., Kawamatsu, Y., Matsumura, Y., Ueda, K. (2007). MBEANN: Mutation-Based Evolving Artificial Neural Networks. In: Almeida e Costa, F., Rocha, L.M., Costa, E., Harvey, I., Coutinho, A. (eds) Advances in Artificial Life. ECAL 2007. Lecture Notes in Computer Science, vol 4648. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74913-4_94
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-74912-7
Online ISBN: 978-3-540-74913-4