Abstract
This paper describes and tests an approach to improve the temporal processing capabilities of the neuroevolution of augmenting topologies (NEAT) algorithm. NEAT is quite popular within the robotics community for producing trained neural networks without having to determine their size and topology a priori. The main drawback of the traditional NEAT algorithm is that, even though it can implement recurrent synaptic connections, which allow it to perform some time-related processing tasks, its capabilities are rather limited, especially when dealing with precise time-dependent phenomena. In particular, NEAT's ability to capture the underlying dynamics of complex time series still has considerable room for improvement. To address this issue, the paper describes a new implementation of the NEAT algorithm that can generate artificial neural networks (ANNs) with trainable time-delayed synapses in addition to its previous capabilities. We show that this approach, called \(\uptau \)-NEAT, improves the behavior of the resulting neural networks when dealing with complex time-related processes. Several examples are presented, covering the generation of ANNs that reproduce complex theoretical signals, such as chaotic series, as well as real data series, such as the monthly number of international airline passengers or monthly \(\hbox {CO}_{2}\) concentrations. In all of these examples, \(\uptau \)-NEAT clearly improves over the traditional NEAT algorithm. A final example of the integration of this approach within a robot cognitive mechanism is also presented, showing the clear improvements it could provide in the modeling required by many cognitive processes.
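To make the core idea concrete, the following minimal sketch (in Python, with illustrative names; this is an assumption for exposition, not the authors' implementation) shows one way a synaptic connection with an evolvable delay could be evaluated: the synapse delivers the pre-synaptic neuron's output from a given number of past time steps, scaled by its weight.

    # Sketch of a delayed synapse: illustrative only, names are hypothetical.
    from collections import deque

    class DelayedSynapse:
        def __init__(self, weight, delay):
            self.weight = weight            # evolvable connection weight
            self.delay = delay              # evolvable delay, in time steps
            # history of pre-synaptic outputs; the oldest entry is the delayed value
            self.buffer = deque([0.0] * delay, maxlen=max(delay, 1))

        def output(self, pre_activation):
            # a zero delay behaves like a standard weighted connection
            if self.delay == 0:
                return self.weight * pre_activation
            delayed = self.buffer[0]        # pre-synaptic output from `delay` steps ago
            self.buffer.append(pre_activation)
            return self.weight * delayed

Under this reading, evolution would adjust both the weight and the delay of each connection, so the network can align inputs arriving at different points in time without relying solely on recurrent loops.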
Cite this article
Caamaño, P., Salgado, R., Bellas, F. et al. Introducing Synaptic Delays in the NEAT Algorithm to Improve Modelling in Cognitive Robotics. Neural Process Lett 43, 479–504 (2016). https://doi.org/10.1007/s11063-015-9426-5