Evolutionary Intelligence, Volume 1, Issue 4, pp 233–251

Evolution of internal dynamics for neural network nodes

  • David Montana
  • Eric VanWyk
  • Marshall Brinn
  • Joshua Montana
  • Stephen Milligan
Research Paper

Abstract

Most artificial neural networks have nodes that apply a simple static transfer function, such as a sigmoid or Gaussian, to their accumulated inputs. This contrasts with biological neurons, whose transfer functions are dynamic and driven by a rich internal structure. Our artificial neural network approach, which we call state-enhanced neural networks, uses nodes with dynamic transfer functions based on an n-dimensional real-valued internal state. This internal state provides the nodes with a memory of past inputs and computations. The state update rules, which determine the internal dynamics of a node, are optimized by an evolutionary algorithm to fit a particular task and environment. We demonstrate the effectiveness of the approach, in comparison to certain types of recurrent neural networks, on a suite of partially observable Markov decision processes used as test problems. These problems involve both sequence detection and simulated mice in mazes, and include four advanced benchmarks proposed by other researchers.
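The abstract describes nodes whose transfer function depends on an evolving internal state rather than on the current input alone. As a rough illustration only, the Python sketch below shows one possible parameterization of such a node; the class name, weight layout, and tanh update rule are assumptions made here for clarity, since the paper's actual (evolved) state update rules are not specified in the abstract.

```python
import numpy as np

class StatefulNode:
    """Illustrative node with an n-dimensional real-valued internal state.

    The update rule below is a hypothetical parameterization; in the
    paper, the update rules themselves are optimized by an
    evolutionary algorithm for a particular task and environment.
    """

    def __init__(self, n_state: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.state = np.zeros(n_state)  # memory of past inputs and computations
        # Parameters an evolutionary algorithm could tune:
        self.w_in = rng.normal(scale=0.1, size=n_state)               # input -> state
        self.W_rec = rng.normal(scale=0.1, size=(n_state, n_state))   # state -> state
        self.w_out = rng.normal(scale=0.1, size=n_state)              # state -> output

    def step(self, x: float) -> float:
        """Update the internal state from input x, then emit an output."""
        self.state = np.tanh(self.W_rec @ self.state + self.w_in * x)
        # Dynamic transfer function: the same input can produce different
        # outputs depending on the accumulated internal state.
        return float(np.tanh(self.w_out @ self.state))


# Example: the node's response to a constant input drifts as its state evolves.
node = StatefulNode(n_state=4)
outputs = [node.step(1.0) for _ in range(5)]
```

In this sketch, an evolutionary algorithm would search over the weight matrices (or over the functional form of `step` itself) rather than applying gradient-based training, which is one way the state update rules could be fit to a task.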

Keywords

State-enhanced neural networks · Neuroevolution · POMDP · Dynamic neuron model

Copyright information

© Springer-Verlag 2009

Authors and Affiliations

  • David Montana (1)
  • Eric VanWyk (1)
  • Marshall Brinn (1)
  • Joshua Montana (1)
  • Stephen Milligan (1)

  1. BBN Technologies, Cambridge, USA