Evolutionary Intelligence

Volume 1, Issue 4, pp 233–251

Evolution of internal dynamics for neural network nodes

  • David Montana
  • Eric VanWyk
  • Marshall Brinn
  • Joshua Montana
  • Stephen Milligan
Research Paper

DOI: 10.1007/s12065-009-0017-0

Cite this article as:
Montana, D., VanWyk, E., Brinn, M. et al. Evol. Intel. (2009) 1: 233. doi:10.1007/s12065-009-0017-0

Abstract

Most artificial neural networks have nodes that apply a simple static transfer function, such as a sigmoid or Gaussian, to their accumulated inputs. This contrasts with biological neurons, whose transfer functions are dynamic and driven by a rich internal structure. Our artificial neural network approach, which we call state-enhanced neural networks, uses nodes with dynamic transfer functions based on n-dimensional real-valued internal state. This internal state provides the nodes with memory of past inputs and computations. The state update rules, which determine the internal dynamics of a node, are optimized by an evolutionary algorithm to fit a particular task and environment. We demonstrate the effectiveness of the approach in comparison to certain types of recurrent neural networks using a suite of partially observable Markov decision processes as test problems. These problems involve both sequence detection and simulated mice in mazes, and include four advanced benchmarks proposed by other researchers.
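The node model described above (an n-dimensional real-valued internal state that is updated from past inputs, with update-rule parameters tuned by an evolutionary algorithm) can be illustrated with a minimal sketch. The Python code below is a hypothetical illustration, not the paper's implementation: the StateEnhancedNode class, the tanh-based linear update rule, and the Gaussian mutate operator are assumptions chosen for concreteness, since the abstract does not specify the form of the evolved update rules.

    import numpy as np

    class StateEnhancedNode:
        """Hypothetical node with an n-dimensional real-valued internal state.
        The specific update rule (linear map plus tanh) is an assumption for
        illustration; the paper evolves the update rules, whose exact form is
        not given in the abstract."""

        def __init__(self, state_dim, rng):
            self.state_dim = state_dim
            self.state = np.zeros(state_dim)
            # Parameters of the state update rule and output mapping; these are
            # the quantities an evolutionary algorithm would optimize.
            self.W_state = rng.normal(0.0, 0.1, (state_dim, state_dim))
            self.w_input = rng.normal(0.0, 0.1, state_dim)
            self.w_out = rng.normal(0.0, 0.1, state_dim)

        def step(self, accumulated_input):
            # Update the internal state from the previous state and the newly
            # accumulated input, giving the node memory of past inputs.
            self.state = np.tanh(self.W_state @ self.state
                                 + self.w_input * accumulated_input)
            # Dynamic transfer function: the output depends on the evolving state.
            return float(np.tanh(self.w_out @ self.state))

    def mutate(parent, sigma, rng):
        # One possible evolutionary operator: Gaussian perturbation of the
        # update-rule parameters.
        child = StateEnhancedNode(parent.state_dim, rng)
        child.W_state = parent.W_state + rng.normal(0.0, sigma, parent.W_state.shape)
        child.w_input = parent.w_input + rng.normal(0.0, sigma, parent.w_input.shape)
        child.w_out = parent.w_out + rng.normal(0.0, sigma, parent.w_out.shape)
        return child

    # Example: a constant input sequence can produce changing outputs because
    # the node remembers its history through the internal state.
    rng = np.random.default_rng(0)
    node = StateEnhancedNode(state_dim=3, rng=rng)
    outputs = [node.step(x) for x in [1.0, 1.0, 1.0]]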

Keywords

State-enhanced neural networks · Neuroevolution · POMDP · Dynamic neuron model

Copyright information

© Springer-Verlag 2009

Authors and Affiliations

  • David Montana (1)
  • Eric VanWyk (1)
  • Marshall Brinn (1)
  • Joshua Montana (1)
  • Stephen Milligan (1)

  1. BBN Technologies, Cambridge, USA