Introducing Long Term Memory in an ANN based Multilevel Darwinist Brain

  • F. Bellas
  • R. J. Duro
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2686)

Abstract

This paper deals with the introduction of long term memory into a Multilevel Darwinist Brain (MDB) structure based on Artificial Neural Networks, and with its implications for the capability of autonomous robots to adapt to new environments and to recognize previously explored ones. The introduction of long term memory greatly enhances the ability of organisms that implement the MDB to deal with changing environments and, at the same time, to recover from failures and changes in configuration. The paper describes the mechanism, introduces the long term memory within it, and provides examples of its operation both on theoretical problems and on a real robot whose perceptual and actuation mechanisms are changed periodically.
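The idea sketched in the abstract — keeping the best internal models found for previously explored environments so the robot can recognize an old environment and reuse its model rather than relearn it — can be illustrated with a toy sketch. This is not the paper's implementation (the MDB evolves Artificial Neural Network models; here the models, the `LongTermMemory` class, and the mean-square-error recall criterion are illustrative assumptions):

```python
class LongTermMemory:
    """Toy sketch of a long term memory of world models.

    Assumption: each stored model is a callable mapping a
    (perception, action) pair to a predicted outcome. The real MDB
    stores evolved ANN models; plain callables stand in for them here.
    """

    def __init__(self):
        self.models = []  # list of (label, model) pairs

    def store(self, label, model):
        """Remember the best model found for an environment."""
        self.models.append((label, model))

    def recall(self, samples):
        """Return the stored (label, model) pair whose predictions best
        match recent experience, i.e. lowest mean square error over
        samples given as (perception, action, observed_outcome) triples.
        """
        def mse(model):
            errors = [(model(p, a) - o) ** 2 for p, a, o in samples]
            return sum(errors) / len(errors)

        return min(self.models, key=lambda lm: mse(lm[1]), default=None)
```

Under this sketch, when the environment changes back to one seen before, a few fresh interaction samples suffice to select the matching stored model by its prediction error, which is the adaptation-and-recognition behavior the abstract attributes to the long term memory.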




Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • F. Bellas
  • R. J. Duro

  1. Grupo de Sistemas Autónomos, Universidade da Coruña, Coruña
