Efficient Learning of Neural Networks with Evolutionary Algorithms

  • Nils T. Siebel
  • Jochen Krause
  • Gerald Sommer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4713)

Abstract

In this article we present EANT, a method that creates neural networks (NNs) by evolutionary reinforcement learning. The structure of the NNs is developed using mutation operators, starting from a minimal structure, and their parameters are optimised using CMA-ES. EANT can create NNs that are highly specialised; they achieve very good performance while remaining relatively small. This is demonstrated in experiments where our method competes with an alternative method, NEAT, to create networks that control a robot in a visual servoing scenario.
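The abstract describes a two-level scheme: structural mutations grow a network from a minimal topology, while CMA-ES optimises the parameters of each candidate structure. The following is a minimal, hypothetical sketch of that outer loop, assuming a toy weight representation and fitness function; a simple (1+1) evolution strategy stands in for CMA-ES, and all names (Network, mutate_structure, optimise_parameters) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an EANT-style nested loop: structural mutations grow
# a minimal network, while an evolution strategy tunes its weights.
# The paper uses CMA-ES for the parameter step; a (1+1)-ES stands in here.
import random

class Network:
    """Minimal feedforward net: direct input-to-output connections only."""
    def __init__(self, n_in, n_out):
        self.connections = [(i, o) for i in range(n_in) for o in range(n_out)]
        self.weights = [0.0] * len(self.connections)

    def mutate_structure(self):
        """Structural mutation: duplicate a connection as a stand-in for
        adding a hidden neuron or a new link."""
        c = random.choice(self.connections)
        self.connections.append(c)
        self.weights.append(random.gauss(0.0, 0.1))

def optimise_parameters(net, fitness, steps=200, sigma=0.1):
    """(1+1)-ES on the weight vector (the original method uses CMA-ES)."""
    best = fitness(net.weights)
    for _ in range(steps):
        trial = [w + random.gauss(0.0, sigma) for w in net.weights]
        f = fitness(trial)
        if f > best:
            net.weights, best = trial, f
    return best

def eant(fitness, n_in, n_out, generations=10):
    net = Network(n_in, n_out)            # start from a minimal structure
    best = optimise_parameters(net, fitness)
    for _ in range(generations):
        net.mutate_structure()            # grow the topology
        best = optimise_parameters(net, fitness)
    return net, best

# Toy fitness: prefer weights close to 1 (placeholder for a servoing reward).
net, score = eant(lambda w: -sum((x - 1.0) ** 2 for x in w), n_in=2, n_out=1)
```

In the actual method, the fitness would be the (negated) image error accumulated while the network controls the robot, and the parameter step would run CMA-ES over all weights of the current structure.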

Keywords

Evolutionary Algorithm · Strategy Parameter · Robot Movement · Image Error · Structural Mutation

References

  1. Hornik, K., Stinchcombe, M.B., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2, 359–366 (1989)
  2. Bellman, R.E.: Adaptive Control Processes. Princeton University Press, Princeton, USA (1961)
  3. Rojas, R.: Neural Networks - A Systematic Introduction. Springer, Berlin, Germany (1996)
  4. Kassahun, Y., Sommer, G.: Efficient reinforcement learning through evolutionary acquisition of neural topologies. In: Proceedings of the 13th European Symposium on Artificial Neural Networks (ESANN 2005), Bruges, Belgium, pp. 259–266 (2005)
  5. Siebel, N.T., Kassahun, Y.: Learning neural networks for visual servoing using evolutionary methods. In: Proceedings of the 6th International Conference on Hybrid Intelligent Systems (HIS 2006), Auckland, New Zealand, 6 (4 pages) (2006)
  6. Eiben, Á.E., Smith, J.E.: Introduction to Evolutionary Computing. Springer, Berlin, Germany (2003)
  7. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge, USA (1998)
  8. Yao, X.: Evolving artificial neural networks. Proceedings of the IEEE 87(9), 1423–1447 (1999)
  9. Yao, X., Liu, Y.: A new evolutionary system for evolving artificial neural networks. IEEE Transactions on Neural Networks 8(3), 694–713 (1997)
  10. Angeline, P.J., Saunders, G.M., Pollack, J.B.: An evolutionary algorithm that constructs recurrent neural networks. IEEE Transactions on Neural Networks 5, 54–65 (1994)
  11. Stanley, K.O., Miikkulainen, R.: Evolving neural networks through augmenting topologies. Evolutionary Computation 10(2), 99–127 (2002)
  12. Kirkpatrick, S., Gelatt, C.D., Vecchi, M.P.: Optimization by simulated annealing. Science 220(4598), 671–680 (1983)
  13. Hansen, N., Ostermeier, A.: Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation 9(2), 159–195 (2001)
  14. Weiss, L.E., Sanderson, A.C., Neuman, C.P.: Dynamic sensor-based control of robots with visual feedback. IEEE Journal of Robotics and Automation 3(5), 404–417 (1987)
  15. Hutchinson, S., Hager, G., Corke, P.: A tutorial on visual servo control. Tutorial notes, Yale University, New Haven, USA (1996)

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Nils T. Siebel (1)
  • Jochen Krause (1)
  • Gerald Sommer (1)

  1. Cognitive Systems Group, Institute of Computer Science, Christian-Albrechts-University of Kiel, Olshausenstr. 40, 24098 Kiel, Germany