EvoWorkshops 2003: Applications of Evolutionary Computing, pp. 638–650

Evolving Symbolic Controllers

  • Nicolas Godzik
  • Marc Schoenauer
  • Michèle Sebag
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2611)

Abstract

The idea of symbolic controllers tries to bridge the gap between the top-down manual design of the controller architecture, as advocated in Brooks’ subsumption architecture, and the bottom-up designer-free approach that is now standard within the Evolutionary Robotics community. The designer provides a set of elementary behaviors, and evolution is given the goal of assembling them to solve complex tasks. Two experiments are presented, demonstrating the efficiency and recursiveness of this approach. In particular, the sensitivity with respect to the proposed elementary behaviors, and the robustness w.r.t. generalization of the resulting controllers, are studied in detail.
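The approach described above can be sketched minimally: the designer hand-writes a few elementary behaviors, and an evolved individual is the arbitrator that decides, from the current sensor readings, which behavior to execute. The sketch below is purely illustrative; all names, the two-sensor robot model, and the linear arbitrator are assumptions, not the paper's actual controller representation.

```python
import random

# Designer-provided elementary behaviors (hypothetical examples).
# Each maps a sensor vector to (left wheel, right wheel) speeds.

def forward(sensors):
    """Elementary behavior: go straight ahead."""
    return (1.0, 1.0)

def avoid_obstacle(sensors):
    """Elementary behavior: turn away from the closer obstacle."""
    return (1.0, -1.0) if sensors[0] > sensors[1] else (-1.0, 1.0)

BEHAVIORS = [forward, avoid_obstacle]

def make_controller(weights):
    """An evolved individual: one weight vector per elementary behavior.
    The arbitrator scores each behavior linearly from the sensors and
    runs the highest-scoring one."""
    def controller(sensors):
        scores = [sum(w * s for w, s in zip(ws, sensors)) for ws in weights]
        best = max(range(len(BEHAVIORS)), key=lambda i: scores[i])
        return BEHAVIORS[best](sensors)
    return controller

def random_individual(n_sensors=2):
    """A random genotype, as an evolutionary algorithm would initialize it."""
    return [[random.uniform(-1.0, 1.0) for _ in range(n_sensors)]
            for _ in BEHAVIORS]

ctrl = make_controller(random_individual())
print(ctrl([0.9, 0.1]))  # wheel command chosen by the evolved arbitrator
```

An evolutionary algorithm would then evaluate each individual by running the resulting controller on the task and use the fitness to select and vary the weight vectors; the recursiveness noted in the abstract corresponds to reusing an evolved controller as a new elementary behavior.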

Keywords

Reinforcement Learning · Hidden Neuron · Obstacle Avoidance · Recharge Area · Basic Behavior


References

  1. P. J. Bentley, editor. Evolutionary Design by Computers. Morgan Kaufmann Publishers Inc., 1999.
  2. R. A. Brooks. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, 2(1):14–23, 1986.
  3. R. A. Brooks. How to build complete creatures rather than isolated cognitive simulators. In Kurt VanLehn, editor, Architectures for Intelligence: The 22nd Carnegie Mellon Symp. on Cognition, pages 225–239. Lawrence Erlbaum Associates, 1991.
  4. R. Dawkins. The Blind Watchmaker. W. W. Norton and Company, 1988.
  5. D. Floreano and F. Mondada. Evolution of homing navigation in a real mobile robot. IEEE Transactions on Systems, Man, and Cybernetics, 26:396–407, 1994.
  6. I. Harvey. The Artificial Evolution of Adaptive Behaviour. PhD thesis, University of Sussex, 1993.
  7. M. Humphrys. Action Selection Methods Using Reinforcement Learning. PhD thesis, University of Cambridge, 1997.
  8. L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996.
  9. M. Keijzer, J. J. Merelo, G. Romero, and M. Schoenauer. Evolving Objects: a general purpose evolutionary computation library. In P. Collet et al., editors, Artificial Evolution’01, pages 229–241. Springer Verlag, LNCS 2310, 2002.
  10. P. Maes. The dynamics of action selection. In Proceedings of the 11th International Joint Conference on Artificial Intelligence, 1989.
  11. J. del R. Millán, D. Posenato, and E. Dedieu. Continuous-action Q-learning. Machine Learning, 49(2–3):247–265, 2002.
  12. S. Nolfi and D. Floreano. How co-evolution can enhance the adaptive power of artificial evolution: implications for evolutionary robotics. In P. Husbands and J.-A. Meyer, editors, Proceedings of EvoRobot98, pages 22–38. Springer Verlag, 1998.
  13. S. Nolfi and D. Floreano. Evolutionary Robotics. MIT Press, 2000.
  14. H.-P. Schwefel. Numerical Optimization of Computer Models. John Wiley & Sons, New York, 1981; 2nd edition, 1995.
  15. R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
  16. E. Tuci, I. Harvey, and M. Quinn. Evolving integrated controllers for autonomous learning robots using dynamic neural networks. In B. Hallam et al., editors, Proc. of SAB’02. MIT Press, 2002.
  17. V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
  18. B. M. Yamauchi and R. D. Beer. Integrating reactive, sequential, and learning behavior using dynamical neural networks. In D. Cliff et al., editors, Proc. SAB’94. MIT Press, 1994.

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Nicolas Godzik (1)
  • Marc Schoenauer (1)
  • Michèle Sebag (2)

  1. Projet Fractales, INRIA Rocquencourt, France
  2. LRI, Université Paris-Sud, France
