
Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 3562)

Abstract

In this paper we present a strategy for inducing a behavior in a real agent through a learning process with a human teacher. Rather than just learning the task itself, the agent builds internal models by extracting information from the consequences of the actions it must carry out. The mechanism that enables this background learning process is the Multilevel Darwinist Brain, a cognitive mechanism that allows an autonomous agent to decide which actions to apply in its environment in order to fulfill its motivations. It is a reinforcement-based mechanism that uses evolutionary techniques to learn the models online.
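
As a loose illustrative sketch only (this is not the authors' implementation; the function names, the linear model representation, the Gaussian mutation and the toy environment below are all assumptions), an MDB-style cycle could evolve a population of internal models from the observed (state, action, consequence) samples and then select the action whose predicted consequence best satisfies the current motivation:

```python
import random

# Illustrative MDB-style sketch (assumed details, not the paper's implementation):
# internal models are evolved online from (state, action, consequence) samples,
# and actions are chosen by how well their predicted consequence satisfies
# the agent's current motivation.

def predict(model, state, action):
    """Hypothetical world model: a linear predictor over the state plus the action."""
    return sum(w * x for w, x in zip(model, state + [action]))

def model_error(model, samples):
    """Fitness for evolution: squared prediction error on the consequences seen so far."""
    return sum((predict(model, s, a) - c) ** 2 for s, a, c in samples)

def evolve_models(population, samples, generations=20):
    """Toy evolutionary step: keep the better half of the models, refill by mutation."""
    for _ in range(generations):
        population.sort(key=lambda m: model_error(m, samples))
        survivors = population[: len(population) // 2]
        population = survivors + [
            [w + random.gauss(0.0, 0.1) for w in random.choice(survivors)]
            for _ in range(len(population) - len(survivors))
        ]
    return population

def choose_action(model, state, candidate_actions, motivation):
    """Pick the action whose predicted consequence best fulfils the motivation."""
    return max(candidate_actions, key=lambda a: motivation(predict(model, state, a)))

# One perception-action loop with made-up dimensions and a stand-in environment.
if __name__ == "__main__":
    state_dim, samples = 3, []
    models = [[random.uniform(-1, 1) for _ in range(state_dim + 1)] for _ in range(20)]
    state = [0.2, -0.1, 0.5]
    for step in range(10):
        if samples:
            models = evolve_models(models, samples)
        best = min(models, key=lambda m: model_error(m, samples)) if samples else models[0]
        action = choose_action(best, state, [-1.0, 0.0, 1.0], motivation=lambda c: -abs(c))
        consequence = sum(state) + 0.5 * action + random.gauss(0.0, 0.05)  # stand-in environment
        samples.append((state, action, consequence))
```

The point of the sketch, in this reading of the abstract, is the separation of concerns: model fitness is driven purely by prediction error on past consequences, while action selection is driven by the motivation, so the background learning of models stays decoupled from the particular task being taught.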

Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bellas, F., Becerra, J.A., Duro, R.J. (2005). Induced Behavior in a Real Agent Using the Multilevel Darwinist Brain. In: Mira, J., Álvarez, J.R. (eds) Artificial Intelligence and Knowledge Engineering Applications: A Bioinspired Approach. IWINAC 2005. Lecture Notes in Computer Science, vol 3562. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11499305_44

  • DOI: https://doi.org/10.1007/11499305_44

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-26319-7

  • Online ISBN: 978-3-540-31673-2

  • eBook Packages: Computer Science, Computer Science (R0)
