Learning routines

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1037)

Abstract

Routine interactions in the world are a major source of learning for an autonomous agent. In my approach, an agent interacts with the world in several different modes, ranging from cognitive to automatic. I show that an agent can learn, and also improve, its routine interactions in each of these modes.

I present a formalism for, and a use of, a goal structure known as a goal sketch [11]. Rewards and punishments generated from a goal sketch, which indicate progress toward goal satisfaction, are used to improve automatic interactions and to enhance the agent's strategies and concepts about action. I discuss my experiments with a physical robot that uses a goal sketch to generate rewards and punishments, which are then used to improve the robot's skills and to discover new actions.
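
The paper's formalism is not reproduced on this page, but the mechanism the abstract describes can be pictured with a minimal sketch. Everything below is an assumption chosen only for illustration: the `GoalSketch` class, the milestone predicates, and the unit reward values are hypothetical, and stand in for the general idea that a goal sketch orders milestones toward goal satisfaction, with movement along that ordering yielding the reward and punishment signals a skill learner consumes.

```python
# Hypothetical illustration of a goal sketch as an ordered list of
# milestone predicates; this is NOT the paper's actual formalism.

class GoalSketch:
    """Orders milestones toward a goal and scores progress between states."""

    def __init__(self, milestones):
        # milestones: predicates (state -> bool), ordered from the start
        # of the task up to goal satisfaction.
        self.milestones = milestones

    def progress(self, state):
        """Index of the furthest milestone the state satisfies (0 = none)."""
        level = 0
        for i, satisfied in enumerate(self.milestones, start=1):
            if satisfied(state):
                level = i
        return level

    def reward(self, prev_state, state):
        """Reward for advancing a milestone, punishment for regressing."""
        delta = self.progress(state) - self.progress(prev_state)
        if delta > 0:
            return 1.0    # progress toward goal satisfaction
        if delta < 0:
            return -1.0   # regress: a punishment signal
        return 0.0        # no change in progress


# Example: a hypothetical pick-up task with two milestones.
sketch = GoalSketch([
    lambda s: s["near_object"],  # milestone 1: the robot reaches the object
    lambda s: s["holding"],      # milestone 2: the object is grasped (goal)
])
before = {"near_object": False, "holding": False}
after = {"near_object": True, "holding": False}
print(sketch.reward(before, after))  # 1.0: one milestone gained
```

In this reading, a punishment is simply a negative reward for falling back past a previously reached milestone, so the same sketch can both reinforce skills and flag failed variations of an action.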

This work was supported in part by Equipment Grant No. EDUD-US-932022 from Sun Microsystems Computer Corporation and in part by NASA under contract NAS 9-19004.

References

  1. Henry Hexmoor. What are routines good for? In AAAI Fall Symposium, New Orleans, LA, 1994. Also available as SUNY at Buffalo CS Department TR 94-07.

  2. Henry Hexmoor, Guido Caicedo, Frank Bidwell, and Stuart Shapiro. Air battle simulation: An agent with conscious and unconscious layers. In University at Buffalo Graduate Conference in Computer Science 93 (TR 93-14). Department of Computer Science, SUNY at Buffalo, New York, 1993.

  3. Henry Hexmoor, Johan Lammens, Guido Caicedo, and Stuart C. Shapiro. Behavior Based AI, Cognitive Processes, and Emergent Behaviors in Autonomous Agents. In G. Rzevski, J. Pastor, and R. Adey, editors, Applications of AI in Engineering VIII, Vol.2, Applications and Techniques, pages 447–461. CMI/Elsevier, 1993.

  4. Henry Hexmoor, Johan Lammens, and Stuart Shapiro. An autonomous agent architecture for integrating “unconscious” and “conscious”, reasoned behaviors. In Computer Architectures for Machine Perception, 1993.

  5. Henry Hexmoor, Johan Lammens, and Stuart C. Shapiro. Embodiment in GLAIR: A Grounded Layered Architecture with Integrated Reasoning. In Florida AI Research Symposium, pages 325–329, 1993. Also available as SUNY at Buffalo CS Department TR 93-10.

  6. Henry Hexmoor and Stuart C. Shapiro. Building skillful agents. In Ken Ford, Robert Hoffman, and Paul Feltovich, editors, Human and Machine Expertise in Context, 1995. In press.

  7. Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning, and teaching. Machine Learning, 8:293–321, 1992.

  8. Tom Mitchell and Sebastian Thrun. Explanation-based neural network learning for robot control. In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems 5, pages 287–294. Morgan Kaufmann, 1993.

  9. Richard M. Shiffrin and Walter Schneider. Controlled and automatic human information processing: II. Perceptual learning, automatic attending, and a general theory. Psychological Review, 84(2):127–190, 1977.

  10. Robert Siegler and Kevin Crowley. Constraints on learning in nonprivileged domains. Cognitive Psychology, 27:194–226, 1994.

  11. Robert Siegler and Eric Jenkins. How Children Discover New Strategies. Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1989.

Editor information

Michael Wooldridge, Jörg P. Müller, Milind Tambe

Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hexmoor, H.H. (1996). Learning routines. In: Wooldridge, M., Müller, J.P., Tambe, M. (eds) Intelligent Agents II: Agent Theories, Architectures, and Languages. ATAL 1995. Lecture Notes in Computer Science, vol 1037. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3540608052_61

  • DOI: https://doi.org/10.1007/3540608052_61

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-60805-9

  • Online ISBN: 978-3-540-49594-9
