A Characterization of Sapient Agents

Abstract

This chapter proposes a characterization of sapient agents in terms of cognitive concepts and abilities. In particular, a sapient agent is regarded as a cognitive agent that learns its cognitive state and capabilities through experience. The characterization is based on formal concepts such as beliefs, goals, plans, and reasoning rules, and on formal techniques such as relational reinforcement learning. We identify several aspects of cognitive agents that can evolve through learning and indicate how each of them can be learned. Other important features, such as the social environment, interaction with other agents or humans, and the ability to deal with emotions, are also discussed. The chapter ends with directions for further research on sapient agents.
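
As a rough illustration of the kind of agent the abstract describes, the sketch below pairs a cognitive state made of beliefs, goals, plans, and reasoning rules with a plain tabular Q-learning update over belief states. This is a minimal sketch under assumptions of my own, not the formalization developed in the chapter: the class name CognitiveAgent, the deliberate/learn methods, the belief-set state representation, and the epsilon, alpha, and gamma parameters are all hypothetical, and tabular Q-learning stands in for the relational reinforcement learning techniques the chapter actually builds on.

```python
import random
from collections import defaultdict


class CognitiveAgent:
    """Hypothetical sketch only: a cognitive agent whose plan selection over a
    belief state is tuned by tabular Q-learning. Not the chapter's formalism."""

    def __init__(self, plans, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.beliefs = set()          # facts the agent currently holds
        self.goals = set()            # states of affairs it wants to achieve
        self.plans = list(plans)      # names of executable plans
        self.rules = []               # reasoning rules: (set_of_conditions, derived_fact)
        self.q = defaultdict(float)   # Q-values keyed by (frozenset_of_beliefs, plan)
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def deliberate(self):
        """Apply reasoning rules to extend the belief set, then choose a plan
        epsilon-greedily with respect to the learned Q-values."""
        for conditions, fact in self.rules:
            if conditions <= self.beliefs:
                self.beliefs.add(fact)
        state = frozenset(self.beliefs)
        if random.random() < self.epsilon:
            return random.choice(self.plans)
        return max(self.plans, key=lambda p: self.q[(state, p)])

    def learn(self, old_beliefs, plan, reward, new_beliefs):
        """Standard one-step Q-learning update over (belief state, plan) pairs."""
        s, s2 = frozenset(old_beliefs), frozenset(new_beliefs)
        best_next = max(self.q[(s2, p)] for p in self.plans)
        self.q[(s, plan)] += self.alpha * (reward + self.gamma * best_next - self.q[(s, plan)])
```

A driving loop would write observations of the environment into beliefs, call deliberate() to pick a plan, execute it, and feed the resulting reward back through learn(), so that the agent's plan-selection behaviour improves with experience in the sense sketched above.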

Copyright information

© 2008 Springer-Verlag London Limited

About this chapter

Cite this chapter

Otterlo, M.v., Dastani, M., Wiering, M., Meyer, JJ. (2008). A Characterization of Sapient Agents. In: Mayorga, R.V., Perlovsky, L.I. (eds) Toward Artificial Sapience. Springer, London. https://doi.org/10.1007/978-1-84628-999-6_9

  • DOI: https://doi.org/10.1007/978-1-84628-999-6_9

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-84628-998-9

  • Online ISBN: 978-1-84628-999-6

  • eBook Packages: Computer Science, Computer Science (R0)
