Autonomous Agents and Multi-Agent Systems, Volume 13, Issue 3, pp 327–354

Cobot in LambdaMOO: An Adaptive Social Statistics Agent

  • Charles Lee Isbell, Jr.
  • Michael Kearns
  • Satinder Singh
  • Christian R. Shelton
  • Peter Stone
  • Dave Kormann


We describe our development of Cobot, a novel software agent who lives in LambdaMOO, a popular virtual world frequented by hundreds of users. Cobot’s goal was to become an actual part of that community. Here, we present a detailed discussion of the functionality that made him one of the most frequently interacted-with objects in LambdaMOO, human or artificial. Cobot’s fundamental power is his ability to collect social statistics summarizing the quantity and quality of interpersonal interactions. Initially, Cobot acted as little more than a reporter of this information; as he collected more data, however, he was able to use these statistics as models with which to modify his own behavior. In particular, Cobot uses this data to “self-program,” learning the proper way to respond to the actions of individual users by observing how others interact with one another. Further, Cobot uses reinforcement learning to proactively take action in this complex social environment, and adapts his behavior based on multiple sources of human reward. Cobot represents a unique experiment in building adaptive agents that must live in and navigate social spaces.
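The core loop the abstract describes — tallying who-does-what-to-whom events and adapting to per-user feedback — can be illustrated with a minimal sketch. Everything below is hypothetical and not Cobot’s actual implementation: the class name, the action set, and the epsilon-greedy choice (standing in for the paper’s policy-gradient learner) are assumptions made purely for illustration.

```python
import random
from collections import defaultdict

class SocialStatsAgent:
    """Illustrative sketch only (not Cobot's code): tallies interpersonal
    interaction events and biases action choice by per-user average reward."""

    def __init__(self, actions):
        self.actions = actions
        # social statistics: counts[(actor, verb, target)] -> event tally
        self.counts = defaultdict(int)
        # running reward statistics, kept separately per (user, action)
        # so each user's feedback is a distinct reward source
        self.reward_sum = defaultdict(float)
        self.reward_n = defaultdict(int)

    def observe(self, actor, verb, target):
        """Record one interpersonal interaction event."""
        self.counts[(actor, verb, target)] += 1

    def give_reward(self, user, action, r):
        """Incorporate one user's reward signal (e.g. +1 for a hug, -1 for a kick)."""
        self.reward_sum[(user, action)] += r
        self.reward_n[(user, action)] += 1

    def choose_action(self, user, epsilon=0.1):
        """Epsilon-greedy choice using this user's average feedback per action."""
        if random.random() < epsilon:
            return random.choice(self.actions)
        def avg(a):
            n = self.reward_n[(user, a)]
            return self.reward_sum[(user, a)] / n if n else 0.0
        return max(self.actions, key=avg)
```

The separate per-user reward tallies mirror the abstract’s point about balancing multiple sources of human reward: the agent can weigh or combine them rather than pooling all feedback into one scalar.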


Keywords: Reinforcement learning · Social modeling · Chat agents · Autonomous agents · Game theory · Believable agents





Copyright information

© Springer Science+Business Media, LLC 2006

Authors and Affiliations

  • Charles Lee Isbell, Jr. (1)
  • Michael Kearns (2)
  • Satinder Singh (3)
  • Christian R. Shelton (4)
  • Peter Stone (5)
  • Dave Kormann (6)

  1. College of Computing, Georgia Institute of Technology, Atlanta, USA
  2. Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA
  3. Department of Computer Science and Engineering, University of Michigan, Ann Arbor, USA
  4. Computer Science and Engineering Department, University of California at Riverside, Riverside, USA
  5. Department of Computer Sciences, University of Texas at Austin, Austin, USA
  6. AT&T Shannon Labs, Florham Park, USA
