Electric Elves

Adjustable Autonomy in Real-World Multi-Agent Environments
  • David V. Pynadath
  • Milind Tambe
Part of the Multiagent Systems, Artificial Societies, and Simulated Organizations book series (MASA, volume 3)

Abstract

Through adjustable autonomy (AA), an agent can dynamically vary the degree to which it acts autonomously, allowing it to exploit human abilities to improve its performance, but without becoming overly dependent and intrusive. AA research is critical for successful deployment of agents to support important human activities. While most previous work has focused on individual agent-human interactions, this paper focuses on teams of agents operating in real-world human organizations, as well as the novel AA coordination challenge that arises when one agent’s inaction while waiting for a human response can lead to potential miscoordination. Our multi-agent AA framework, based on Markov decision processes, provides an adaptive model of users that reasons about the uncertainty, costs, and constraints of decisions. Our approach to AA has proven essential to the success of our deployed Electric Elves system that assists our research group in rescheduling meetings, choosing presenters, tracking people’s locations, and ordering meals.
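The abstract's core idea — an agent weighing the cost of acting autonomously against the risk of miscoordination while waiting for a human — can be illustrated with a toy transfer-of-control MDP. This is a minimal sketch of the general technique (value iteration over a two-state decision problem), not the authors' actual Electric Elves model; all states, actions, probabilities, and rewards below are illustrative assumptions.

```python
GAMMA = 0.95  # discount factor (assumed value)

# transitions[state][action] -> list of (probability, next_state, reward)
transitions = {
    "pending": {
        # "act": the agent decides autonomously; it resolves the issue now,
        # but risks a costly wrong decision (illustrative 70/30 split).
        "act": [(0.7, "resolved", 10.0), (0.3, "resolved", -20.0)],
        # "ask": defer to the human; usually yields a correct answer, but
        # sometimes the human is unavailable and the team stays
        # miscoordinated, paying a small per-step delay cost.
        "ask": [(0.6, "resolved", 10.0), (0.4, "pending", -1.0)],
    },
    "resolved": {"noop": [(1.0, "resolved", 0.0)]},
}

def value_iteration(transitions, gamma, tol=1e-6):
    """Standard value iteration; returns state values and a greedy policy."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, acts in transitions.items():
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in acts.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    policy = {
        s: max(acts, key=lambda a: sum(p * (r + gamma * V[s2])
                                       for p, s2, r in acts[a]))
        for s, acts in transitions.items()
    }
    return V, policy

V, policy = value_iteration(transitions, GAMMA)
print(policy["pending"], round(V["pending"], 2))  # → ask 9.03
```

With these particular numbers the optimal policy in the pending state is to ask the human: the expected value of deferring outweighs the 30% chance of a bad autonomous decision. Shrinking the human-response probability or raising the delay cost flips the policy to acting autonomously — the kind of cost/uncertainty trade-off the chapter's framework reasons about.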

Keywords

Markov Decision Process · Intelligent Agent · Reward Function · Meeting Location · Agent Team


References

  [1] Chalupsky, H., Gil, Y., Knoblock, C. A., Lerman, K., Oh, J., Pynadath, D. V., Russ, T. A., and Tambe, M. Electric Elves: Applying agent technology to support human organizations. In Proceedings of the Innovative Applications of Artificial Intelligence Conference, 2001.
  [2] Collins, J., Bilot, C., Gini, M., and Mobasher, B. Mixed-initiative decision support in agent-based automated contracting. In Proceedings of the International Conference on Autonomous Agents, 2000.
  [3] Dorais, G. A., Bonasso, R. P., Kortenkamp, D., Pell, B., and Schreckenghost, D. Adjustable autonomy for human-centered autonomous systems on Mars. In Proceedings of the International Conference of the Mars Society, 1998.
  [4] Ferguson, G., Allen, J., and Miller, B. TRAINS-95: Towards a mixed-initiative planning assistant. In Proceedings of the Conference on Artificial Intelligence Planning Systems, pp. 70–77.
  [5] Horvitz, E., Jacobs, A., and Hovel, D. Attention-sensitive alerting. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, pp. 305–313, 1999.
  [6] Lesser, V., Atighetchi, M., Benyo, B., Horling, B., Raja, A., Vincent, R., Wagner, T., Xuan, P., and Zhang, S. X. A multi-agent system for intelligent environment control. In Proceedings of the International Conference on Autonomous Agents, 1999.
  [7] Mitchell, T., Caruana, R., Freitag, D., McDermott, J., and Zabowski, D. Experience with a learning personal assistant. Communications of the ACM, 37(7):81–91, 1994.
  [8] Puterman, M. L. Markov Decision Processes. John Wiley & Sons, 1994.
  [9] Quinlan, J. R. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.
  [10] Scerri, P., Pynadath, D. V., and Tambe, M. Adjustable autonomy in real-world multi-agent environments. In Proceedings of the International Conference on Autonomous Agents, 2001.
  [11] Tambe, M., Pynadath, D. V., Chauvat, N., Das, A., and Kaminka, G. A. Adaptive agent integration architectures for heterogeneous team members. In Proceedings of the International Conference on Multi-Agent Systems, pp. 301–308, 2000.
  [12] Tollmar, K., Sandor, O., and Schomer, A. Supporting social awareness: @Work design and experience. In Proceedings of the ACM Conference on Computer-Supported Cooperative Work, pp. 298–307, 1996.

Copyright information

© Kluwer Academic Publishers 2002

Authors and Affiliations

  • David V. Pynadath (1)
  • Milind Tambe (1)
  1. University of Southern California, Information Sciences Institute, USA
