Toward Trustworthy Adjustable Autonomy in KAoS

  • Jeffrey M. Bradshaw
  • Hyuckchul Jung
  • Shri Kulkarni
  • Matthew Johnson
  • Paul Feltovich
  • James Allen
  • Larry Bunch
  • Nathanael Chambers
  • Lucian Galescu
  • Renia Jeffers
  • Niranjan Suri
  • William Taysom
  • Andrzej Uszok
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3577)


Trust is arguably the most crucial aspect of agent acceptability. At its simplest level, it can be characterized in terms of judgments that people make concerning three factors: an agent’s competence, its benevolence, and the degree to which it can be rapidly and reliably brought into compliance when things go wrong. Adjustable autonomy consists of the ability to dynamically impose and modify constraints that affect the range of actions that the human-agent team can successfully perform, consistently allowing the highest degrees of useful autonomy while maintaining an acceptable level of trust. Many aspects of adjustable autonomy can be addressed through policy. Policies are a means to dynamically regulate the behavior of system components without changing code or requiring the cooperation of the components being governed. By changing policies, a system can be adjusted to accommodate variations in externally imposed constraints and environmental conditions. In this paper we describe some important dimensions relating to autonomy and give examples of how these dimensions might be adjusted in order to enhance performance of human-agent teams. We introduce Kaa (KAoS adjustable autonomy) and provide a brief comparison with two other implementations of adjustable autonomy concepts.
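The abstract's central mechanism, regulating component behavior through externally imposed policies rather than code changes, can be sketched in miniature. The following is an illustrative toy, not the actual KAoS API: a small policy engine in which negative (deny) authorizations override positive (permit) ones, and policies can be added at runtime without the cooperation of the governed component. All names (`Policy`, `PolicyEngine`, the example actors) are invented for illustration.

```python
# Minimal sketch of policy-based authorization (hypothetical, not KAoS itself).
# Negative authorizations dominate positive ones; policies are changed at
# runtime without modifying the governed components.

from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    actor: str      # whom the policy governs; "*" matches any actor
    action: str     # the action being regulated
    permit: bool    # True = positive authorization, False = negative

class PolicyEngine:
    def __init__(self):
        self.policies = []

    def add(self, policy):
        # Policies are imposed dynamically; the governed component is untouched.
        self.policies.append(policy)

    def is_authorized(self, actor, action, default=False):
        matches = [p for p in self.policies
                   if p.actor in (actor, "*") and p.action == action]
        if not matches:
            return default
        # A single negative authorization overrides any number of positives.
        return all(p.permit for p in matches)

engine = PolicyEngine()
engine.add(Policy("*", "send_telemetry", permit=True))       # broad permit
engine.add(Policy("robot1", "send_telemetry", permit=False))  # targeted deny

print(engine.is_authorized("robot2", "send_telemetry"))  # True
print(engine.is_authorized("robot1", "send_telemetry"))  # False
```

Adjusting autonomy here amounts to adding or revoking `Policy` entries, which narrows or widens the range of actions available to each team member without redeploying any agent code.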


Keywords (machine-generated): Computational Autonomy · Influence Diagram · Authorization Policy · Negative Authorization · Unmanned System





Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

All authors are affiliated with the Institute for Human and Machine Cognition (IHMC), Pensacola, USA.
