Do You Get It? User-Evaluated Explainable BDI Agents

  • Joost Broekens
  • Maaike Harbers
  • Koen Hindriks
  • Karel van den Bosch
  • Catholijn Jonker
  • John-Jules Meyer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6251)

Abstract

In this paper we focus on explaining the behavior of autonomous agents to humans, i.e., on explainable agents. Explainable agents are useful for many purposes, including scenario-based training (e.g., disaster training), tutoring and pedagogical systems, agent development and debugging, gaming, and interactive storytelling. As the aim is to generate explanations that humans find plausible and insightful, user evaluation of different explanations is essential. In this paper we test the hypothesis that different explanation types are needed to explain different types of actions. We present three different, generically applicable algorithms that automatically generate different types of explanations for the actions of BDI-based agents. Quantitative analysis of a user experiment (n=30), in which users rated the usefulness and naturalness of each explanation type for different agent actions, supports our hypothesis. In addition, we present feedback from the users about how they would explain the actions themselves. Finally, we propose guidelines relevant to the development of explainable BDI agents.
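The abstract does not reproduce the three explanation algorithms themselves, but the underlying idea — deriving different explanation types for one action from the different mental concepts (goals, beliefs, intended follow-up actions) of a BDI agent — can be sketched as follows. This is an illustrative sketch only: the `Action` fields and the three explanation functions are hypothetical names chosen for this example, not the authors' actual algorithms.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    """A performed agent action together with the mental state behind it."""
    name: str                         # the action, e.g. "open the door"
    goal: str                         # the goal the action contributes to
    enabling_belief: str              # the belief that made the action applicable
    next_action: Optional[str] = None # a follow-up action this one enables

def explain_by_goal(a: Action) -> str:
    # Goal-based explanation: cite the desired end state.
    return f"I {a.name} because I want to {a.goal}."

def explain_by_belief(a: Action) -> str:
    # Belief-based explanation: cite the condition that triggered the action.
    return f"I {a.name} because {a.enabling_belief}."

def explain_by_next_action(a: Action) -> str:
    # Procedural explanation: cite the action this one enables;
    # fall back to the goal when there is no follow-up action.
    if a.next_action is None:
        return explain_by_goal(a)
    return f"I {a.name} so that I can {a.next_action}."
```

In a user evaluation like the one described, each of these generators would be applied to the same logged action, and participants would rate the resulting sentences for usefulness and naturalness.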

Keywords

Action Type · Agent Behavior · Explanation Type · Agent Program · Mental Concept

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Joost Broekens (1)
  • Maaike Harbers (2)
  • Koen Hindriks (1)
  • Karel van den Bosch (3)
  • Catholijn Jonker (1)
  • John-Jules Meyer (2)

  1. Delft University of Technology, The Netherlands
  2. Utrecht University, The Netherlands
  3. TNO Institute of Defence, Security and Safety, The Netherlands
