
Using Perceptual and Cognitive Explanations for Enhanced Human-Agent Team Performance

  • Mark A. Neerincx
  • Jasper van der Waa
  • Frank Kaptein
  • Jurriaan van Diggelen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10906)

Abstract

Most explainable AI (XAI) research projects focus on well-delineated topics, such as the interpretability of machine learning outcomes, knowledge sharing in a multi-agent system, or human trust in an agent's performance. For the development of explanations in human-agent teams, a more integrative approach is needed. This paper proposes a perceptual-cognitive explanation (PeCoX) framework for the development of explanations that address both the perceptual and cognitive foundations of an agent's behavior, distinguishing between explanation generation, communication, and reception. The framework is generic (i.e., the core is domain-agnostic and the perceptual layer is model-agnostic) and is being developed and tested in the domains of transport, health care, and defense. The perceptual level entails the provision of an Intuitive Confidence Measure and the identification of the "foil" in a contrastive explanation. The cognitive level entails the selection of the beliefs, goals, and emotions for explanations. Ontology Design Patterns are being constructed for the reasoning and communication, and Interaction Design Patterns for shaping the multimodal communication. First results show (1) positive effects on humans' understanding of the perceptual and cognitive foundations of an agent's behavior, and (2) the need to harmonize explanations with the context and the human's information-processing capabilities.
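To make the abstract's distinction between the perceptual and cognitive explanation levels concrete, the following Python sketch shows one possible way to represent a PeCoX-style explanation: a perceptual part holding the confidence measure and the contrastive fact/foil pair, and a cognitive part holding the selected beliefs, goals, and emotions. All class, field, and function names here are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical data structures mirroring the two PeCoX levels described in the
# abstract; names and fields are assumptions for illustration only.

@dataclass
class PerceptualExplanation:
    """What the agent perceived: the outcome, the contrastive foil, and an
    Intuitive Confidence Measure for the underlying (model-agnostic) prediction."""
    fact: str                # the outcome being explained
    foil: Optional[str]      # the contrastive alternative ("why A rather than B?")
    confidence: float        # confidence score in [0, 1]

@dataclass
class CognitiveExplanation:
    """Why the agent acted: the beliefs, goals, and emotions selected as the
    most relevant reasons for the behavior."""
    beliefs: List[str] = field(default_factory=list)
    goals: List[str] = field(default_factory=list)
    emotions: List[str] = field(default_factory=list)

@dataclass
class PeCoXExplanation:
    perceptual: PerceptualExplanation
    cognitive: CognitiveExplanation

    def to_message(self) -> str:
        """Render a simple natural-language explanation for communication."""
        parts = [f"I chose to {self.perceptual.fact}"]
        if self.perceptual.foil:
            parts.append(f"rather than {self.perceptual.foil}")
        parts.append(f"(confidence {self.perceptual.confidence:.0%})")
        if self.cognitive.goals:
            parts.append("because I want to " + ", ".join(self.cognitive.goals))
        if self.cognitive.beliefs:
            parts.append("and I believe " + ", ".join(self.cognitive.beliefs))
        return " ".join(parts) + "."

# Example usage; the health-care content is invented for illustration.
explanation = PeCoXExplanation(
    perceptual=PerceptualExplanation(
        fact="suggest a glucose check",
        foil="suggest more exercise",
        confidence=0.82,
    ),
    cognitive=CognitiveExplanation(
        beliefs=["your last reading was above the target range"],
        goals=["keep your glucose within the target range"],
    ),
)
print(explanation.to_message())
```

In such a design, the separation of the two dataclasses keeps the perceptual layer model-agnostic, while how the resulting message is shaped (modality, timing, level of detail) is left to the communication patterns the paper describes.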

Keywords

Explainable AI · Human-agent teamwork · Cognitive engineering · Ontologies · Design patterns

Acknowledgements

This research is supported by the European PAL project (Horizon 2020 grant no. 643783-RIA) and the TNO seed Early Research Program "Applied AI".

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Mark A. Neerincx (1, 2)
  • Jasper van der Waa (1)
  • Frank Kaptein (2)
  • Jurriaan van Diggelen (1)
  1. TNO, Soesterberg, Netherlands
  2. Delft University of Technology, Delft, Netherlands