Evaluating a Mechanism for Explaining BDI Agent Behaviour

  • Conference paper
  • Published in: Explainable and Transparent AI and Multi-Agent Systems (EXTRAAMAS 2023)

Abstract

Explainability of autonomous systems is important for supporting the development of appropriate levels of trust in the system, as well as for supporting system predictability. Previous work has proposed an explanation mechanism for Belief-Desire-Intention (BDI) agents that uses folk psychological concepts, specifically beliefs, desires, and valuings. In this paper we evaluate this mechanism by conducting a survey. We consider a number of explanations and assess the extent to which they are considered believable, acceptable, and comprehensible, as well as which explanations are preferred. We also consider the relationship between trust in the specific autonomous system and general trust in technology. We find that explanations that include valuings are particularly likely to be preferred by the study participants, whereas explanations that include links are least likely to be preferred. We also found evidence that single-factor explanations, as used in some previous work, are too short.

The authors were at the University of Otago, New Zealand, when most of the work was done.


Notes

  1. There is also empirical evidence that humans use these constructs to explain the behaviour of robots [13, 33].

  2. Terminology: we use “goal” and “desire” interchangeably.

  3. Defined by Malle as things that “directly indicate the positive or negative affect toward the action or its outcome”. Whereas values are generic (e.g. benevolence, security [30]), valuings are about a given action or outcome. Valuings can arise from values, but can also be directly specified without needing to be linked to higher-level values. In our work we represent valuings as preferences over options [38, Sect. 2].

  4. We also consider (Sect. 4.4) the question: “to what extent is trust in a given system determined by a person’s more general attitudes towards technology, and towards Artificial Intelligence?”

  5. The survey can be found at: https://www.dropbox.com/s/ec6fg3u1rqhytcb/Trust-Autonomous-Survey.pdf.

  6. Ethics approval was given by the University of Otago (Category B, D18/231).

  7. Where 1 was labelled “Strongly Disagree”, 7 was labelled “Strongly Agree”, and 2–6 were not labelled.

  8. We use a significance level of 0.005 rather than 0.05 to control the overall rate of type I errors, given the number of tests performed. The per-test confidence level is \(\sqrt[10]{0.95} = 0.9948838\), giving a threshold for significance of approximately \(1 - 0.9948838 \approx 0.005\) (a worked version of this calculation appears after these notes).

  9. Although for E1–E3 it is only at \(p=0.0273\).

  10. As before, we use a significance level of 0.005 rather than 0.05 to control the overall rate of type I errors, given the number of tests performed.

  11. For the AI group of questions, the analysis indicated that dropping the third question would improve the alpha from 0.69 to 0.79, which was done, meaning that we used a total of 10 questions. The dropped question was: “I think that current problems with use of AI (bias, breach of privacy, etc.) will be solved in the short term”.

  12. Explained as: “With a natural explanation we mean an explanation that sounds normal and is understandable, an explanation that you or other people could give.”

  13. Explained as: “Indicate how useful the explanations would be for you in learning how to make pancakes.”

  14. The tree of goals, beliefs, and actions.

  15. One child was excluded from the data analysis due to a data glitch.

  16. Since their virtual assistant was only providing advice, rather than performing a sequence of actions, it did not make sense to have link explanations.

  17. Specifically, their explanations corresponding in structure to our E2 (valuing and belief) and E3 (valuing) were most preferred.
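Worked calculation for notes 8 and 10: the per-test threshold follows from requiring an overall 0.95 confidence level across the ten tests performed (reading this as a Šidák-style multiple-comparisons correction is our gloss; the notes themselves give only the tenth-root calculation):

\[
\alpha_{\text{per test}} \;=\; 1 - (1 - 0.05)^{1/10} \;=\; 1 - \sqrt[10]{0.95} \;\approx\; 1 - 0.9948838 \;\approx\; 0.005 .
\]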

References

  1. Abdulrahman, A., Richards, D., Bilgin, A.A.: Reason explanation for encouraging behaviour change intention. In: Dignum, F., Lomuscio, A., Endriss, U., Nowé, A. (eds.) AAMAS 2021: 20th International Conference on Autonomous Agents and Multiagent Systems, Virtual Event, United Kingdom, 3–7 May 2021, pp. 68–77. ACM (2021). https://www.ifaamas.org/Proceedings/aamas2021/pdfs/p68.pdf

  2. Abdulrahman, A., Richards, D., Bilgin, A.A.: Exploring the influence of a user-specific explainable virtual advisor on health behaviour change intentions. Auton. Agents Multi Agent Syst. 36(1), 25 (2022). https://doi.org/10.1007/s10458-022-09553-x

  3. Allison, P.D., Christakis, N.A.: Logit models for sets of ranked items. Sociol. Methodol. 24, 199–228 (1994). https://www.jstor.org/stable/270983

  4. Anjomshoae, S., Najjar, A., Calvaresi, D., Främling, K.: Explainable agents and robots: results from a systematic literature review. In: Elkind, E., Veloso, M., Agmon, N., Taylor, M.E. (eds.) Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2019, Montreal, QC, Canada, 13–17 May 2019, pp. 1078–1088. International Foundation for Autonomous Agents and Multiagent Systems (2019). https://dl.acm.org/citation.cfm?id=3331806

  5. Bratman, M.E., Israel, D.J., Pollack, M.E.: Plans and resource-bounded practical reasoning. Comput. Intell. 4, 349–355 (1988)

  6. Bratman, M.E.: Intentions, Plans, and Practical Reason. Harvard University Press, Cambridge (1987)

  7. Broekens, J., Harbers, M., Hindriks, K., van den Bosch, K., Jonker, C., Meyer, J.-J.: Do you get it? user-evaluated explainable BDI agents. In: Dix, J., Witteveen, C. (eds.) MATES 2010. LNCS (LNAI), vol. 6251, pp. 28–39. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-16178-0_5

  8. Cranefield, S., Oren, N., Vasconcelos, W.W.: Accountability for practical reasoning agents. In: Lujak, M. (ed.) AT 2018. LNCS (LNAI), vol. 11327, pp. 33–48. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-17294-7_3

  9. Cranefield, S., Winikoff, M., Dignum, V., Dignum, F.: No pizza for you: value-based plan selection in BDI agents. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pp. 178–184 (2017). https://doi.org/10.24963/ijcai.2017/26

  10. Dennis, L.A., Oren, N.: Explaining BDI agent behaviour through dialogue. In: Dignum, F., Lomuscio, A., Endriss, U., Nowé, A. (eds.) AAMAS 2021: 20th International Conference on Autonomous Agents and Multiagent Systems, Virtual Event, United Kingdom, 3–7 May 2021, pp. 429–437. ACM (2021), https://www.ifaamas.org/Proceedings/aamas2021/pdfs/p429.pdf

  11. Dennis, L.A., Oren, N.: Explaining BDI agent behaviour through dialogue. Auton. Agents Multi Agent Syst. 36(1), 29 (2022). https://doi.org/10.1007/s10458-022-09556-8

  12. Floridi, L., et al.: AI4People–an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach. 28(4), 689–707 (2018). https://doi.org/10.1007/s11023-018-9482-5

  13. de Graaf, M.M.A., Malle, B.F.: People’s explanations of robot behavior subtly reveal mental state inferences. In: 14th ACM/IEEE International Conference on Human-Robot Interaction, HRI 2019, Daegu, South Korea, 11–14 March 2019, pp. 239–248. IEEE (2019). https://doi.org/10.1109/HRI.2019.8673308

  14. Harbers, M.: Explaining agent behavior in virtual training. SIKS dissertation series no. 2011–35, SIKS (Dutch Research School for Information and Knowledge Systems) (2011)

  15. Harbers, M., van den Bosch, K., Meyer, J.C.: Design and evaluation of explainable BDI agents. In: Huang, J.X., Ghorbani, A.A., Hacid, M., Yamaguchi, T. (eds.) Proceedings of the 2010 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, IAT 2010, Toronto, Canada, 31 August–3 September 2010, pp. 125–132. IEEE Computer Society Press (2010). https://doi.org/10.1109/WI-IAT.2010.115

  16. High-Level Expert Group on Artificial Intelligence: The assessment list for trustworthy artificial intelligence (2020). https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment

  17. Kaptein, F., Broekens, J., Hindriks, K.V., Neerincx, M.A.: Personalised self-explanation by robots: the role of goals versus beliefs in robot-action explanation for children and adults. In: 26th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2017, Lisbon, Portugal, 28 August–1 September 2017, pp. 676–682. IEEE (2017). https://doi.org/10.1109/ROMAN.2017.8172376

  18. Kaptein, F., Broekens, J., Hindriks, K.V., Neerincx, M.A.: The role of emotion in self-explanations by cognitive agents. In: Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos, ACII Workshops 2017, San Antonio, TX, USA, 23–26 October 2017, pp. 88–93. IEEE Computer Society (2017). https://doi.org/10.1109/ACIIW.2017.8272595

  19. Kaptein, F., Broekens, J., Hindriks, K.V., Neerincx, M.A.: Evaluating cognitive and affective intelligent agent explanations in a long-term health-support application for children with type 1 diabetes. In: 8th International Conference on Affective Computing and Intelligent Interaction, ACII 2019, Cambridge, United Kingdom, 3–6 September 2019, pp. 1–7. IEEE (2019). https://doi.org/10.1109/ACII.2019.8925526

  20. Langley, P., Meadows, B., Sridharan, M., Choi, D.: Explainable agency for intelligent autonomous systems. In: Singh, S., Markovitch, S. (eds.) Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, California, USA, 4–9 February 2017, pp. 4762–4764. AAAI Press (2017). https://aaai.org/ocs/index.php/IAAI/IAAI17/paper/view/15046

  21. Malle, B.F.: How the Mind Explains Behavior: Folk Explanations, Meaning, and Social Interaction. The MIT Press, Cambridge (2004). ISBN 0-262-13445-4

  22. Mcknight, D.H., Carter, M., Thatcher, J.B., Clay, P.F.: Trust in a specific technology: an investigation of its components and measures. ACM Trans. Manag. Inf. Syst. 2(2), 12:1–12:25 (2011). https://doi.org/10.1145/1985347.1985353

  23. Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019). https://doi.org/10.1016/j.artint.2018.07.007

  24. Mualla, Y., et al.: The quest of parsimonious XAI: a human-agent architecture for explanation formulation. Artif. Intell. 302, 103573 (2022). https://doi.org/10.1016/j.artint.2021.103573

  25. Müller, J.P., Fischer, K.: Application impact of multi-agent systems and technologies: a survey. In: Shehory, O., Sturm, A. (eds.) Agent-Oriented Software Engineering, pp. 27–53. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-54432-3_3

  26. Munroe, S., Miller, T., Belecheanu, R., Pechoucek, M., McBurney, P., Luck, M.: Crossing the agent technology chasm: experiences and challenges in commercial applications of agents. Knowl. Eng. Rev. 21(4), 345–392 (2006)

  27. Rao, A.S., Georgeff, M.P.: An abstract architecture for rational agents. In: Rich, C., Swartout, W., Nebel, B. (eds.) Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, pp. 439–449. Morgan Kaufmann Publishers, San Mateo (1992)

  28. van Riemsdijk, M.B., Jonker, C.M., Lesser, V.R.: Creating socially adaptive electronic partners: Interaction, reasoning and ethical challenges. In: Weiss, G., Yolum, P., Bordini, R.H., Elkind, E. (eds.) Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 1201–1206. ACM (2015). https://dl.acm.org/citation.cfm?id=2773303

  29. Robinette, P., Li, W., Allen, R., Howard, A.M., Wagner, A.R.: Overtrust of robots in emergency evacuation scenarios. In: Bartneck, C., Nagai, Y., Paiva, A., Sabanovic, S. (eds.) The Eleventh ACM/IEEE International Conference on Human Robot Interation, HRI 2016, Christchurch, New Zealand, 7–10 March 2016, pp. 101–108. IEEE/ACM (2016). https://doi.org/10.1109/HRI.2016.7451740

  30. Schwartz, S.: An overview of the Schwartz theory of basic values. Online Read. Psychol. Cult. 2(1), 11 (2012). https://doi.org/10.9707/2307-0919.1116

  31. Sklar, E.I., Azhar, M.Q.: Explanation through argumentation. In: Imai, M., Norman, T., Sklar, E., Komatsu, T. (eds.) Proceedings of the 6th International Conference on Human-Agent Interaction, HAI 2018, Southampton, United Kingdom, 15–18 December 2018, pp. 277–285. ACM (2018). https://doi.org/10.1145/3284432.3284470

  32. The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems: Ethically Aligned Design: A Vision For Prioritizing Wellbeing With Artificial Intelligence And Autonomous Systems, Version 1. IEEE (2016). https://standards.ieee.org/develop/indconn/ec/autonomous_systems.html

  33. Thellman, S., Silvervarg, A., Ziemke, T.: Folk-psychological interpretation of human vs. humanoid robot behavior: exploring the intentional stance toward robots. Front. Psychol. 8, 1–14 (2017). https://doi.org/10.3389/fpsyg.2017.01962

  34. Verhagen, R.S., Neerincx, M.A., Tielman, M.L.: A two-dimensional explanation framework to classify AI as incomprehensible, interpretable, or understandable. In: Calvaresi, D., Najjar, A., Winikoff, M., Främling, K. (eds.) EXTRAAMAS 2021. LNCS (LNAI), vol. 12688, pp. 119–138. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-82017-6_8

  35. Winikoff, M.: Debugging agent programs with “Why?” questions. In: Proceedings of the 16th Conference on Autonomous Agents and Multiagent Systems, pp. 251–259 (2017)

  36. Winikoff, M.: Towards trusting autonomous systems. In: El Fallah-Seghrouchni, A., Ricci, A., Son, T.C. (eds.) EMAS 2017. LNCS (LNAI), vol. 10738, pp. 3–20. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-91899-0_1

  37. Winikoff, M., Dignum, V., Dignum, F.: Why bad coffee? explaining agent plans with valuings. In: Gallina, B., Skavhaug, A., Schoitsch, E., Bitsch, F. (eds.) SAFECOMP 2018. LNCS, vol. 11094, pp. 521–534. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99229-7_47

  38. Winikoff, M., Sidorenko, G., Dignum, V., Dignum, F.: Why bad coffee? explaining BDI agent behaviour with valuings. Artif. Intell. 300, 103554 (2021). https://doi.org/10.1016/j.artint.2021.103554

Acknowledgements

We would like to thank Dr Damien Mather, at the University of Otago, for statistical advice. This work was supported by a University of Otago Research Grant (UORG).

Author information

Correspondence to Michael Winikoff.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Winikoff, M., Sidorenko, G. (2023). Evaluating a Mechanism for Explaining BDI Agent Behaviour. In: Calvaresi, D., et al. Explainable and Transparent AI and Multi-Agent Systems. EXTRAAMAS 2023. Lecture Notes in Computer Science, vol. 14127. Springer, Cham. https://doi.org/10.1007/978-3-031-40878-6_2

  • DOI: https://doi.org/10.1007/978-3-031-40878-6_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40877-9

  • Online ISBN: 978-3-031-40878-6
