
Positive and negative explanation effects in human–agent teams

  • Research article
  • AI and Ethics

Abstract

Improving agent capabilities and the increasing availability of computing platforms and internet connectivity allow more effective and diverse collaboration between human users and automated agents. To increase the viability and effectiveness of human–agent collaborative teams, there is a pressing need for research that enables such teams to maximally leverage the relative strengths of human and automated reasoners. We study virtual, ad hoc teams, each comprising a human and an agent, that collaborate over a few episodes, where each episode requires them to complete a set of tasks chosen from given task types. Team members are initially unaware of their partner’s capabilities, and the agent, acting as the task allocator, must adapt the allocation process to maximize team performance. The focus of the current paper is on analyzing how explanations of allocation decisions affect both user performance and the human workers’ outlook, including factors such as motivation and satisfaction. We investigate the effect of explanations provided by the agent allocator on the human teammate’s performance and on key factors reported by the human teammate in surveys. Survey factors include the effect of explanations on motivation, explanatory power, and understandability, as well as satisfaction with and trust/confidence in the teammate. We evaluated a set of hypotheses on these factors, related to positive, negative, and no-explanation scenarios, through experiments conducted with MTurk workers.
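To make the setup concrete, the sketch below illustrates, in minimal form, the kind of adaptive task allocation with positive or negative explanations that the abstract describes. It is not the authors' implementation: the class name, scoring rule, and explanation wording are illustrative assumptions. The agent maintains per-task-type success estimates for both teammates, assigns each task to whichever teammate currently looks stronger, and attaches a positive explanation (why the chosen teammate got the task), a negative explanation (why the other teammate did not), or none.

```python
# Minimal sketch (not the paper's implementation) of an adaptive allocator
# that explains its assignments. All names and the scoring rule are
# illustrative assumptions based on the setup described in the abstract.
class ExplainingAllocator:
    def __init__(self, task_types):
        # successes/attempts per (worker, task type); a 1/2 prior avoids division by zero
        self.stats = {w: {t: [1, 2] for t in task_types} for w in ("human", "agent")}

    def _rate(self, worker, task_type):
        s, n = self.stats[worker][task_type]
        return s / n

    def allocate(self, task_type, explain="positive"):
        """Assign the task to the teammate with the higher estimated success rate."""
        human_r, agent_r = self._rate("human", task_type), self._rate("agent", task_type)
        worker = "human" if human_r >= agent_r else "agent"
        if explain == "positive":
            reason = f"{worker} is assigned '{task_type}' because their past success rate is higher."
        elif explain == "negative":
            other = "agent" if worker == "human" else "human"
            reason = f"'{task_type}' is not assigned to {other} because their past success rate is lower."
        else:  # no-explanation condition
            reason = ""
        return worker, reason

    def record(self, worker, task_type, succeeded):
        """Update capability estimates after observing a task outcome."""
        s, n = self.stats[worker][task_type]
        self.stats[worker][task_type] = [s + int(succeeded), n + 1]


if __name__ == "__main__":
    alloc = ExplainingAllocator(["image tagging", "arithmetic", "text editing"])
    alloc.record("human", "image tagging", succeeded=True)
    worker, reason = alloc.allocate("image tagging", explain="positive")
    print(worker, "-", reason)
```

Under this simple scheme, the only difference between the positive, negative, and no-explanation conditions is the explanation string attached to an otherwise identical allocation decision, which mirrors the experimental manipulation studied in the paper.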



Author information

Corresponding author

Correspondence to Bryan Lavender.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Lavender, B., Abuhaimed, S. & Sen, S. Positive and negative explanation effects in human–agent teams. AI Ethics 4, 47–56 (2024). https://doi.org/10.1007/s43681-023-00396-0

