Effects of Agents’ Transparency on Teamwork

  • Silvia Tulli
  • Filipa Correia
  • Samuel Mascarenhas
  • Samuel Gomes
  • Francisco S. Melo
  • Ana Paiva
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11763)

Abstract

Transparency in the field of human-machine interaction and artificial intelligence has seen a growth of interest in the past few years. Nonetheless, there are still few experimental studies on how transparency affects teamwork, in particular in collaborative situations where the strategies of others, including agents, may seem obscure.

We explored this problem using a collaborative game scenario with a mixed human-agent team. We investigated the role of transparency in the agents’ decisions by having agents communicate the strategies they adopt in the game, making their decisions transparent to the other team members. The game embodies a social dilemma in which a human player can choose to contribute to the goal of the team (cooperate) or act selfishly in the interest of his or her individual goal (defect). We designed a between-subjects experimental study with different conditions, manipulating the transparency in a team. The results showed an interaction effect between the agents’ strategy and transparency on trust, group identification and human-likeness. Our results suggest that transparency has a positive effect on people’s perception of trust, group identification and human-likeness when the agents use a tit-for-tat or a more individualistic strategy. In fact, adding transparent behaviour to an unconditional cooperator negatively affects the measured dimensions.
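The social dilemma the abstract describes can be sketched as one round of a linear public goods game. The endowment, multiplier, group size, and strategy details below are illustrative assumptions, not the paper’s actual experimental parameters:

```python
# Minimal sketch of a linear public goods game round, with a tit-for-tat
# rule like the one the abstract mentions. All parameters are illustrative.

ENDOWMENT = 10      # tokens each player holds at the start of a round
MULTIPLIER = 1.6    # the public pot is multiplied by this before sharing
N_PLAYERS = 4

def payoff(contributions):
    """Each player's payoff: tokens kept plus an equal share of the pot."""
    pot = sum(contributions) * MULTIPLIER
    share = pot / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]

def tit_for_tat(partner_last):
    """Contribute fully only if the observed partner contributed last round."""
    return ENDOWMENT if partner_last > 0 else 0

# One round: players 0-2 cooperate fully, player 3 defects.
print(payoff([ENDOWMENT, ENDOWMENT, ENDOWMENT, 0]))
# The defector earns the most, which is what makes this a social dilemma:
# everyone would be better off if all contributed, yet each individual
# gains by withholding.
```

With these numbers the three cooperators each earn 12 tokens while the defector earns 22, illustrating why an agent’s strategy (unconditional cooperation, tit-for-tat, or individualistic defection) matters for the team outcomes the study measures.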

Keywords

Transparency · Autonomous agents · Multi-agent systems · Public goods game · Social dilemma


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Computer Science and Engineering, INESC-ID and Instituto Superior Técnico, Universidade de Lisboa, Porto Salvo, Portugal