
The Impact of Gender and Personality in Human-AI Teaming: The Case of Collaborative Question Answering

  • Conference paper
Human-Computer Interaction – INTERACT 2023 (INTERACT 2023)

Abstract

This paper presents the results of an exploratory study investigating the impact of conversational agents (CAs), and specifically their agential characteristics, on collaborative decision-making processes. The study involved 29 participants, divided into 8 small teams, who played a question-and-answer trivia-style game with the support of a text-based CA characterized by two independent binary variables: personality (gentle and cooperative vs. blunt and uncooperative) and gender (female vs. male). A semi-structured group interview was conducted at the end of the experimental sessions to investigate the perceived utility of, and level of satisfaction with, the CAs. Our results show that user satisfaction is higher when users interact with a gentle and cooperative CA. Furthermore, female CAs are perceived as more useful and more satisfying to interact with than male CAs. We show that group performance improves through interaction with the CAs, and we confirm that a stereotype favoring the combination of a female gender and a gentle, cooperative personality exists with regard to perceived satisfaction, even though this does not translate into greater perceived utility. Our study extends the current debate about the possible correlation between CA characteristics and human acceptance, and suggests that future research investigate the role of gender bias and related biases in human-AI teaming.

F. Milella, C. Natali, and T. Scantamburlo contributed equally to this work.



Author information


Corresponding author

Correspondence to Frida Milella.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Milella, F., Natali, C., Scantamburlo, T., Campagner, A., Cabitza, F. (2023). The Impact of Gender and Personality in Human-AI Teaming: The Case of Collaborative Question Answering. In: Abdelnour Nocera, J., Kristín Lárusdóttir, M., Petrie, H., Piccinno, A., Winckler, M. (eds) Human-Computer Interaction – INTERACT 2023. INTERACT 2023. Lecture Notes in Computer Science, vol 14143. Springer, Cham. https://doi.org/10.1007/978-3-031-42283-6_19


  • DOI: https://doi.org/10.1007/978-3-031-42283-6_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-42282-9

  • Online ISBN: 978-3-031-42283-6

  • eBook Packages: Computer Science, Computer Science (R0)
