
AI in the Human Loop: The Impact of Differences in Digital Assistant Roles on the Personal Values of Users


Part of the Lecture Notes in Computer Science book series (LNCS,volume 14144)


As AI systems become increasingly prevalent in our daily lives and work, it is essential to consider their social role and how they interact with us. While functionality, and increasingly explainability and trustworthiness, are often the primary focus in designing AI systems, little consideration is given to their social role and its effects on human-AI interaction. In this paper, we advocate for paying attention to social roles in AI design. We focus on an AI healthcare application and present three possible social roles for the AI system within it, exploring the relationship between the AI system and the user and the implications for designers and practitioners. Our findings emphasise the need to think beyond functionality and highlight the importance of the social role of AI systems in shaping meaningful human-AI interactions.


Keywords

  • Human-AI collaboration
  • Human-AI relationship
  • Social AI
  • Value Sensitive Design



Notes

  1. The interface was designed based on the research of the Positive Health Institute.

  2. ATLAS.ti Scientific Software Development GmbH [ATLAS.ti 22 Windows] (2022).




Author information



Corresponding author

Correspondence to Shakila Shayan.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Shayan, S. et al. (2023). AI in the Human Loop: The Impact of Differences in Digital Assistant Roles on the Personal Values of Users. In: Abdelnour Nocera, J., Kristín Lárusdóttir, M., Petrie, H., Piccinno, A., Winckler, M. (eds) Human-Computer Interaction – INTERACT 2023. INTERACT 2023. Lecture Notes in Computer Science, vol 14144. Springer, Cham.


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-42285-0

  • Online ISBN: 978-3-031-42286-7

  • eBook Packages: Computer Science, Computer Science (R0)