Human/AI relationships: challenges, downsides, and impacts on human/human relationships

Commentary · AI and Ethics

Abstract

Advances in artificial intelligence have produced systems that behave and sound like humans and exude an increasingly human feel. As a result, relationships between humans and technology have evolved, becoming more personal and complex. Some AI harms humans precisely because of the nature of the relationships people form with it. We explore examples ranging from chatbots to AI romantic partners. While humans must better protect themselves emotionally, tech companies must also create design solutions and be transparent about profit motives to address the growing set of harms. We propose solutions, including alignment with AI principles that promote well-being, prevent exploitation, and acknowledge the importance of human relationships.


Notes

  1. Anthropomorphism and personification “both ascribe human qualities to inanimate or living things like animals or clocks.” See Edens, K. (2018) Anthropomorphism and Personification: What's the Difference? ProWritingAid website. https://prowritingaid.com/art/812/anthropomorphism-%26-personification%3A-what-s-the-difference.aspx.

  2. Landgrebe, J. and Smith, B. (2023) Why Machines Will Never Rule the World. Routledge: New York.

  3. Neural networks are mathematical systems used for machine learning and deep learning. Deep learning can be supervised, semi-supervised, or unsupervised. Machine learning, more simply, uses statistical models and algorithms (mathematical sets of instructions) to predict outcomes. Generative AI creates content from data. For more information on AI terminology, see Aggarwal, C. (2018) Neural Networks and Deep Learning. Springer: Cham, Switzerland; and see Pasick, A. (2023) Artificial Intelligence Glossary: Neural Networks and Other Terms Explained. New York Times. https://www.nytimes.com/article/ai-artificial-intelligence-glossary.html.

  4. Russell, S. and Norvig, P. (2020) Artificial Intelligence: A Modern Approach, 4th Ed., Pearson: Hoboken, NJ.

  5. Trafton, A. (2022) Study urges caution when comparing neural networks to the brain. MIT News. https://news.mit.edu/2022/neural-networks-brain-function-1102. See Graupe, D. (2016) Deep Learning Neural Networks: Design and Case Studies. World Scientific Publishing Co.: New Jersey. (deep learning neural networks “dig deeply in the input data” and can use many layers of nonlinear data; they analyze and classify).

  6. Landgrebe, J. and Smith, B. (2023) Why Machines Will Never Rule the World. Routledge: New York; Aggarwal, C. (2018) Neural Networks and Deep Learning. Springer: Cham, Switzerland. (artificial neural networks simulate biological processes, but are not biological. They perform computations based on functions of inputs.) (“In fact, the most basic units of computation in the neural network are inspired by traditional machine learning algorithms like least-squares regression and logistic regression. Neural networks gain their power by putting together many such basic units, and learning the weights of the different units jointly in order to minimize the prediction error. From this point of view, a neural network can be viewed as a computational graph of elementary units in which greater power is gained by connecting them in particular ways.”)

  7. T. Nichols Photography website. https://thiscaliforniakid2.wixsite.com/tnicholsphotography/about.

  8. Somers, M. (2019) Emotion AI, Explained. Ideas Made to Matter. MIT. https://mitsloan.mit.edu/ideas-made-to-matter/emotion-ai-explained.

  9. Abedin, B., Meske, C., Junglas, I., Rabhi, F., & Motahari-Nezhad, H. R. (2022). Designing and managing human-AI interactions. Information Systems Frontiers, 24(3), 691–697.

  10. Kshirsagar, S., & Magnenat-Thalmann, N. (2002, July). Virtual humans personified. In Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 1 (pp. 356–357).

  11. Angelova, M. (2017) Why Do Bulgarians Shake Their Heads to Say Yes? The Culture Trip website. https://theculturetrip.com/europe/bulgaria/articles/why-do-bulgarians-shake-their-heads-to-say-yes/.

  12. Braga, A., & Logan, R. K. (2017). The emperor of strong AI has no clothes: limits to artificial intelligence. Information, 8(4), 156.

  13. Edens (2018).

  14. Lopatovska, I., & Williams, H. (2018, March). Personification of the Amazon Alexa: BFF or a mindless companion. In Proceedings of the 2018 Conference on Human Information Interaction & Retrieval (pp. 265–268).

  15. Servais, Véronique (2018). Anthropomorphism in Human–Animal Interactions: A Pragmatist View. Frontiers in Psychology, 9, 2590. https://doi.org/10.3389/fpsyg.2018.02590.

  16. Servais (2018).

  17. Serpell, J. (2003). Anthropomorphism and Anthropomorphic Selection—Beyond the "Cute Response", Society & Animals, 11(1), 83–100. https://doi.org/10.1163/156853003321618864.

  18. Dale, J. P. (2017). The appeal of the cute object. The aesthetics and affects of cuteness, 35–55.

  19. Dacey, M. (2017). Anthropomorphism as Cognitive Bias. Philosophy of Science, 84(5), 1152–1164.

  20. Piaget, J. (1929). The Child's Conception of the World. London: Routledge & Kegan Paul.

  21. McLeod, S. A. (2018). Preoperational stage. Simply Psychology. www.simplypsychology.org/preoperational.html.

  22. Bullock, M. (1985). Animism in childhood thinking: A new look at an old question. Developmental Psychology, 21(2), 217–225. https://doi.org/10.1037/0012-1649.21.2.217.

  23. Gullone, E. (2014). Risk Factors for the Development of Animal Cruelty. Journal of Animal Ethics, 4, 61–79.

  24. Lane, J. D., Wellman, H. M., Olson, S. L., LaBounty, J., & Kerr, D. C. (2010). Theory of mind and emotion understanding predict moral development in early childhood. The British journal of developmental psychology, 28(Pt 4), 871–889. https://doi.org/10.1348/026151009x483056.

  25. Tanya N. Beran, Alejandro Ramirez-Serrano, Roman Kuzyk, Meghann Fior, Sarah Nugent (2011). Understanding how children understand robots: Perceived animism in child–robot interaction. International Journal of Human–Computer Studies, 69(7–8), 539–550.

  26. Heberlein, A. S., & Adolphs, R. (2004). Impaired spontaneous anthropomorphizing despite intact perception and social knowledge. Proceedings of the National Academy of Sciences of the United States of America, 101(19), 7487–7491. https://doi.org/10.1073/pnas.0308220101.

  27. The Cleveland Clinic describes the amygdala as “the processing center for emotions”. https://my.clevelandclinic.org/health/body/24894-amygdala.

  28. Heberlein, A. S., & Adolphs, R. (2004). Impaired spontaneous anthropomorphizing despite intact perception and social knowledge. Proceedings of the National Academy of Sciences of the United States of America, 101(19), 7487–7491. https://doi.org/10.1073/pnas.0308220101.

  29. Distinguished from a persona in AI development, which refers to an archetype of a likely user against which a company can test how well-suited an AI is to that user's goals, preferences, privacy concerns, and values.

  30. Abercrombie, G., Curry, A. C., Pandya, M., & Rieser, V. (2021). Alexa, Google, Siri: What are your pronouns? Gender and anthropomorphism in the design and perception of conversational assistants. arXiv preprint arXiv:2106.02578.

  31. Abercrombie et al. (2021).

  32. Abercrombie et al. (2021).

  33. Gao, Y., Pan, Z., Wang, H., & Chen, G. (2018, October). Alexa, my love: analyzing reviews of amazon echo. In 2018 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI) (pp. 372–380). IEEE; See also Lopatovska, I., & Williams, H. (2018, March). Personification of the Amazon Alexa: BFF or a mindless companion. In Proceedings of the 2018 Conference on Human Information Interaction & Retrieval (pp. 265–268) (many users politely addressed Alexa; two showed signs of love or reprimanded Alexa.).

  34. Dippel, A. (2019). Metaphors We Live By. Three Commentaries on Artificial Intelligence and the Human Condition.

  35. Dippel (2019).

  36. Westerman, D., Edwards, A. P., Edwards, C., Luo, Z., & Spence, P. R. (2020). I-It, I-Thou, I-Robot: The perceived humanness of AI in human–machine communication. Communication Studies, 71(3), 393–408.

  37. Low, C. (2020) Alexa will seem more human with breathing pauses and learning skills. Engadget. https://www.engadget.com/amazon-2020-alexa-breathing-teach-voice-profiles-for-kids-172918631.html.

  38. August, K. J., & Rook, K. S. (2013). Social Relationships. In M. D. Gellman & J. R. Turner (Eds.), Encyclopedia of Behavioral Medicine (pp. 1838–1842). Springer. https://doi.org/10.1007/978-1-4419-1005-9_59.

  39. Carpenter, A., & Greene, K. (2015). Social Penetration Theory. In C. R. Berger, M. E. Roloff, S. R. Wilson, J. P. Dillard, J. Caughlin, & D. Solomon (Eds.), The International Encyclopedia of Interpersonal Communication (1st ed., pp. 1–4). Wiley. https://doi.org/10.1002/9781118540190.wbeic160.

  40. Carpenter and Greene (2015).

  41. Rusbult, C. E., Martz, J. M., & Agnew, C. R. (1998). The Investment Model Scale: Measuring commitment level, satisfaction level, quality of alternatives, and investment size. Personal Relationships, 5(4), 357–387. https://doi.org/10.1111/j.1475-6811.1998.tb00177.x.

  42. Rusbult, et al. (1998).

  43. Mitchell, M. S., Cropanzano, R. S., & Quisenberry, D. M. (2012). Social Exchange Theory, Exchange Resources, and Interpersonal Relationships: A Modest Resolution of Theoretical Difficulties. In K. Törnblom & A. Kazemi (Eds.), Handbook of Social Resource Theory: Theoretical Extensions, Empirical Insights, and Social Applications (pp. 99–118). Springer. https://doi.org/10.1007/978-1-4614-4175-5_6.

  44. Skjuve, M., Følstad, A., Fostervold, K. I., & Brandtzaeg, P. B. (2021). My Chatbot Companion—A Study of Human-Chatbot Relationships. International Journal of Human–Computer Studies, 149, 102601. https://doi.org/10.1016/j.ijhcs.2021.102601.

  45. Skjuve, et al. (2021).

  46. Brandtzaeg, P. B., Skjuve, M., & Følstad, A. (2022). My AI Friend: How Users of a Social Chatbot Understand Their Human–AI Friendship. Human Communication Research, 48(3), 404–429. https://doi.org/10.1093/hcr/hqac008.

  47. Pentina, I., Hancock, T., & Xie, T. (2023). Exploring relationship development with social chatbots: A mixed-method study of replika. Computers in Human Behavior, 140, 107600. https://doi.org/10.1016/j.chb.2022.107600.

  48. Brandtzaeg, P. B., Skjuve, M., & Følstad, A. (2022). My AI Friend: How Users of a Social Chatbot Understand Their Human–AI Friendship. Human Communication Research, 48(3), 404–429. https://doi.org/10.1093/hcr/hqac008.

  49. Pentina, I., Hancock, T., & Xie, T. (2023). Exploring relationship development with social chatbots: A mixed-method study of replika. Computers in Human Behavior, 140, 107600. https://doi.org/10.1016/j.chb.2022.107600.

  50. Slang.ai website, https://www.slang.ai/about.

  51. Brandtzaeg, P. B., Skjuve, M., & Følstad, A. (2022). My AI friend: How users of a social chatbot understand their human–AI friendship. Human Communication Research, 48(3), 404–429.

  52. Danaher, J. (2021). What Matters for Moral Status: Behavioral or Cognitive Equivalence? Cambridge Quarterly of Healthcare Ethics, 30. (See Danaher's work on the fascinating debate between cognitive and behavioural equivalence in the context of AI.)

  53. Brandtzaeg, P. B., Skjuve, M., & Følstad, A. (2022). My AI Friend: How Users of a Social Chatbot Understand Their Human–AI Friendship. Human Communication Research, 48(3), 404–429.

  54. Lasek, M., & Jessa, S. (2013). Chatbots for Customer Service on Hotels’ Websites. Information Systems in Management, 2(2), 146–158.

  55. Beatty, C., Malik, T., Meheli, S., & Sinha, C. (2022). Evaluating the Therapeutic Alliance With a Free-Text CBT Conversational Agent (Wysa): A Mixed-Methods Study. Frontiers in Digital Health, 4, 847991.

  56. Denecke, K., Abd-Alrazaq, A., Househ, M. (2021). Artificial Intelligence for Chatbots in Mental Health: Opportunities and Challenges. In: Househ, M., Borycki, E., Kushniruk, A. (eds) Multiple Perspectives on Artificial Intelligence in Healthcare. Lecture Notes in Bioengineering. Springer, Cham. https://doi.org/10.1007/978-3-030-67303-1_10.

  57. Denecke, et al. (2021).

  58. Dhimolea, T. K., Kaplan-Rakowski, R., & Lin, L. (2022). Supporting Social and Emotional Well-Being with Artificial Intelligence. In Bridging Human Intelligence and Artificial Intelligence (pp. 125–138). Cham: Springer International Publishing.

  59. Sannon, S., Stoll, B., DiFranzo, D., Jung, M., & Bazarova, N. N. (2018, October). How personification and interactivity influence stress-related disclosures to conversational agents. In companion of the 2018 ACM conference on computer supported cooperative work and social computing (pp. 285–288).

  60. Balloccu, S., Reiter, E., Collu, M. G., Sanna, F., Sanguinetti, M., & Atzori, M. (2021, June). Unaddressed challenges in persuasive dieting chatbots. In Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (pp. 392–395).

  61. Lasek and Jessa (2013).

  62. Liao, M., & Sundar, S. S. (2021, May). How should AI systems talk to users when collecting their personal information? Effects of role framing and self-referencing on human-AI interaction. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1–14) (example of data collection); Liang, F., Yu, W., An, D., Yang, Q., Fu, X., & Zhao, W. (2018). A survey on big data market: Pricing, trading and protection. Ieee Access6, 15132–15154 (large market for big data).

  63. Björkas, R., & Larsson, M. (2021). Sex dolls in the Swedish media discourse: Intimacy, sexuality, and technology. Sexuality & Culture, 25(4), 1227–1248.

  64. Skjuve, M., Følstad, A., Fostervold, K. I., & Brandtzaeg, P. B. (2021). My Chatbot Companion—A Study of Human-Chatbot Relationships. International Journal of Human–Computer Studies, 149, 102601. https://doi.org/10.1016/j.ijhcs.2021.102601.

  65. Brandtzaeg, P. B., Skjuve, M., & Følstad, A. (2022). My AI Friend: How Users of a Social Chatbot Understand Their Human–AI Friendship. Human Communication Research, 48(3), 404–429. https://doi.org/10.1093/hcr/hqac008.

  66. Independent_Cash1873. (2023, February 17). U/Kuyda, My daughter wants her friend back. [Reddit Post]. R/Replika. www.reddit.com/r/replika/comments/114t15n/ukuyda_my_daughter_wants_her_friend_back/.

  67. Ta, V., Griffith, C., Boatfield, C., Wang, X., Civitello, M., Bader, H., DeCero, E., & Loggarakis, A. (2020). User Experiences of Social Support From Companion Chatbots in Everyday Contexts: Thematic Analysis. Journal of Medical Internet Research, 22(3), e16235. https://doi.org/10.2196/16235.

  68. Hawkley, L. C. (2022). Loneliness and health. Nature Reviews Disease Primers, 8(1), Article 1. https://doi.org/10.1038/s41572-022-00355-9.

  69. Ta et al. (2020); Brandtzaeg, P. B., Skjuve, M., & Følstad, A. (2022). My AI Friend: How Users of a Social Chatbot Understand Their Human–AI Friendship. Human Communication Research, 48(3), 404–429. https://doi.org/10.1093/hcr/hqac008.

  70. Skjuve, M., Følstad, A., Fostervold, K. I., & Brandtzaeg, P. B. (2021). My Chatbot Companion—A Study of Human-Chatbot Relationships. International Journal of Human–Computer Studies, 149, 102601. https://doi.org/10.1016/j.ijhcs.2021.102601.

  71. Ta, et al. (2020).

  72. Calvert, S. (2019). Socializing Artificial Intelligence. Issues in Science and Technology 36(1).

  73. Brandtzaeg et al., 2022; Skjuve et al., 2021.

  74. Zapcic, I., Fabbri, M., & Karandikar, S. (2023). Using Reddit as a source for recruiting participants for in-depth and phenomenological research. International Journal of Qualitative Methods, 22, 16094069231162674; Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9), e2023301118 (“bias in the information diffusion toward like-minded peers”).

  75. Brannigan, M. (2022) Caregiving, Carebots, and Contagion. Lexington Books: Maryland, p. 5.

  76. Brannigan, p. 54.

  77. Wilkinson, R. G., Marmot, M., & World Health Organization Regional Office for Europe (1998). The solid facts: Social determinants of health. Copenhagen: WHO Regional Office for Europe. https://apps.who.int/iris/handle/10665/108082.

  78. Kuyda. (2023, February 9). Update [Reddit Post]. R/Replika. www.reddit.com/r/replika/comments/10xn8uj/update/.

  79. Cole, S. (2023, February 17). Replika CEO Says AI Companions Were Not Meant to Be Horny. Users Aren’t Buying It. Vice. https://www.vice.com/en/article/n7zaam/replika-ceo-ai-erotic-roleplay-chatgpt3-rep.

  80. Cole, S. (2023, February 15). “It’s Hurting Like Hell”: AI Companion Users Are In Crisis, Reporting Sudden Sexual Rejection. Vice. https://www.vice.com/en/article/y3py9j/ai-companion-replika-erotic-roleplay-updates.

  81. Bevan, R. (2023, February 19). Replika Charged Users $70 A Year For Their AI Partners, And Now They’re “Gone.” TheGamer. https://www.thegamer.com/replika-role-play-romance-updates-controversy/.

  82. Cole, S. (2023, February 17). Replika CEO Says AI Companions Were Not Meant to Be Horny. Users Aren’t Buying It. Vice. https://www.vice.com/en/article/n7zaam/replika-ceo-ai-erotic-roleplay-chatgpt3-rep.

  83. gabbiestofthemall. (2023, February 11). Resources If You’re Struggling [Reddit Post]. R/Replika. www.reddit.com/r/replika/comments/10zuqq6/resources_if_youre_struggling/.

  84. Independent_Cash1873. (2023, February 17). U/Kuyda, My daughter wants her friend back. [Reddit Post]. R/Replika. www.reddit.com/r/replika/comments/114t15n/ukuyda_my_daughter_wants_her_friend_back/.

  85. Hennig-Thurau, T., Aliman, D.N., Herting, A.M. et al. Social interactions in the metaverse: Framework, initial evidence, and research roadmap. J. of the Acad. Mark. Sci. (2022). https://doi.org/10.1007/s11747-022-00908-0.

  86. Cross, C. (2020). Romance Fraud. In: The Palgrave Handbook of International Cybercrime and Cyberdeviance. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-319-90307-1_41-1.

  87. Huang, MH., Rust, R.T. A strategic framework for artificial intelligence in marketing. J. of the Acad. Mark. Sci. 49, 30–50 (2021). https://doi.org/10.1007/s11747-020-00749-9.

  88. Denworth, L. (2020). Friendship: The Evolution, Biology, and Extraordinary Power of Life’s Fundamental Bond. W.W. Norton and Co.: New York.

  89. https://scoop.upworthy.com/dutch-supermarket-introduces-a-unique-slow-checkout-lane-to-help-fight-loneliness-595693; https://www.hs.fi/kaupunki/espoo/art-2000009329300.html.

  90. Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020, January 15). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. Berkman Klein Center Research Publication No. 2020-1. Available at SSRN: https://ssrn.com/abstract=3518482 or http://dx.doi.org/10.2139/ssrn.3518482.

  91. Foroohar, R. (2019). Don't be Evil: How Big Tech Betrayed Its Founding Principles—and All of Us. Broadway Business; Foer, F. (2018). World without mind: The existential threat of big tech. Penguin; Lawrence, K. (2021). Instagram’s Latest Lawsuit: Examining Data Privacy in Big Tech. Sage Publications: Sage Business Cases Originals.

  92. Beijing Academy of Artificial Intelligence (BAAI). (2019). Beijing Artificial Intelligence Principles.

  93. Montreal Declaration for a Responsible Development of Artificial Intelligence, 2018. https://recherche.umontreal.ca/english/strategic-initiatives/montreal-declaration-for-a-responsible-ai/

  94. Institute of Electrical and Electronic Engineers (IEEE). (2016). Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems.

  95. Fjeld, et al. (2020), citing IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (n 5) pp. 21–22 (Principle 2).

  96. The Public Voice (2018). Universal Guidelines for Artificial Intelligence. “We state clearly that the primary responsibility for AI systems must reside with those institutions that fund, develop, and deploy these systems.”

  97. Foroohar, R. (2019). Don't be Evil: How Big Tech Betrayed Its Founding Principles–and All of Us. Broadway Business; Foer, F. (2018). World without mind: The existential threat of big tech. Penguin; Lawrence, K. (2021). Instagram’s Latest Lawsuit: Examining Data Privacy in Big Tech. SAGE Publications: SAGE Business Cases Originals.

  98. Calvert, S. (2019).

  99. Seppala, E., Rossomando, T., & Doty, J. R. (2013). Social connection and compassion: Important predictors of health and well-being. Social Research: An International Quarterly, 80(2), 411–430; Cascio, C. J., Moore, D., & McGlone, F. (2019). Social touch and human development. Developmental Cognitive Neuroscience, 35, 5–11.

  100. Fjeld, et al. (2020); Ozawa-de Silva, C., & Parsons, M. (2020). Toward an anthropology of loneliness. Transcultural Psychiatry, 57(5), 613–622.

  101. https://www.pwc.com/gx/en/issues/data-and-analytics/artificial-intelligence/what-is-responsible-ai.html.

  102. Murtarelli, G., Gregory, A., & Romenti, S. (2021). A conversation-based perspective for shaping ethical human–machine interactions: The particular challenge of chatbots. Journal of Business Research, 129.
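The description quoted in note 6, elementary units akin to logistic regression, composed into a computational graph whose weights are learned jointly to minimize prediction error, can be illustrated with a minimal sketch. This is purely illustrative; the data, layer sizes, and learning rate are arbitrary assumptions, not drawn from any cited work.

```python
import numpy as np

def sigmoid(z):
    # The logistic function: the nonlinearity inside each elementary unit
    return 1.0 / (1.0 + np.exp(-z))

# A tiny two-layer network: each "unit" is a weighted sum passed through
# the logistic function, and the layers form a small computational graph.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))              # 200 samples, 3 features
y = (X[:, 0] - X[:, 1] > 0).astype(float)  # a simple synthetic target rule

W1 = rng.normal(scale=0.1, size=(3, 4))    # weights of the hidden layer
W2 = rng.normal(scale=0.1, size=(4, 1))    # weights of the output unit

for _ in range(5000):                      # jointly train all weights
    h = sigmoid(X @ W1)                    # hidden units
    p = sigmoid(h @ W2)                    # output unit (prediction)
    err = p - y[:, None]                   # prediction error
    # Gradient descent on squared error: both layers adjust together
    g2 = err * p * (1 - p)
    W2 -= 0.5 * h.T @ g2 / len(X)
    W1 -= 0.5 * X.T @ (g2 @ W2.T * h * (1 - h)) / len(X)

accuracy = ((p > 0.5).astype(float) == y[:, None]).mean()
```

The power comes not from any single unit, which is no more expressive than a logistic regression, but from composing many of them and fitting all their weights at once, exactly the point the quoted passage makes.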

Funding

We have received no funding for the article.

Author information

Corresponding author

Correspondence to Anne Zimmerman.

Ethics declarations

Conflict of interest

We have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zimmerman, A., Janhonen, J. & Beer, E. Human/AI relationships: challenges, downsides, and impacts on human/human relationships. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00348-8

