
Social Agency for Artifacts: Chatbots and the Ethics of Artificial Intelligence

  • Original Paper
  • Published:
Digital Society

Abstract

Ethically significant consequences of artificially intelligent artifacts will stem from their effects on existing social relations. Artifacts will serve in a variety of socially important roles—as personal companions, in the service of elderly and infirm people, and in commercial, educational, and other socially sensitive contexts. The inevitable disruptions that these technologies will cause to social norms, institutions, and communities warrant careful consideration. As we begin to assess these effects, reflection on degrees and kinds of social agency will be required to make properly informed decisions concerning the deployment of artificially intelligent artifacts in important settings. The social agency of these systems is unlike human social agency, and this paper provides a methodological framework that is better suited for inquiry into artificial social agents than conventional philosophical treatments of the concept of agency. Separate aspects and dimensions of agency can be studied without assuming that agency must always look like adult human agency. This revised approach to the agency of artifacts is conducive to progress in the topics studied by AI ethics.


Data Availability

We do not analyze or generate any datasets, because our work proceeds within a theoretical and mathematical approach.

Notes

  1. Sam Altman, head of OpenAI, recently tweeted “i am a stochastic parrot, and so r u”. https://twitter.com/sama/status/1599471830255177728?lang=en (Dec 22, 2022).

  2. There is increased interest in Confucian approaches to these questions; see, for example, Zhu (2020), which engages with the effects of technology on social roles as traditionally conceived in Chinese thought.

  3. https://workspace.google.com/blog/product-announcements/duet-ai-in-workspace-now-available (last accessed August 29, 2023).

  4. A folk psychological conception of agency detection along the lines Dennett describes in The Intentional Stance (Dennett, 1987) will be of little assistance in cases where we find ourselves devoting energy and attention to determining the nature of the beings with whom we are talking and interacting. The challenge here is that unaided common sense is not equipped to detect agent behavior in suitably sophisticated AI.

  5. Those whose moral framework involves an individualistic focus on personal utility might regard social harms as irrelevant or secondary. However, we will assume for the sake of this paper that a radical form of subjectivism with respect to moral matters is either self-undermining or that there are indirect individualist reasons to care about social goods and harms. We are grateful to an anonymous referee for forcing us to be clear on this point.

  6. We are grateful to an anonymous referee for pressing us on this issue and for encouraging us to discuss AI systems that have forms of non-linguistic social agency.

  7. See Gonzalez-Gonzalez et al. (2021) for a systematic review of the scientific literature on sexbots. See also the 2022 special issue of the Journal of Future Robot Life on robot sex, edited by Simon Dube and David Levy. Other notable discussions include David Levy’s (2007) book Love and Sex with Robots.

  8. Some of these questions are touched upon in Ruiping and Cherry (eds., 2021). Adshade (2017) discusses the economic aspects of social change involving robot sex.

  9. There is a large and growing literature debating the ethics of various kinds of paraphilia and pedophilia as expressed with robots, see for example Jecker (2021), Karaian (2022), Marečková et al. (2022), and Sparrow (2021).

  10. Under these circumstances, our desires to engage in degrading, violent, or simply obnoxious sexual encounters with others, or the desire for fully compliant or idealized partners, could be acted upon without those desires being brought into question or challenged by the vulnerability and needs of another human person. Of course, for some, the absence of a real human person would make it impossible to genuinely satisfy some obnoxious sexual desires, given the interpersonal nature of those desires. Sadism, for example, involves the subordination of another human person. It is hard to imagine a sadist enjoying torturing his sex robot for very long, no matter how realistic the robot’s expressions of pain might be, given the absence of coercion or subordination.

  11. Traditionally, adult-level human linguistic competence provided a key benchmark for twentieth-century philosophers as they considered the questions of intelligence, agency, and moral standing. Chatbots that run on state-of-the-art LLMs now have the capacity to pass for human interlocutors under certain circumstances, and thus—in the spirit of the Turing test—we are forced to reflect on their level of agency and perhaps even on their moral status. In this paper, we will focus on the question of their agency.

  12. We agree with one of our referees who noted that AI researchers do not simply assume that AI has agency; they also presume that the goal of AI is the creation of agents of a certain kind.

  13. According to Wooldridge and Jennings, it was not until the 1980s that the concept of agency received much attention from technologists. They note that “the problem [was] that although the term [was] widely used by many people working closely in related areas, it defied attempts to produce a single universally accepted definition” (Wooldridge & Jennings, 1995, 4). Of course, science fiction has a long history of reflection on the idea of artifacts as agents.

  14. For an informative analysis on how people perceive dogs versus robots as companions.

  15. For a detailed overview of the philosophical literature, see Ferrero’s The Routledge Handbook of Philosophy of Agency (2022). For another recent overview of action, see Paul’s Philosophy of Action: A Contemporary Introduction (2021).

  16. Silver et al. (2022) also recognize that social agency is best modeled multidimensionally. Although their model primarily tracks the level of cooperation between agents, they note, “there are many interactions dimensions critically under researched in relation to Social Agency, and whilst this [their rendition] continuum is centered around the degree of cooperation in an interaction, as Social Agency grows as a field, it is hoped that more key elements will be incorporated into this model” (442).

  17. For an introduction to the issue, see Nyholm (2023), especially chapter 6.

  18. For an overview of the logic of threshold arguments in the study of cognition, see Calvo and Symons (2014).

  19. Of course, those who hold the threshold account might retreat to some kind of instrumentalist conception of artifact agency. We can certainly act as though an artifact is an agent for instrumental reasons in the spirit of Dennett’s intentional stance (see Symons (2001) and Dennett (1987)), but given this version of the threshold view, we cannot ascribe agency to artifacts like chatbots independently of an observer’s ascription of agency. We will return to this option below.

  20. One issue with this is that, as Silver et al. (2022, 449) highlight, several psychological studies have demonstrated that joint action or joint agency is difficult to justify between robots and humans: humans tend not to think of, or report, a sense of joint agency when collaborating with robots. See also Nyholm (2023), who devotes nearly an entire chapter of his book (chapter 3) to the various moral issues raised by, and approaches to, autonomous vehicles; see chapter 4 of the same book for further debates on autonomous cars.

  21. Floridi and Sanders also make a point of underscoring the difficulty of holding humans responsible for computing systems (AI, regular software, and so on), or for features or actions unforeseeable by humans (2004, 371–372), like our example of the ABS system in cars.

  22. Also, see chapter 2 of Nyholm (2020).

  23. We thank an anonymous referee for encouraging us to distinguish between moral agency and agency per se.

  24. Consider what van Hateren says concerning the conditions required for minimal agency: “such conditions should indicate which species have agency and which behaviors are acts [emphasis ours] rather than something else (… such as sneezing, shivering [automatic reflexes])”.

  25. Debates around group agency are also worth noting here. Groups per se lack any representational content or reflective thought but do seem to take actions that, at least, seem irreducible to those of individual members (parliament voted to do X). For an informative and contrasting view on group agency, see Lewis-Martin (2022). It is worth noting that some philosophers have characterized AI agency as similar to group agency—List (2021) argues that AIs are agents by drawing parallels with group agency. Group agency is a contentious topic, and nothing in our current argument rests on accepting it. We mention it here to note the possibility of agency without intentionality, or at least without intentionality in the conventional sense.

  26. For example, when a user engages a therapy agent in conversation, even if the user knows that the interlocutor is an AI, the chatbot’s outputs are perceived by the user as conversational actions. Take the example from Yang (2020); the user says to a chatbot, “Hey, I know you are not real, but I just wanted to send these pictures of my family out at Disneyland having a great time. I’m doing better now. Thank you” (35). The user seems to take the chatbot as an agent worthy of respect, one with whom they should be polite and share intimate family details. Another example is the language used around ChatGPT or Midjourney. It is common to see headlines or conversations with language like “what does ChatGPT think X is” or “this is what AI thinks people from Y country look like.” A person in Korea legally married a virtual avatar (Jozuka et al., 2018). Robotic animals, like Paro, have been around for a while, as have, more relevantly for our case, ChatGPT-equipped pet bots like Loona. One final example demonstrates the social inclusion of AI systems like chatbots: the prevalence of friendbots like Replika. During the pandemic, reports of people using chatbots like Replika for therapeutic reasons increased (Weber-Guskar, 2022). As mentioned, there is a growing acceptance of using chatbots or LLM-equipped robots as sexbots.

  27. One of our referees noted that it might be helpful to think of the social by reference to Floridi’s concept of levels of abstraction (LoA) (Floridi, 2006; Floridi & Sanders, 2004). By using abstraction, one can further clarify a particular phenomenon or artifact of inquiry by focusing on one set of properties or details over another; usually, one set is more abstract than the other. This permits researchers to focus on a particular aspect of the inquiry for different purposes, or to be more explicit about the goals of particular explanations. Floridi illustrates the point with his wine example: different LoAs may be appropriate for different purposes. To evaluate a wine, the “tasting LoA,” consisting of the observables Floridi lists for that purpose, would be relevant. For the purpose of ordering wine, a “purchasing LoA” (containing observables like maker, region, vintage, supplier, quantity, and price) would be appropriate, but here the “tasting LoA” would be irrelevant. In our case, we can focus on the social LoA: the level of conversations between two entities and the socio-linguistic world.

  28. Here, the conditions governing the individuality (rather than the identification) of the artifact come into play. See Symons (2010) for a discussion of the individuality of artifacts and organisms.

  29. Like Barandiaran et al.’s conditions for minimal agency, Floridi and Sanders (2004) also provide base conditions for agency: (a) interactivity, the system responds to environmental stimuli; (b) autonomy, it governs its behavior independently of environmental stimuli; and (c) adaptability, it modifies its past system states and transition rules according to the environment, taking into account success and failure at its tasks (357–358, 363–364). These conditions are similar to Barandiaran et al.’s: autonomy is similar to individuality, interactivity and adaptability have parallels with interactional asymmetry, and adaptability is akin to normativity (success or failure at achieving normative goals). Of course, these conditions are not exact replicas. Also, like Floridi and Sanders, we highlight the importance of LoA for chatbot agency: chatbots are best understood as agents when viewed at the social or linguistic LoA. Although Floridi and Sanders differentiate between agency and moral agency, their ultimate goal is to establish moral agency for AI systems by first showing their agential status.

  30. Shanahan (2023) underscores this point for LLMs, as he says, “a bare bone LLM [for instance] doesn’t really know anything because all it does, at a fundamental level, is a sequence prediction” (2023, 5). So, although it is tempting to ascribe intentionality, beliefs, and desires to these systems, it is a mistake. For Dennett, the intentional stance was understood to be an adaptive trait shaped by specific environmental and evolutionary pressures. In this sense, we are “right” to ascribe beliefs and intentions to aspects of the world that evolution shaped us to detect. See AUTHOR 20?? for a discussion of the relationship between the appropriateness of taking the intentional stance and Dennett’s skepticism with respect to realism about representations and intentions.

  31. Not all chatbots are deliberately deceptive in this respect. In 2022, the Sparrow AI from DeepMind was explicitly built to avoid this kind of deceptive action in relation to users. Their working paper provides a detailed description of the heuristics that they employed to guide their chatbot (The Sparrow Team, 2022).

  32. Nyholm shares our criticism of demanding accounts of agency and seems to endorse some version of the view we defend here (2018, 2023).

  33. Similarly, van Lingen et al. (2023) also affirm threshold approaches and slip between moral agency and agency simpliciter. For example, Blinkley and Pilkington say that to be a minimal agent is “to simply [perform an] intentional action” (2023, 25), and van Lingen et al. (2023) differentiate between strong and weak AI. For van Lingen et al., chatbots are weak AI: strong AI can have phenomenal experiences, but weak AI cannot; therefore, weak AI is not a moral agent (22). Furthermore, weak AI, for van Lingen et al., cannot act without human actors; thus, it cannot be an agent (23). Some, like Huber, take a different approach, suggesting that the pragmatic benefit of AI is more important than whether AI systems are actual agents. Lastly, Holohan et al. (2023) suggest that agency in therapeutic contexts emerges from the relationship between chatbot and patient (15).

  34. Glock provides an overview of the reasons philosophers deny that animals act. The primary basis is the claim that animals do not act in virtue of reasons (Glock, 2019, 667).


Acknowledgements

We gratefully acknowledge the excellent feedback from two anonymous referees. Conversations with the AI ethics graduate seminar at the University of Kansas in 2021 formed the basis for this project in addition to helpful discussions with Ramon Alvarado, Oluwaseun Sanwoolu, Oluwakorede Ajibona, Francisco Pipa, Luciano Floridi, Jack Horner, Amir Modarresi, John Sullins, and Caroline Arruda.

Funding

Ripple, U.S. Department of Defense, H98230-23-C-0277, John Symons.

Author information


Contributions

The authors contributed equally to this study.

Corresponding author

Correspondence to John Symons.

Ethics declarations

Competing Interests

The authors declare no competing interests.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Symons, J., Abumusab, S. Social Agency for Artifacts: Chatbots and the Ethics of Artificial Intelligence. DISO 3, 2 (2024). https://doi.org/10.1007/s44206-023-00086-8
