
The Moral Status of Social Robots: A Pragmatic Approach

  • Research Article
  • Published in Philosophy & Technology

Abstract

Debates about the moral status of social robots (SRs) currently face a second-order, or metatheoretical, impasse. On the one hand, moral individualists argue that the moral status of SRs depends on their possession of morally relevant properties. On the other hand, moral relationalists deny that we ought to attribute moral status on the basis of the properties that SRs instantiate, opting instead for other modes of reflection and critique. This paper develops and defends a pragmatic approach that aims to reconcile these two positions. The core of this proposal is that moral individualism and moral relationalism are best understood as distinct deliberative strategies for attributing moral status to SRs, and that both are worth preserving insofar as they answer to different kinds of practical problems that we face as moral agents.


Notes

  1. Following Kate Darling, I take a social robot to be “a physically embodied, autonomous agent that communicates and interacts with humans on a social level” (Darling 2016, 2). Whether this definition includes chatbots and other large language models is an open question (given that these technologies are technically embodied in hardware). The pragmatic approach to moral status developed in this paper could, in principle, apply to these cases as well, although I shall focus primarily on SRs given their centrality in recent debates.

  2. For a general discussion of the potential impact of robots within our lives, see Bostrom (2014), Darling (2016, 2021), Nørskov (2016). For a discussion of the economic impact of integrating robots into the workplace, see Ford (2015), Danaher (2017). Some writers have considered features of human–robot relations, especially sexual and romantic relations with robots (Danaher 2019; Frank and Nyholm 2017; Jecker 2021a; McArthur 2017), but also friendship (Jecker 2021b; Marti 2010) and care-giving (Sharkey and Sharkey 2010). Since the European Parliament’s Committee on Legal Affairs issued a 2017 report proposing the creation of the category of “electronic personhood,” there has been considerable discussion of the legal status of social robots. For an overview of this debate, see Parviainen and Coeckelbergh (2021).

  3. In addition to the parent–child relationship, other relationships that have been used to analogize the human–robot relation include the employee–employer relation and the god–creature relation.

  4. In this paper, I take the term moral status to be synonymous with ‘moral standing’, ‘moral considerability’ and ‘moral patienthood’.

  5. For related characterizations of the concept of moral status, see Warren (1997), DeGrazia (2008, 183), DiSilvestro (2010, 12), and Harman (2003, 174). Jaworska and Tannenbaum (2013) offer an overview of the literature on the grounds of moral status.

  6. James Rachels describes moral individualism as “a thesis about the justification of judgements concerning how individuals may be treated. The basic idea is that how an individual may be treated is to be determined, not by considering his group memberships, but by considering his own particular characteristics” (Rachels 2005, 173).

  7. Recent exceptions include Gordon (2021) and Gordon and Gunkel (2022). I discuss their view below in Section 3.

  8. Proponents of this view include Coeckelbergh (2010, 2014, 2018, 2022a), Gordon (2021, 2022a), and Gunkel (2011, 2014, 2018).

  9. Some writers take the issue to be whether social robots will ever possess status-conferring properties, whereas others focus on whether social robots are likely to possess those properties in the near future.

  10. In admitting that social properties are status conferring, these authors are not thereby committing themselves to a relationalist view. As I shall explain below, MR is not (necessarily) the idea that moral status is conferred by virtue of relationships. Rather it consists in both a negative (anti-individualist) dimension as well as a positive dimension, which suggests that moral status ascriptions should be arrived at through various forms of critical reflection.

  11. For a discussion of the likelihood that these technologies will be available in the (relatively) near future, see Bostrom (2014).

  12. For critical discussions of Floridi’s information ethics—especially as it bears on the questions of robot rights—see Brey (2008), Coeckelbergh (2010, 217), Gunkel (2014, 122–6) and Mosakas (2021, 436–8).

  13. See for example Andreotta (2021), Mosakas (2021), Müller (2021), and Véliz (2021). For skeptical arguments grounded in the adverse social implications of ascribing moral status to SRs see Turkle (2011) and Bryson (2009). Relatedly, others have argued that, in principle, SRs are incapable of being moral agents (Sparrow 2021).

  14. See also Mosakas (2021, 431).

  15. For a discussion, see Chalmers (1995).

  16. Andreotta does not find this line of argument convincing given its reliance on intuitions about cases for which we have no empirical support (Andreotta 2021, 27–8).

  17. The first, the “AI Consciousness Test,” is meant to serve as a sufficient, but not necessary, condition for determining consciousness (Schneider 2019, 50). It attempts to “challenge an AI with a series of increasingly demanding natural language interactions to see how readily it can grasp and use concepts based on the internal experiences we associate with consciousness” (51). For critical discussions of Schneider’s tests, see Andreotta (2021).

  18. Neely argues that it is possible for AI to have interests even if they lack phenomenal consciousness, and that this suffices for their having moral status (Neely 2014).

  19. Another example of inferential indeterminacy concerns the question of whether consciousness is conceptually separable from notions such as intelligence or rationality. Andreotta argues that these notions are independent of one another, such that it is possible to have an intelligent machine that is not phenomenally conscious (Andreotta 2021, 22–23). But one could envision someone who denied this claim of independence.

  20. This idea is, however, not limited to metaethical constructivism, but has been developed in considerable detail within other areas of philosophy—notably feminist philosophy (Lindemann 2019, chapter 4) and pragmatism (Rorty 1989). It finds empirical support from social identity theory and self-categorization theory (Jenkins 2014).

  21. The notion of “taking on face” is one that Gunkel and Coeckelbergh derive from Emmanuel Levinas.

  22. In response, scholars have recently argued that, despite its own claims to promote a critical or reflexive attitude, MR ends up perpetuating anthropocentric biases (Gordon 2022b; Sætra 2021).

  23. In claiming that relationalists require a theory of error, I do not mean to claim that appealing to status-conferring properties to ground moral status is a universal practice. Indeed, traditional sub-Saharan African and contemporary Japanese societies do not rely on individualist intuitions (Jecker and Nakazawa 2022). My claim is that insofar as relationalists deny the legitimacy of individualist justifications, they owe an explanation not only of why these justifications are mistaken, but of how such a justificatory error became so prevalent—especially within post-Enlightenment Western societies.

  24. Constrained relationalism bears important similarities to other positions within the theoretical landscape. John Danaher has recently advanced a view called ethical behaviourism (EB), according to which an entity has moral status if it consistently behaves like other entities to which we ascribe moral status (Danaher 2020). Both EB and constrained relationalism are compatible with a range of views about how moral status attributions are justified (Danaher 2020, 2024). Moreover, both views prioritize normative and epistemological questions about moral status over metaphysical ones (Danaher 2020, 2027). But whereas EB focuses on justifying empirical inquiry into an entity’s behavior as a primary means of determining its moral status, constrained relationalism focuses on a broader range of deliberative strategies, including transcendental critique, sociolinguistic analysis, phenomenological inquiry, and so on. Constrained relationalism also shares important affinities with pluriversal approaches to ethical thought (Reiter 2018). While a detailed comparison is beyond the scope of this paper, both positions are amenable to multiple legitimate methods and forms of inquiry in ethics. They are also resistant to the assumption that all ethical questions—including those about the limits of moral considerability—admit of a single answer that holds universally.

References

  • Andreotta, A. J. (2021). The hard problem of AI rights. AI & Society, 36, 19–32.

  • Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

  • Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish & W. Ramsey (Eds.), The Cambridge handbook of artificial intelligence (pp. 316–334). Cambridge University Press.

  • Brey, P. (2008). Do we have moral duties towards information objects? Ethics and Information Technology, 10, 109–114.

  • Bryson, J. J. (2009). Robots should be slaves. In Y. Wilks (Ed.), Close engagements with artificial companions: Key social, psychological, ethical and design issues. John Benjamins Publishing Company.

  • Cappuccio, M. L., Peeters, A., & McDonald, W. (2019). Sympathy for Dolores: Moral consideration for robots based on virtue and recognition. Philosophy & Technology, 33(1), 9–31.

  • Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.

  • Coeckelbergh, M. (2010). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12, 209–221.

  • Coeckelbergh, M. (2012). Growing moral relations: Critique of moral status ascription. Palgrave Macmillan.

  • Coeckelbergh, M. (2014). The moral standing of machines: Towards a relational and non-Cartesian moral hermeneutics. Philosophy & Technology, 27(1), 61–77.

  • Coeckelbergh, M. (2018). Why care about robots? Empathy, moral standing, and the language of suffering. Kairos. Journal of Philosophy & Science, 20(1), 141–158.

  • Coeckelbergh, M. (2022a). Robot ethics. MIT Press.

  • Coeckelbergh, M. (2022b). The Ubuntu robot: Towards a relational conceptual framework for intercultural robotics. Science and Engineering Ethics, 28, 16.

  • Coeckelbergh, M., & Gunkel, D. J. (2014). Facing animals: A relational, other-oriented approach to moral standing. Journal of Agricultural and Environmental Ethics, 27, 715–733.

  • Danaher, J. (2017). Should we be thinking about sex robots? In J. Danaher & N. McArthur (Eds.), Robot sex: Social and ethical implications. MIT Press.

  • Danaher, J. (2019). The rise of the robots and the crisis of moral patiency. AI & Society, 34, 129–136.

  • Danaher, J. (2020). Welcoming robots into the moral circle: A defence of ethical behaviourism. Science and Engineering Ethics, 26, 2023–2049.

  • Darling, K. (2016). Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In R. Calo, A. Michael Froomkin, & I. Kerr (Eds.), Robot law (pp. 213–231). Edward Elgar.

  • Darling, K. (2021). The new breed: What our history with animals reveals about our future with robots. Henry Holt & Company.

  • DeGrazia, D. (2008). Moral status as a matter of degree? The Southern Journal of Philosophy, 46(2), 181–198.

  • DiSilvestro, R. (2010). Human capacities and moral status. Springer.

  • Floridi, L. (1999). Information ethics: On the philosophical foundation of computer ethics. Ethics and Information Technology, 1, 33–52.

  • Floridi, L. (2002). On the intrinsic value of information objects and the infosphere. Ethics and Information Technology, 4, 287–304.

  • Ford, M. (2015). Rise of the robots: Technology and the threat of a jobless future. Basic Books.

  • Frank, L., & Nyholm, S. (2017). Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable? Artificial Intelligence and Law, 25, 305–323.

  • Gordon, J.-S. (2021). Artificial moral and legal personhood. AI & Society, 36, 457–471.

  • Gordon, J.-S. (2022a). Are superintelligent robots entitled to human rights? Ratio, 35, 181–193.

  • Gordon, J.-S. (2022b). The African relational account of social robots: A step back? Philosophy & Technology, 35, 49.

  • Gordon, J.-S., & Gunkel, D. J. (2022). Moral status and intelligent robots. The Southern Journal of Philosophy, 60(1), 88–117.

  • Gunkel, D. J. (2011). The machine question. MIT Press.

  • Gunkel, D. J. (2014). A vindication of the rights of machines. Philosophy & Technology, 27, 113–132.

  • Gunkel, D. J. (2018). The other question: Can and should robots have rights? Ethics and Information Technology, 20, 87–99.

  • Harman, E. (2003). The potentiality problem. Philosophical Studies, 114, 173–198.

  • Jaworska, A., & Tannenbaum, J. (2013). The grounds of moral status. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2018 ed.). https://plato.stanford.edu/archives/spr2018/entries/grounds-moral-status/. Accessed 1 June 2020.

  • Jecker, N. S. (2021a). Nothing to be ashamed of: Sex robots for older adults with disabilities. Journal of Medical Ethics, 47, 26–32.

  • Jecker, N. S. (2021b). You’ve got a friend in me: Sociable robots for older adults in an age of global pandemics. Ethics and Information Technology, 23(Suppl 1), S35–S43.

  • Jecker, N. S., & Nakazawa, E. (2022). Bridging east-west differences in ethics guidance for AI and robotics. AI, 3(3), 764–777.

  • Jecker, N. S., Atiure, C. A., & Ajei, M. O. (2022a). The moral standing of social robots: Untapped insights from Africa. Philosophy & Technology, 35(2), 1–22.

  • Jecker, N. S., Atiure, C. A., & Ajei, M. O. (2022b). Two steps forward: An African relational account of moral standing. Philosophy & Technology, 35(2), 38.

  • Jenkins, R. (2014). Social identity (4th ed.). Routledge.

  • Kamm, F. M. (2007). Intricate ethics: Rights, responsibilities, and permissible harm. Oxford University Press.

  • Kenny, A. (1973). Wittgenstein. Harvard University Press.

  • Kittay, E. F. (2005). At the margins of moral personhood. Ethics, 116(1), 100–131.

  • Korsgaard, C. (1996). The sources of normativity. Cambridge University Press.

  • Korsgaard, C. (2009). Self-constitution: Agency, identity, and integrity. Oxford University Press.

  • Lindemann, H. (2019). An invitation to feminist ethics (2nd ed.). Oxford University Press.

  • Marti, P. (2010). Robot companions. Interaction Studies, 11(2), 220–226.

  • McArthur, N. (2017). The case for sexbots. In J. Danaher & N. McArthur (Eds.), Robot sex: Social and ethical implications. MIT Press.

  • McMahan, J. (2005). Our fellow creatures. The Journal of Ethics, 9(3/4), 353–380.

  • Mosakas, K. (2021). On the moral status of social robots: Considering the consciousness criterion. AI & Society, 36, 429–443.

  • Müller, V. C. (2021). Is it time for robot rights? Moral status in artificial entities. Ethics and Information Technology, 23, 579–587.

  • Neely, E. L. (2014). Machines and the moral community. Philosophy & Technology, 27(1), 97–111.

  • Nørskov, M. (2016). Social robots: Boundaries. Routledge.

  • Parviainen, J., & Coeckelbergh, M. (2021). The political choreography of the Sophia robot: Beyond robot rights and citizenship to political performances for the social robotics market. AI & Society, 36, 715–724.

  • Rachels, J. (2005). Drawing lines. In C. R. Sunstein & M. C. Nussbaum (Eds.), Animal rights: Current debates and new directions. Oxford University Press.

  • Reiter, B. (2018). Introduction. In B. Reiter (Ed.), Constructing the pluriverse: The geopolitics of knowledge. Duke University Press.

  • Rorty, R. (1989). Contingency, irony, and solidarity. Cambridge University Press.

  • Sætra, H. S. (2021). Challenging the neo-anthropocentric relational approach to robot rights. Frontiers in Robotics and AI, 8, 744426.

  • Sagoff, M. (1984). Animal liberation and environmental ethics: Bad marriage, quick divorce. Osgoode Hall Law Journal, 22(2), 297–307.

  • Schneider, S. (2019). Artificial you: AI and the future of your mind. Princeton University Press.

  • Schwitzgebel, E., & Garza, M. (2015). A defense of the rights of artificial intelligences. Midwest Studies in Philosophy, 39(1), 98–119.

  • Sharkey, A., & Sharkey, N. (2010). Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology, 14, 27–40.

  • Sparrow, R. (2021). Why machines cannot be moral. AI & Society, 36, 685–693.

  • Street, S. (2012). Coming to terms with contingency: Humean constructivism about practical reason. In J. Lenman & Y. Shemmer (Eds.), Constructivism in practical philosophy. Oxford University Press.

  • Tavani, H. T. (2018). Can social robots qualify for moral consideration? Reframing the question about robot rights. Information, 9(4), 73.

  • Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. Basic Books.

  • Véliz, C. (2021). Moral zombies: Why algorithms are not moral agents. AI & Society, 36, 487–497.

  • Wareham, C. S. (2021). Artificial intelligence and African conceptions of personhood. Ethics and Information Technology, 23(2), 127–136.

  • Warren, M. A. (1997). Moral status: Obligations to persons and other living things. Oxford University Press.


Acknowledgements

I am grateful to Colin Koopman for providing feedback on an earlier draft of this paper. I would also like to thank two anonymous reviewers for their exceptionally helpful comments.

Funding

The author declares that no funds, grants, or other support were received during the preparation of this manuscript.

Author information

Corresponding author

Correspondence to Paul Showler.

Ethics declarations

Ethics Approval

N/A.

Consent to Participate

N/A.

Consent to Publish

N/A.

Competing Interests

The author has no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Showler, P. The Moral Status of Social Robots: A Pragmatic Approach. Philos. Technol. 37, 51 (2024). https://doi.org/10.1007/s13347-024-00737-9

