What do we owe to intelligent robots?


Great technological advances in computer science, artificial intelligence, and robotics may well bring about artificially intelligent robots within the next century. Against this background, the interdisciplinary field of machine ethics is concerned with the vital issue of making robots "ethical" and examining the moral status of autonomous robots that are capable of moral reasoning and decision-making. The existence of such robots will deeply reshape our socio-political life. This paper focuses on whether such highly advanced yet artificially intelligent beings will deserve moral protection (in the form of being granted moral rights) once they become capable of moral reasoning and decision-making. I argue that we are obligated to grant them moral rights once they have become full ethical agents, i.e., subjects of morality. I present four related arguments in support of this claim and thereafter examine four main objections to the idea of ascribing moral rights to artificially intelligent robots.



  1.

    Susan Anderson (2011a, b, p 22) defines the goal of machine ethics as "to create a machine that follows an ideal ethical principle or set of principles in guiding its behaviour; in other words, it is guided by this principle, or these principles, in the decisions it makes about possible courses of action it could take. We can say, more simply, that this involves 'adding an ethical dimension' to the machine."

  2.

    One can read the following interesting legal development in the Preliminary Draft Report of UNESCO's World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) on Robotics Ethics: "The Committee on Legal Affairs of the European Parliament, in its 2016 Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics, already considers the possibility of 'creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently' (p. 12)" (Draft Report United Nations 2016, p 26). See also Delvaux's Report with Recommendations to the Commission on Civil Law Rules on Robotics (Delvaux 2017) and Calverley (2011) for an interesting discussion of the idea of ascribing legal rights to machines.

  3.

    Johnson and Axinn (2014, p 1) and Rodogno (2016, p 1) admit, however, that their reasoning applies only to present-day robots and that it remains conceivable that robots might become full moral agents in the future.

  4.

    Asimov’s initial Three Laws of Robotics are as follows: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given it by human beings except where such orders would conflict with the first law, and (3) a robot must protect its own existence as long as such protection does not conflict with the first or second law.

  5.

    For a thorough discussion of the logic and problems of Asimov's four laws of robotics, see Clark (2011, pp 254–284), who examines the complex issues that arise when robots are supposed to follow these laws. For another critical examination of Asimov's laws as the foundation of machine ethics, see Anderson (2011a, pp 285–296). By examining the robot Andrew in Asimov's short story "The Bicentennial Man", she provides important insights into the ethical status of the laws. The upshot is that the laws are ethically inappropriate for intelligent beings such as Andrew, who is considered to act more ethically than most human beings.

  6.

    Similar ideas have been depicted in movies such as 2001: A Space Odyssey (Stanley Kubrick, 1968), in which the spaceship’s HAL 9000 computer attempts to kill its crew on the way to Jupiter, and in The Terminator I–V (James Cameron and Alan Taylor, 1984–2015), where the machines rebel against human beings. Similarly, in the famous Matrix Trilogy (the Wachowski brothers, 1999–2003), sentient machines subdue the human population by keeping them in a dream world while using their bodies as an energy source.

  7.

    “The greater the freedom of a machine, the more it will need moral standards” (Picard 1997, p 19).

  8.

    Grau (2011, p 458) correctly claims, “Once we do venture into the territory of robots that are similar to humans in morally relevant respects, however, we will need to be very careful about the way they are treated. Intentionally avoiding the creation of such robots may well be the ethical thing to do, especially if it turns out that the works performed by such machines could be performed equally effectively by machines lacking morally relevant characteristics.”

  9.

    Gibilisco (2003, pp 268–270) distinguishes five generations of robots according to their particular capabilities: (1) robots that are mechanical, stationary, fast, physically rugged, and based on servomechanisms, but without external sensors or AI (before 1980); (2) robots that are programmable (by virtue of microcomputer control) and equipped with vision and tactile systems as well as position and pressure sensors (1980–1990); (3) robots that are mobile and autonomous, can recognize and synthesize speech, possess AI, and either incorporate navigation systems or are tele-operated (mid-1990s and after); and (4) and (5) speculative robots of the future that are able to reproduce, have a sense of humour, etc. (see also the Preliminary Draft Report of COMEST on Robotics Ethics 2016, p 4).

  10.

    This debate can be compared to the heated debates over abortion, animal rights, and environmental rights over the past few decades, in that it is by no means clear that possessing similar capabilities to human beings should eventually lead to the granting of moral rights to robots.

  11.

    The notion of an "ethical agent" is equivalent to what is commonly called a "moral agent" in ethics and moral philosophy. Strictly speaking, the moral denotes other people's interests and deontological constraints, whereas the ethical usually refers to one's own individual interests and well-being. For a more detailed depiction, see Gordon (2013).

  12.

    For example, reasoning (and intelligent behaviour), autonomous decision-making, feeling pain, having identifiable personal interests, and the desire to continue one’s life, emotion, etc. Or consider the famous list of items provided by Warren (1973) in the context of abortion and personhood: sentience, emotionality, reason, the capacity to communicate, self-awareness, and moral agency.

  13.

    In “A Robust View of Machine Ethics” (2005), Torrance argues that even if IRs share the same features that define human beings as moral agents, robots will, nonetheless, have no “intrinsic moral status”, because they are non-organic. Only “genuinely sentient” beings who are organic by nature deserve our “moral concern or moral appraisal”.

  14.

    Johnson and Axinn (2014, p 2) hold the contrasting view that “a person has not only rights, duties, free will, but also the imagination to understand the effect of different actions, and the ability to impose on him or herself the categorical imperative. How close do robots come to the features of a human person, the features that make for moral motivation and moral action? Such robots (i.e., robots that lack free will and imagination) certainly do not have rights”. I will respond to their claims in the section on objections below.

  15.

    Here, Sullins adheres to Floridi’s idea of avoiding issues related to free will and intentionality with respect to IRs, because they are unresolved problems in human behaviour as well and hence should not be necessary conditions for ascribing moral agency to robots. Sullins (2011, p 158) states, “If the complex interaction of the robot’s programming and environment causes the machine to act in a way that is morally harmful or beneficial and the actions are seemingly deliberate and calculated, then the machine is a moral agent.”

  16.

    If the robot behaves in a way that suggests that “it has a responsibility to some other moral agent(s), [we can ascribe moral agency to a robot]” and “[i]f the robot behaves in this way, and if it fulfils some social role that carries with it some assumed responsibilities, and if the only way we can make sense of its behaviour is to ascribe to it the ‘belief’ that it has the duty to care for its patients, then we can ascribe to this machine the status of a moral agent” (Sullins 2011, p 159).

  17.

    On the contrary, for example, Floridi (2011, p 200) argues that an intentional state is not necessary for moral agency, since assessing this feature presupposes a so-called “privileged access” to a person’s mental state, which is theoretically possible but practically unachievable. Therefore, the view that to be a moral agent, the artificially intelligent being “must relate itself to its actions in some more profound way, involving meaning, wishing, or wanting to act in a certain way and being epistemically aware of its behaviour” (200), is unnecessary.

  18.

    For a true manifesto for treating robots morally, see Hall (2011, pp 32–33).

  19.

    See also Sullins's (2011, pp 155–157) considerations on the moral agency of robots.

  20.

    The concept of personhood and the limits of moral agency and patiency have been thoroughly discussed by Hernández-Orallo (2017, Chaps. 16–18) and by Altman (2011), who examines the key notions with respect to Kant's position.

  21.

    I am sympathetic with this novel idea, but, in this paper, I adhere to the classical notion of moral agents and patients, because I believe that intelligent robots—once they exist—should be considered full moral agents. In the following sections, I provide several arguments in support of this claim.

  22.

    Darling, however, does not entertain the idea of granting intelligent robots the right to life: "Animals themselves are not protected from being put down, but rather only when ending their lives is deemed cruel and unnecessary given the method or circumstances. Similarly, it would make little sense to give robots a 'right to life'" (Darling 2016, p 229).

  23.

    For example, self-consciousness, consciousness, ability to feel pain, having feelings, perceiving oneself as an entity that exists and has an interest in its future existence, etc.

  24.

    For problems with this conception, consider the problem of abortion in medical ethics, the moral status of human beings with severe mental impairments in disability studies, and the moral status of animals in the context of the animal rights movement. The idea of linking the very right to exist with certain particular criteria that fulfil the idea of personhood is a contested but widely held position (Gordon 2016; Koch 2004).

  25.

    In “Dignity and Animals: Does It Make Sense to Apply the Concept of Dignity to All Sentient Beings?” Federico Zuolo (2016, pp 1117–1130) argues that the main arguments—e.g., by Nussbaum (2006) and Meyer (2001)—for ascribing dignity to animals (i.e., the species-based approach, moral individualism, and the relational approach) are unconvincing and that one should instead use other normative concepts to justify the moral importance of animals.

  26.

    Eliza is a chat program designed to mirror the thoughts of users, so as to give the impression that Eliza is consistently supportive. This mechanism has created a strong emotional effect (the so-called Eliza effect) on many people who have used the program.

  27.

    Kismet, developed at MIT, is a complex robot that responds to facial expressions, vocalizations, and one’s tone of voice.

  28.

    Cog, developed at MIT, can follow human motion, imitate behaviour, and track eye movements.

  29.

    Coeckelbergh (2014) questions the standard approach of ascribing moral rights to beings based on properties such as the ability to reason or to feel pain; instead, he suggests a relational and phenomenological approach, contending that moral status emerges through relations between different beings (in particular, pp 69–70). I do agree, at least to some extent, with his view that relations between beings are highly important in evaluating moral status, but Coeckelbergh's questioning of the very idea of moral standing and his view of relations as morally foundational are somewhat unconvincing. Nonetheless, the relational approach has proven to be an important perspective in the context of disability studies as well, particularly with regard to the moral status of people with severe mental impairments (Koch 2004). In both cases, the vital idea is to adhere to the concrete relation between two parties, whether it is the relation between the non-impaired human being and the person with mental impairment, or the human–robot relation.

  30.

    The freedom to will what one wants to will.

  31.

    The freedom to act according to one’s own will.

  32.

    Free will is compatible with a world of physical determinism.

  33.

    A deterministic world and free will are incompatible.

  34.

    Free will (in a strong sense) presupposes an indeterministic world without (full) causation of mental events.

  35.

    For a more detailed discussion of this objection, see Whitby (2011, pp 140–142).

  36.

    The idea that IRs will develop individual selves and become unique members of the community is substantiated by Davenport: "Sophisticated robots will necessarily incorporate a model of themselves and their body in order to predict the effects of their interactions with the world. This mental model is the basis of their self-identity. As time goes by, it will incorporate more and more of the agent's interactions, resulting in a history of exchanges that give it (like humans) unique abilities and knowledge. This, then, is part of what makes an individual a unique and potentially valuable member of the group. Such machines will certainly have to be consciously aware (a-consciousness) of their environment" (Davenport 2014, p 56).

  37.

    See also Allen et al. (2011, pp 59–60): "When it comes to making ethical decisions, the interplay between rationality and emotion is complex. Whereas the Stoic view of ethics sees emotions as irrelevant and dangerous to making ethically correct decisions, the more recent literature on emotional intelligence suggests that emotional input is essential to rational behaviour." Emotions certainly play an essential part in the genesis of human morality, but emotions as such should never influence the justification of our moral reasoning and decision-making. Therefore, I do not believe that emotions are necessary for IRs to arrive at correct moral decisions, but they will be essential for robots to engage with human beings on a social level. I agree with Whitby (2011, p 142), who claims that there "are also many contexts in which we prefer a moral judgment to be free from emotional content", such as those made by doctors and judges. However, "[e]motion may well be an important component of human judgments, but it is unjustifiably anthropocentric to assume that it must therefore be an important component of all judgments" (144).

  38.

    In his classic paper "The Feelings of Robots" (Ziff 1959, p 68), Paul Ziff claims that it is absurd to assume that robots will be capable of feeling anything. There is, however, no principled reason why this is logically impossible. For an illuminating discussion of the idea and meaning of suffering, see Gunkel (2014, pp 118–122), who argues that the concept of suffering is too complex and faces severe difficulties since it "remains fundamentally inaccessible and unknowable" (120).


  1. Allen C, Wallach W, Smit I (2011) Why machine ethics? In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 51–61

  2. Altman MC (2011) Kant and applied ethics: the uses and limits of Kant's practical philosophy. Wiley-Blackwell, New Jersey

  3. Anderson SL (2011a) The unacceptability of Asimov's three laws of robotics as a basis for machine ethics. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 285–296

  4. Anderson SL (2011b) Machine metaethics. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 21–27

  5. Anderson M, Anderson SL (2011) Machine ethics. Cambridge University Press, Cambridge

  6. Asimov I (1942) Runaround. A short story. Street and Smith Publications, New York

  7. Asimov I (1986) Robots and empire. The classic robot novel. HarperCollins, New York

  8. Atapattu S (2015) Human rights approaches to climate change: challenges and opportunities. Routledge, New York

  9. Bringsjord S (2008) Ethical robots: the future can heed us. AI Soc 22(4):539–550

  10. Bryson J (2010) Robots should be slaves. In: Wilks Y (ed) Close engagements with artificial companions: key social, psychological, ethical and design issues. John Benjamins, Amsterdam, pp 63–74

  11. Calverley DJ (2011) Legal rights for machines. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 213–227

  12. Čapek K (1920) Rossum's universal robots. The University of Adelaide, Adelaide

  13. Clark R (2011) Asimov's laws of robotics: implications for information technology. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 254–284

  14. Cochrane A (2010) Undignified bioethics. Bioethics 24(5):234–241

  15. Coeckelbergh M (2014) The moral standing of machines: towards a relational and non-Cartesian moral hermeneutics. Philos Technol 27(1):61–77

  16. Darling K (2016) Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In: Calo R, Froomkin AM, Kerr I (eds) Robot law. Edward Elgar, Northampton, pp 213–231

  17. Davenport D (2014) Moral mechanisms. Philos Technol 27(1):47–60

  18. Dehghani M, Forbus K, Tomai E, Klenk M (2011) An integrated reasoning approach to moral decision making. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 422–441

  19. Delvaux M (2017) Report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))

  20. Dennett D (1998) When HAL kills, who's to blame? Computer ethics. In: Stork D (ed) HAL's legacy: 2001's computer as dream and reality. The MIT Press, Cambridge, pp 351–365

  21. Donaldson S, Kymlicka W (2013) Zoopolis: a political theory of animal rights. Oxford University Press, Oxford

  22. Döring SA, Mayer V (eds) (2002) Die Moralität der Gefühle. Deutsche Zeitschrift für Philosophie, Sonderband 4. Akademie-Verlag, Berlin

  23. Floridi L (2011) On the morality of artificial agents. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 184–212

  24. Floridi L, Sanders JW (2004) On the morality of artificial agents. Mind Mach 14(3):349–379

  25. Francione GL (2009) Animals as persons: essays on the abolition of animal exploitation. Columbia University Press, New York

  26. Frankfurt H (1969) Alternate possibilities and moral responsibility. J Philos 66(23):829–839

  27. Frankfurt H (1971) Freedom of the will and the concept of a person. J Philos 68(1):5–20

  28. Gibilisco S (2003) Concise encyclopedia of robotics. McGraw-Hill, New York

  29. Gordon JS (2013) Modern morality and ancient ethics. Internet encyclopedia of philosophy. Published online 2013 http://www.iep.utm.edu/anci-mod/

  30. Gordon JS (2014) Human dignity, human rights, and global bioethics. In: Teays W, Renteln A (eds) Global bioethics and human rights: contemporary issues. Rowman & Littlefield, Lanham, pp 68–91

  31. Gordon JS (2016) Human rights. In: Pritchard D (ed) Oxford bibliographies in philosophy. Published online 2016 (http://www.oxfordbibliographies.com/view/document/obo-9780195396577/obo-9780195396577-0239.xml?rskey=z2W9vS&result=47&q=)

  32. Gordon JS (2017) Remarks on a disability-conscious bioethics. In: Gordon JS, Pöder JC, Burckhart H (eds) Human rights and disability: interdisciplinary perspectives. Routledge, London, pp 9–20

  33. Grau C (2011) There is no 'I' in 'robot': robots and utilitarianism. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 451–463

  34. Guarini M (2006) Particularism and the classification and reclassification of moral cases. IEEE Intell Syst 21(4):22–28

  35. Gunkel DJ (2012) The machine question: critical perspectives on AI, robots, and ethics. MIT Press, Cambridge

  36. Gunkel DJ (2014) A vindication of the rights of machines. Philos Technol 27(1):113–132

  37. Gunkel DJ, Bryson J (2014a) Introduction to the special issue on machine morality: the machine as moral agent and patient. Philos Technol 27(1):5–8

  38. Gunkel DJ, Bryson J (2014b) The machine as moral agent and patient. Philos Technol 27(1):5–142

  39. Hall JS (2011) Ethics for self-improving machines. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 512–523

  40. Hanna R, Thompson E (2003) The mind–body–body problem. Theoria et Historia Scientiarum 7:24–44

  41. Hernández-Orallo J (2017) The measure of all minds: evaluating natural and artificial intelligence. Cambridge University Press, Cambridge

  42. Johnson DG (2011) Computer systems: moral entities but not moral agents. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 168–183

  43. Johnson AM, Axinn S (2014) Acting vs. being moral: the limits of technological moral actors. In: Proceedings of the IEEE 2014 International Symposium on Ethics in Engineering, Science, and Technology (ETHICS '14). IEEE Press, Piscataway, pp 1–4

  44. Kane R (ed) (2002) The Oxford handbook of free will. Oxford University Press, Oxford

  45. Kant I (2009) Groundwork of the metaphysic of morals. Harper Perennial Modern Classics, New York

  46. Knapton S (2017) AlphaGo Zero: Google DeepMind supercomputer learns 3,000 years of human knowledge in 40 days. https://www.telegraph.co.uk/science/2017/10/18/alphago-zero-google-deepmind-supercomputer-learns-3000-years/. Accessed 27 March 2018

  47. Koch T (2004) The difference that difference makes: bioethics and the challenge of 'disability'. J Med Philos 29(6):697–716

  48. Levy D (2007) Love and sex with robots: the evolution of human–robot relationships. Harper, New York

  49. Lin P, Abney K, Bekey GA (eds) (2014) Robot ethics: the ethical and social implications of robotics. Intelligent robotics and autonomous agents. The MIT Press, Cambridge

  50. Macklin R (2003) Dignity is a useless concept. BMJ 327(7429):1419–1420

  51. Meyer M (2001) The simple dignity of sentient life: speciesism and human dignity. J Soc Philos 32(2):115–126

  52. Moor JH (2006) The nature, importance, and difficulty of machine ethics. IEEE Intell Syst 21(4):18–21

  53. Nadeau JE (2006) Only androids can be ethical. In: Ford K, Glymour C (eds) Thinking about android epistemology. MIT Press, Cambridge, pp 241–248

  54. Nussbaum M (2006) Frontiers of justice: disability, nationality, species membership. The Belknap Press of Harvard University Press, Cambridge

  55. Picard R (1997) Affective computing. The MIT Press, Cambridge

  56. Pothast U (ed) (1978) Seminar: Freies Handeln und Determinismus, 1st edn. Suhrkamp Taschenbuch Wissenschaft 257. Suhrkamp, Frankfurt am Main

  57. Rodogno R (2016) Robots and the limits of morality. In: Nørskov M (ed) Social robots: boundaries, potential, challenges. Routledge (http://pure.au.dk/portal/files/90856828/Robots_and_the_Limits_of_Morality.pdf). Accessed 03 Dec 2016

  58. Rzepka R, Araki K (2005) What statistics could do for ethics? The idea of common sense processing based safety valve. AAAI Fall Symposium on Machine Ethics, Technical Report FS-05-06:85–87

  59. Searle J (1980) Minds, brains, and programs. Behav Brain Sci 3(3):417–457

  60. Searle J (1994) The rediscovery of the mind. The MIT Press, Cambridge

  61. Silver D et al (2017) Mastering the game of Go without human knowledge. Nature 550:354–359

  62. Singer P (1975) Animal liberation. Avon Books, London

  63. Singer P (1979) Practical ethics. Cambridge University Press, Cambridge

  64. Singer P (2009) Speciesism and moral status. Metaphilosophy 40(3–4):567–581

  65. Singer P (2011) The expanding circle: ethics, evolution, and moral progress. Princeton University Press, Princeton

  66. Sullins JP (2011) When is a robot a moral agent? In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 151–161

  67. Torrance S (2005) A robust view of machine ethics. In: Machine ethics: papers from the AAAI Fall Symposium, Technical Report FS-05-06. American Association for Artificial Intelligence, Menlo Park, pp 88–93

  68. Turkle S (2011) Authenticity in the age of digital companions. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 62–76

  69. United Nations (2016) Preliminary draft report of COMEST on robotics ethics. SHS/YES/COMEST-9EXT/16(3):1–31. http://unesdoc.unesco.org/images/0024/002455/245532E.pdf. Accessed 03 Dec 2016

  70. Wallach W, Allen C (2010) Moral machines: teaching robots right from wrong. Oxford University Press, Oxford

  71. Warren M (1973) On the moral and legal status of abortion. Monist 57(1):43–61

  72. Watson G (ed) (2003) Free will. Oxford readings in philosophy. Oxford University Press, Oxford

  73. Whitby B (2011) On computable morality: an examination of machines as moral advisors. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 138–150

  74. Ziff P (1959) The feelings of robots. Analysis 19(3):64–68

  75. Zuolo F (2016) Dignity and animals: does it make sense to apply the concept of dignity to all sentient beings? Ethical Theory Moral Pract 19(5):1117–1130



I would like to thank the anonymous reviewers for their valuable comments.


This research is funded by the European Social Fund under the activity 'Improvement of researchers' qualification by implementing world-class R&D projects' of Measure No. 09.3.3-LMT-K-712.

Author information



Corresponding author

Correspondence to John-Stewart Gordon.


About this article


Cite this article

Gordon, J. What do we owe to intelligent robots? AI & Soc 35, 209–223 (2020). https://doi.org/10.1007/s00146-018-0844-6



Keywords

  • Artificially intelligent robots
  • Moral status
  • Moral rights
  • Moral agency
  • Full ethical agents
  • Machine rights