Granting Automata Human Rights: Challenge to a Basis of Full-Rights Privilege

Abstract

As engineers propose constructing humanlike automata, the question arises as to whether such machines would merit human rights. The issue warrants serious and rigorous examination, although it has not yet cohered into a conversation. To give it a sure direction, this paper proposes phrasing the issue in terms of whether humans are morally obligated to extend to maximally humanlike automata full human rights, that is, those set forth in common international rights documents. The paper’s approach is to consider the ontology of humans and of automata and to ask whether an ontological difference between them, one pertaining to the very bases of human rights, affects the latter’s claim to full human rights. Considering common bases of human rights, can these bases tell us whether a certain ontological distinction of humans from automata, or a de facto distinction about humans tacitly acknowledged by full-rights-recognizing societies, makes a difference in whether humans are morally obligated to assign these entities full rights? Questions of human rights to security also arise. The conclusion is that humans need not be under any moral obligation to confer full human rights on automata. The paper’s ultimate point is not to close the discussion with this ontological cap but to set a solid moral and legal groundwork for opening it up tout court.

Notes

  1.

    As to the terms “legal rights,” “moral rights,” “human rights,” and “persons,” I strive to adhere to the following. By “full human rights,” I refer to the complete set of rights recognized in major rights documents and charters, such as the United Nations Universal Declaration of Human Rights (United Nations 1948) and the Declaration of Rights of Indigenous Peoples (United Nations 2007). I use “human rights” as a more general term that may not encompass every right in those documents. With one exception, I avoid the term “legal rights” because of both its vagueness and, considering the hundreds of national governments across the globe, its ambiguity. “Moral rights” refers to rights that some moral system may enjoin but that no government or charter may yet endorse. “Person” is understood in the common notion of a human being, and “human being” as a member of the species Homo sapiens.

  2.

    While many readers may dismiss the scenario of maximally humanlike automata as scientifically impossible, without my arguing against this point, I propose that it is philosophically worthwhile to look at an extreme sample case as a test for moral and sociopolitical assumptions. Similarly, Plato stipulated the unlikely Republic to test our ideas of justice, and Putnam conceived the impossibly water-like XYZ to test our assumptions about reference. Furthermore, I overlook the Humanlike Automaton Marketing paradox, by which a corporation constructs an automaton so humanlike that to sell or buy it would amount to slavery, so the corporation cannot enter the market. (We may presume that a wealthy Silicon Valley idealist or misanthrope constructs the maximally humanlike automaton in defiance of common ethics or simply to do it.)

  3.

    Information ethics (IE) seems to face the physical problem that complex life forms such as humans increase the universe’s energy, so, morally, they should all be eradicated. Thus, the moral theory defeats itself.

  4.

    My approach, as will be seen, is hypothetical, resting conditionally upon a widely held belief about the nature of human rights and about the properties of human beings that render them deserving of such rights.

  5.

    It also holds that C(x,y) → A(x) and P(x,y,z) → A(x), because A does not say how x came into being. But A can be true without C’s or P’s being true, because x may come into being without a constructor (or constructor’s purpose). This possibility for A is of central interest in this paper.

  6.

    I do not deny tout court the possibility of constructing something without a purpose. A bolt of lightning may hit a house, causing a person’s hand holding a hammer to hit a rock and form a perfect plate. The discussion could digress into a metaphysical problem of action and agency. Did the lightning construct the plate? Or does the accidental constructor’s perception “construct” the plate, or someone walking into the room and declaring “Aha, a plate!”? I am saying that, for this article’s purposes and for simplicity, I leave out this highly improbable if conceivable possibility of accidental construction and retain the biconditional and the notion that constructors have a purpose in their construction. At the risk of apparent inconsistency, I retain the distinct predicates C and P both (1) to bring out the fact that, for all practical purposes, constructors have some sort of purpose in constructing, which the biconditional confirms, and (2) to acknowledge that there is an underlying metaphysical problem of action and agency that I cannot pursue here.

  7.

    [C(x,y) ∧ P(x,y,z)] → A(x),

    However, is it the case that

    A(x) → ~ [C(x,y) ∨ P(x,y,z)] ?

    It does not seem problematic to maintain

    [A(x) → ~ C(x,y)] ∧ [A(x) → ~ P(x,y,z)]
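
    A truth-table check (a sketch in my own notation, not part of the original note) shows why the answer cannot come from logic alone: the first conditional does not entail the negative conditionals, so maintaining them is a substantive stipulation about how automata come into being.

    ```latex
    % Countermodel: let the valuation v assign "true" to C(x,y), P(x,y,z), and A(x).
    \[
      v \models [C(x,y) \wedge P(x,y,z)] \rightarrow A(x)
      \quad \text{(true: antecedent and consequent are both true)}
    \]
    \[
      v \not\models A(x) \rightarrow \neg\,[C(x,y) \vee P(x,y,z)]
      \quad \text{(false: } A(x) \text{ is true, the consequent false)}
    \]
    % Hence [A(x) -> ~C(x,y)] ^ [A(x) -> ~P(x,y,z)] must be defended on
    % ontological grounds (how x in fact came into being), not derived.
    ```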

  8.

    Assuming that we cannot negotiate normatively with God.

  9.

    It has been argued (Miller 2014) that a person has a right to be the kind of being one is and that this right is implicit in rights documents. Furthermore, humans have a right to be their species; by extension, animals would have a comparable right.

  10.

    I acknowledge that in taking the stance in this section, by which humans have come into existence without a purpose behind their doing so, I risk alienating readers of a certain creationist persuasion and perhaps some adamantly functionalist, teleological evolutionists. I must run that risk, while appealing to readers who concur with A_S(x) that they should not be obliged to grant humanlike automata full human rights. I note that not all theists or even creationists need take issue with this approach. They may still view God as having created this universe, setting it in motion and establishing its laws so that it takes on its own course, with humans coming into existence as a happenstance of those developments, much as in a universe with no deliberate act of creation.

  11.

    Procreation by in vitro fertilization poses no foil to the discussion. The human being who emerges after this technique has been applied to an egg and sperm is not “created” except in a very loose sense of the word, any more than a couple’s having intercourse that leads to the coupling of egg and sperm creates a human being; in both cases the already given biochemical processes of embryological development are merely allowed to form a human being.

  12.

    Even narratives of a supreme being’s creation of human beings commonly do not depict the creator’s operating from normative rules such as hypothetical imperatives but rather from arbitrary command (monotheistic traditions), divine whim (animist traditions; see Lee and Daly 1999), or something like the unfolding of the supreme being’s essence (Spinoza 1994).

  13.

    See the section below on how A_S(x), though not itself a comprehensive doctrine, allows for comprehensive doctrines to be built upon it, if desired.

  14.

    Kant in some political writings (1970) does see humanity as having a certain trajectory of moral improvement and thus a type of destiny: to strive continually for that purpose, for to do otherwise would seem to go against the very basis of practical reasoning. However, this fact would not mean that human beings who do not acknowledge or pursue that purpose thereby forsake their rights. That is, he does not build his theory of rights upon the notion that humans must pursue that purpose to merit rights. They merit those rights as humans, tout court, whatever they subsequently do.

  15.

    Because the patient/subject provides the standard by which the violation is assessed, I diverge somewhat from Pogge’s historical interpretation. I see that metaphysical assumptions do not detract from the importance of the subject-as-gauge; the human subject as a given kind of entity with certain set characteristics (whatever those may be) forms the de facto basis for appropriate conduct. However, even if my interpretation of natural law in this context is incorrect and some kind of teleology is indeed inherent in the rules of public conduct, my overall argument is not harmed, because by the time of natural rights and later developments, any such teleology or purpose for human conduct is absent from the constraints-basis for human rights.

  16.

    I emphasize that those people who subscribe to the two- or three-place predicate, if they profess democracy and human rights, should recognize the one-place predicate A_S′(x) already mentioned and discussed further in a later section.

  17.

    It may be protested that some individuals are indeed brought into existence for a specific purpose, much as the automaton may be brought into existence for whatever purpose the constructor has in doing so. A couple thus may produce an offspring to help on the farm. However, this objection misses at least two points. One is that the ontological difference applies to the whole species and to each member insofar as it is a member of that species. The couple’s attempt at establishing a purpose for the child does not affect the fact that the species came into existence by no such deliberate act. Second, those parents’ purpose is of the type that the child may or may not accept; it does not define the child. By contrast, an automaton’s constructors may have the purpose of building it to prove what amazing brains they are. The automaton has no option whether to accept this purpose or not; it is simply the defining purpose behind that entity’s construction, even if that purpose fails and the automaton escapes to live a life sipping cocktails by the seashore.

  18.

    One may assert that an organism does in fact have a purpose, or purposes: to “be reproduced” (whatever that may mean) or to reproduce. I find this assertion implausible, if not incoherent. To start, humans are well characterized as beings who can establish their own purposes, and those purposes may certainly exclude reproduction; the person is no less a person for excluding such an action. Even among other animals, it is not clear that their purpose is to reproduce, even if reproduction is the only way by which these beings come to be. If a rhinoceros does not reproduce, it is no less a rhinoceros, and to say it has thereby failed its purpose as a rhinoceros is to pass a normative moral judgment upon a being; one is then obliged to establish a sufficiently complete moral system on the assumption that the telos of life is to reproduce. Consider the possibility that all rhinoceroses happen not to reproduce and the species dies out. It is not clear how they have failed a purpose, as if being a rhinoceros were a necessary purpose. It is more straightforward to recognize that rhinoceroses happened to arise and have endured as a species because they happened to reproduce. They have no particular duty or purpose to be rhinoceroses. If they do happen to reproduce, rhinoceroses simply continue to be.

  19.

    The problem of human endangerment comes from robots with artificial intelligence sufficiently powerful to do harm but insufficiently developed morality. The recently formed Future of Life Institute, whose supporters include Stephen Hawking and Bill Gates, exemplifies this concern in expressing that extensive development of AI poses one of the greatest potential threats to life. With insufficient morality, much like a human sociopath, a robot with high AI may not care whether it is subject to a criminal code, but its status as a full human being could only aid it, or its creator, in unconscionable acts.

References

  1. Ackerman E. (2012) Human Rights Watch is apparently terrified of military robots, but you shouldn’t be. IEEE Spectrum, Nov. 28. http://spectrum.ieee.org/automaton/robotics/military-robots/human-rights-watch-is-apparently-terrified-of-military-robots. Accessed 7 April, 2014.

  2. Altmann J. (2013) Arms control for armed uninhabited vehicles: an ethical issue. Ethics Inform Tech 15(2), pp. 137–162.

  3. Anderson K, Waxman M. (2012) Human Rights Watch report on killer robots, and our critique. Lawfare, November 26. http://www.lawfareblog.com/2012/11/human-rights-watch-report-on-killer-robots-and-our-critique/. Accessed 7 April, 2014.

  4. Asimov I. (1950) I, Robot. Gnome Press, New York.

  5. Baertschi B. (2012) The moral status of artificial life. Environ Ethics 21, pp. 5–18.

  6. Basl J. (2014) Machines as moral patients we shouldn’t care about (yet): the interests and welfare of current machines. Philos Technol 27, pp. 79–96.

  7. Bochenski J. (1959) A precis of mathematical logic. Reidel, Dordrecht.

  8. Bryson J. (2000) A proposal for the humanoid Agent-Builder’s League (HAL). In: Barnden J. (ed), The proceedings of the AISB 2000 Symposium on Artificial Intelligence, Ethics and (Quasi-)Human Rights. Available at: http://www.cs.bath.ac.uk/~jjb/ftp/HAL00.html; accessed July 20, 2015.

  9. Bryson J. (2009) Building persons is a choice. An invited commentary on Anne Foerst, “Robots and Theology.” Erwägen Wissen Ethik, November.

  10. Bryson J. (2010) Robots should be slaves. In: Wilks Y. (ed.) Close engagements with artificial companions: key social, psychological, ethical and design issues, pp. 63–74. John Benjamins.

  11. Bryson J., Kime PP. (2011) Just a machine: why machines are perceived as moral agents. The Twenty-Second International Joint Conference on Artificial Intelligence, Barcelona, Spain.

  12. Coeckelbergh M. (2014) The moral standing of machines: toward a relational and non-Cartesian moral hermeneutics. Philos Technol 27, pp. 61–77.

  13. Darling K. (2012) Extending legal rights to social robots. We Robot Conference. University of Miami, April. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2044797. Accessed 8 April, 2014.

  14. Davenport D. (2014) Moral mechanisms. Philos Technol 27, pp. 47–60.

  15. Deplazes-Zemp A. (2012). The moral impact of synthesizing organisms: biocentric views on synthetic biology. Environ Ethics 21, pp. 63–83.

  16. Donnelly J. (2001). Non-discrimination and sexual orientation: making a place for sexual minorities in the global human rights regime. In: Hayden P. (ed.) The philosophy of human rights, pp. 547–573. Paragon House, St. Paul, MN.

  17. Flannery K, Marcus J. (2012). The creation of inequality: how our prehistoric ancestors set the stage for monarchy, slavery, and empire. Harvard University Press, Cambridge, MA.

  18. Floridi L. (1999) Information ethics: on the philosophical foundation of computer ethics. Ethics Inform Tech 1, pp. 37–56.

  19. Floridi L. (2008) Information ethics: its nature and scope. In: van den Hoven J, Weckert J (eds.) Information technology and moral philosophy, pp. 40–65. Cambridge University Press, Cambridge.

  20. Freitas, RA. (1985). The legal rights of robots. Student lawyer 13, pp. 54–56. http://www.rfreitas.com/Astro/LegalRightsOfRobots.htm. Accessed 4 April, 2014.

  21. Gunkel DJ. (2014) A vindication of the rights of machines. Philos Technol 27, pp. 113–132.

  22. Grotius H. (2001) The rights of war and peace (excerpts). In: Hayden P. (ed.) The philosophy of human rights, pp. 48–53. Paragon House, St. Paul, MN.

  23. Hauskeller M. (2014) Sex and the posthuman condition. Palgrave Macmillan, Hampshire, UK.

  24. Hellström T. (2013) On the moral responsibility of military robots. Ethics Inform Tech 15(2), pp. 99–107.

  25. Inayatullah S. (1988) The rights of robots: technology, law and culture in the 21st century. Futures 20(2), pp. 119–136.

  26. Kant I. (1970). Political writings. Reiss HS (ed.), Nisbet HB (tr.) Cambridge University Press, Cambridge.

  27. Latour B. (2011) Love your monsters: why we must care for our technologies as we do our children. Breakthrough J 2. http://thebreakthrough.org/index.php/journal/past-issues/issue-2. Accessed 15 May, 2014.

  28. Lee RB, Daly R. (eds.) (1999). The Cambridge encyclopedia of hunters and gatherers. Cambridge University Press, Cambridge.

  29. Lieber J. (1985) Can animals and machines be persons? A dialogue. Hackett, Indianapolis IN.

  30. Lundström L. (dir.) (2012) Real Humans (Äkta människor), television series.

  31. Miller L. (2014) Is species integrity a human right? Human Rights Rev 15, pp. 177–199.

  32. Moody-Adams M. M. (1999) The idea of moral progress. Metaphilosophy 30(3), pp. 168–185.

  33. Nolan M. (1997). The myth of soulless women. First Things, April. http://www.firstthings.com/article/1997//04/002-the-mth-of-soulless-women. Accessed 16 May, 2014.

  34. Nolan M. (2006) Do women have souls? the story of three myths. The Church in History Information Center. Available at www.churchinhistory.org. Accessed 8 April, 2014.

  35. Noorman M, Johnson DG. (2014) Negotiating autonomy and responsibility in military robots. Ethics Inform Tech 16, pp. 51–62.

  36. Nussbaum M. (2007). On moral progress: A response to Richard Rorty. The University of Chicago Law Review 74(3) pp. 939–960.

  37. Pogge TW. (2001) How should human rights be conceived? In: Hayden P (ed.) The philosophy of human rights, pp. 187–211. Paragon House, St. Paul MN.

  38. Rawls J. (1971) A theory of justice. Belknap, Cambridge MA.

  39. Rawls J. (1986) Political liberalism. Columbia University Press, New York.

  40. Rawls J. (2001) Justice as fairness: a restatement. Belknap, Cambridge MA.

  41. Rorty R. (2007) Dewey and Posner on pragmatism and moral progress. The University of Chicago Law Review 74(3) pp. 915–927.

  42. Schark M. (2012). Synthetic biology and the distinction between organisms and machines. Environ Val 21, pp. 19–41.

  43. Singer P. (1981) The expanding circle: ethics, evolution, and moral progress. Princeton University Press, Princeton.

  44. Singer P, Cavalieri P (eds.). (1993) The Great Ape Project: equality beyond humanity. St. Martin’s Griffin, New York.

  45. Søraker JH. (2014) Continuities and discontinuities between humans, intelligent machines, and other entities. Philos Technol 27, pp. 31–46.

  46. Spinoza B. (1994) A Spinoza reader: The Ethics and other works. Curley E (tr.). Princeton University Press, Princeton.

  47. Taylor C. (1986) Respect for nature. Princeton University Press, Princeton.

  48. Torrance S. (2014) Artificial consciousness and artificial ethics: between realism and social relationism. Philos Technol 27, pp. 9–29.

  49. United Nations (1948). Universal declaration of human rights.

  50. United Nations (2007). United Nations declaration on the rights of indigenous peoples.

  51. Veruggio G. (2008) Roboethics: philosophical, social and ethical implications of robotics. Presented at: International Symposium Robotics: New Science, Rome, February 20. http://pt.slideshare.net/igorod2/8-veruggio-roboethics-skolkovo-31835653. Accessed 7 April, 2014.

  52. Von Willigenburg T. (2008) Philosophical reflection on bioethics and limits. In: Düwell M, Rehmann-Sutter C, Mieth D (eds.) The contingent nature of life, pp. 147–156. Springer, Berlin.

  53. Wallach W, Allen C. (2013) Framing robot arms control. Ethics Inform Tech 15(2), pp. 125–135.

  54. Whitby B. (2008) Sometimes it’s hard to be a robot: a call for action on the ethics of abusing artificial agents. Interacting with Computers 20, pp. 326–333.

  55. Wollstonecraft M. (2001) A vindication of the rights of women. In: Hayden P (ed.) The philosophy of human rights, pp. 101–108. Paragon House, St. Paul MN.

Author information

Corresponding author

Correspondence to Lantz Fleming Miller.

Cite this article

Miller, L.F. Granting Automata Human Rights: Challenge to a Basis of Full-Rights Privilege. Hum Rights Rev 16, 369–391 (2015). https://doi.org/10.1007/s12142-015-0387-x

Keywords

  • Automata
  • Full human rights
  • Moral rights
  • Moral status
  • Ontological bases for rights
  • Security