The hard limit on human nonanthropocentrism

Original Article · AI & SOCIETY

Abstract

There may be a limit on our capacity to suppress anthropocentric tendencies toward non-human others. Normally, we do not reach this limit in our dealings with animals, the environment, etc. Thus, continued striving to overcome anthropocentrism when confronted with these non-human others may be justified. Anticipation of super artificial intelligence may force us to face this limit, denying us the ability to free ourselves completely of anthropocentrism. This could be for our own good.

Notes

  1. Further, debate over legal personhood for robots has crossed over into the mainstream (Prodhan 2016; Floridi and Taddeo 2018).

  2. “Intelligent machine,” as used here, refers to a reasonably sophisticated product of artificial intelligence (AI) or related disciplines. An intelligent machine may be primarily hardware, as with a robot or digital computer, or primarily software, as with a virtual agent or software-based system. An intelligent machine may stand alone or be embedded in another artifact. It may be silicon-based or not—as with products from the field of synthetic biology. Further, an intelligent machine, for purposes of this essay, could be a hybrid of two or more materials—a silicon-based digital computer interfaced with a neural circuit made from biological material, perhaps.

  3. Although some may wish to stretch the meaning of “human being,” here I use “human” and “human being” to refer to a member of Homo sapiens.

  4. The “human” qualifier is necessary if one imagines an intelligent machine designed not to regard humans as the only entities with intrinsic moral value.

  5. For example, one could favor a particular non-human species over another species (human or non-human).

  6. There are other versions of biocentrism besides Taylor’s version (Thompson 2017, pp. 80–81). Thompson also points out that Taylor’s biocentrism is individualistic, whereas holistic biocentrist theories are concerned with set(s) of living objects, such as species.

  7. The “weak anthropocentric intrinsic value” of Hargrove (1992) is not the same thing as the “weak [ethical] anthropocentrism” of Thompson (2017). Hargrove observed that “anthropocentric” was being used incorrectly as a synonym for “instrumental” (pp. 183–184). His use of “weak anthropocentric” implies anthropocentric value that need not be instrumental—value could be intrinsic instead. As with Thompson’s use of “weak [ethical] anthropocentrism,” Hargrove allows intrinsic value for some non-humans. With Hargrove, however, it is not clear (to me) that human intrinsic value always beats non-human moral value, as it does under Thompson’s definition of “weak [ethical] anthropocentrism.”

  8. Thompson (2017) clarifies Hargrove’s nonanthropocentric intrinsic value by offering, as an example, the “good of its own” of a living organism. As an example of anthropocentric intrinsic value, Thompson observes how parents (non-instrumentally) value the life of a child for its own sake (pp. 82–83).

  9. Hargrove’s (1992, p. 192) interest is in protecting caves.

  10. Coeckelbergh and Gunkel (2016) have referred to this as “a relational and other-oriented concept” after the former’s “relational turn” (Coeckelbergh 2012) and the latter’s “thinking otherwise” (Gunkel 2007). I have just shortened their terminology to the other-oriented/relational challenge.

  11. See Gunkel (2012) for a similar view on the moral agency/moral patiency distinction.

  12. By contrast, the “centrist approach” resides more in the analytic philosophy tradition (Gunkel 2007, p. 175).

  13. A similar critique is found in Gunkel (2018b, chap. 3).

  14. Prima facie this seems similar to a kind of position found in Plato’s Theaetetus (M. Ananth, personal communication, June 28, 2020).

  15. Still, Lagerspetz (2007) believes that Sterba mistakenly “does not consider that there may be uses for ethical theory other than just their narrowly normative and practical employments” (p. 189). Further, Lagerspetz states that “…Sterba’s argument rests on the idea that the only interesting thing about the moral philosophies of Kant, Mill, or Aristotle is, as it were, light theory—to be used for a practical arbitration of whatever issues are being debated among academics or in the media” (p. 189).

  16. In the novel, Boulle (1963) describes the ape-takeover (at least partially) in terms of human abdication. For example, one woman’s experience with her once-loyal, long-time gorilla servant goes like this: “I was too frightened. I could not go on living like this. I preferred to hand the place over to my gorilla. I left my own house.”

  17. The novel suggests that the ape population at the time of takeover could be approximately equal to the human population. It also suggests that the ape-takeover would be quick (e.g., on Soror, it seemed to take no more than a few years, if that long) and not too bloody (e.g., see previous footnote). However, in this hypothetical scenario, a utilitarian calculation might still be difficult. For example, if Ulysse executes his plan, humans would be in control and would still have talking, domesticated ape-servants. These would simply be less ambitious and more subservient. If Ulysse does not execute his plan, apes would be in control, but may not (initially) have talking, domesticated, subservient human-servants (e.g., due to the flight of humans out of populated areas). Similarly, if Ulysse does not execute his plan, many humans would suddenly be faced with the hardship of primitive conditions. However, some could also regain their previous vigor and initiative as they acclimate to these conditions. On the other hand, if Ulysse executes his plan, it is possible that now-subservient, talking, domesticated ape-servants would be trapped in a limbo-state between their previous freedom as wild animals (i.e., prior to their domestication) and a now-denied opportunity to develop full autonomy. No doubt there are other issues that would make utilitarian calculations difficult in this hypothetical scenario. For the purposes of my thought experiment, though, it seems no more unreasonable to suppose that a utilitarian calculation would disfavor Ulysse’s plan than to suppose that it would favor it (or be indifferent between the two).

  18. Bostrom (2014/2016) cautions that small sample sizes and other methodological issues do not permit drawing “strong conclusions” from these results (p. 25). Human-level intelligence (in a machine) is roughly equivalent to what some refer to as “strong AI” or “artificial general intelligence” (AGI). See Bostrom (2014/2016, p. 22) for a brief discussion. Note that in this context, it does not seem to be implied that a strong AI (or AGI) must be conscious. There also seems to be no distinction made between a strong AI that thinks and is intelligent versus one that merely simulates thinking and intelligence. A machine with human-level intelligence would exhibit at least as much intelligence as a typical human being across a broad range of domains. Concerning “superintelligence,” Bostrom considers several forms this could take, including super artificial intelligence (AI), cognitively enhanced humans, sophisticated brain–computer interfaces, etc. For this essay, I emphasize his discussion of super AI. The “intelligence” of a super AI would greatly surpass human intelligence in most domains.

  19. The first day of the inaugural AAAI/ACM conference on Artificial Intelligence, Ethics, and Society, held in New Orleans, LA, USA, Feb. 1–3, 2018, focused largely on the “value alignment” problem. The systems under consideration were domain-specific decision-making systems trained on data from past human transactions. Machine learning techniques used to create such systems have shown a tendency to incorporate human bias (racial, gender, etc.) gleaned from the training data into the final decision-making systems intended to be deployed for use by society. “Value alignment” research aims at remedying this problem.

  20. See Torrance (2008) for a related observation involving artificial agents and sentience.

  21. Anthropomorphization is the human tendency to attribute human characteristics to non-humans. Although this tendency is anthropocentric, to the extent that it causes humans to extend moral consideration to non-humans, this tendency is nonanthropocentric in its effect.

  22. Anthropocentrism constituting the “weak anthropocentric intrinsic value” of Hargrove (1992) may be an exception.

  23. This seems in the same spirit as Bryson (2010), who argues “that it would… be wrong to build robots we owe personhood to.”

References

  • Ananth M (2018) Bringing biology to life: an introduction to the philosophy of biology. Broadview Press, Tonawanda

  • Anderson M, Anderson SL (2007) Machine ethics: creating an ethical intelligent agent. AI Mag 28(4):15–26

  • Basl J, Sandler R (2013) Three puzzles regarding the moral status of synthetic organisms. In: Kaebnick GE, Murray TH (eds) Synthetic biology and morality: artificial life and the bounds of nature. MIT Press, Cambridge, pp 89–106

  • Bostrom N (2014/2016) Superintelligence: paths, dangers, strategies. Oxford University Press, New York

  • Boulle P (1963) Planet of the apes. Random House Publishing Group, New York (Translated from French to English by Xan Fielding)

  • Bryson JJ (2010) Robots should be slaves. In: Wilks Y (ed) Close engagements with artificial companions: key social, psychological, ethical and design issues (Natural Language Processing), vol 8. John Benjamins Publishing Company, Amsterdam, pp 63–74

  • Callicott JB (1984) Non-anthropocentric value theory and environmental ethics. Am Philos Q 21(4):299–309

  • Coeckelbergh M (2009) Virtual moral agency, virtual moral responsibility: On the moral significance of the appearance, perception, and performance of artificial agents. AI Soc 24:181–189

  • Coeckelbergh M (2010) Moral appearances: emotions, robots, and human morality. Ethics Inf Technol 12:235–241

  • Coeckelbergh M (2012) Growing moral relations: critique of moral status ascription. Palgrave Macmillan, New York

  • Coeckelbergh M, Gunkel DJ (2014) Facing animals: a relational, other-oriented approach to moral standing. J Agric Environ Ethics 27:715–733

  • Coeckelbergh M, Gunkel DJ (2016) Response to “The problem of the question about animal ethics” by Michal Piekarski. J Agric Environ Ethics 29:717–721

  • Cole S (Producer) (2014) Operation maneater. Episode 1: Crocodile. Windfall Films

  • Darling K (2017) “Who’s Johnny?” anthropomorphic framing in human-robot interaction, integration, and policy. In: Lin P, Jenkins R, Abney K (eds) Robot ethics 2.0: from autonomous cars to artificial intelligence (Ch. 12). Oxford University Press, New York

  • DeGrazia D (2002) Animal rights: a very short introduction. Oxford University Press, New York

  • DesJardins JR (2015) Biocentrism. In: “The Editors of Encyclopaedia Britannica” (eds) Encyclopaedia Britannica. Encyclopaedia Britannica, Inc., Chicago. https://www.britannica.com/topic/biocentrism. Retrieved 16 Jan 2020

  • Faria C, Paez E (2014) Anthropocentrism and speciesism: conceptual and normative issues. Revista de Bioetica y Derecho 32:82–90

  • Floridi L (2008) Information ethics: its nature and scope. In: Van Den Hoven J, Weckert J (eds) Information technology and moral philosophy (Ch. 3). Cambridge University Press, New York

  • Floridi L, Sanders JW (2004) On the morality of artificial agents. Minds Mach 14:349–379

  • Floridi L, Taddeo M (2018) Don’t grant robots legal personhood. Nature 557:309

  • Gerdes A (2015) The issue of moral consideration in robot ethics. ACM SIGCAS Comput Soc 45(3):274–279

  • Gunkel D (2007) Thinking otherwise: ethics, technology and other subjects. Ethics Inf Technol 9:165–177

  • Gunkel D (2012) The machine question: critical perspectives on AI, robots, and ethics. MIT Press, Cambridge

  • Gunkel D (2013) Review of Mark Coeckelbergh’s Growing Moral Relations (Palgrave, 2012). Ethics Inf Technol 15(3):239–241

  • Gunkel D (2014) A vindication of the rights of machines. Philos Technol 27:113–132

  • Gunkel D (2018a) The other question: can and should robots have rights? Ethics Inf Technol 20(2):87–99

  • Gunkel D (2018b) Robot rights. MIT Press, Cambridge

  • Hargrove EC (1992) Weak anthropocentric intrinsic value theory. Monist 75:183–207

  • Kaufman F (1994) Machines, sentience, and the scope of morality. Environ Ethics 16(1):57–70

  • Lagerspetz O (2007) Review of the book The triumph of practice over theory in ethics, by J. Sterba. Philos Investig 30(2):188–191

  • Leopold A (1949/1977/2010). A Sand County almanac: The land ethic. In: Marino G (ed) Ethics: the essential writings (Reprinted from A Sand County almanac, pp. 201–206, 1949/1977, Oxford University Press, New York). Modern Library, New York, pp 487–505

  • Lewontin RC (1998) The evolution of cognition: questions we will never answer. In: Scarborough D, Sternberg S (eds) Methods, models, and conceptual issues: an invitation to cognitive science, vol 4. The MIT Press, Cambridge, pp 106–132

  • Prodhan G (2016) Europe’s robots to become ‘electronic persons’ under draft plan. Reuters.com. (Science News: June 21, 2016). https://www.reuters.com/article/us-europe-robotics-lawmaking/europes-robots-to-become-electronic-persons-under-draft-plan-idUSKCN0Z72AY Retrieved 8 Mar 2018

  • Rae G (2016) Anthropocentrism. In: ten Have H (ed) Encyclopedia of global bioethics. Springer, Cham

  • Regan T (1985) The case for animal rights. In: Singer P (ed) In defense of animals. Basil Blackwell, New York, pp 13–26

  • Rolston H III (1975) Is there an ecological ethic? Ethics 85(2):93–109

  • Scheessele MR (2018) A framework for grounding the moral status of intelligent machines. In: Proceedings of 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES’18), February 2–3, 2018, New Orleans. ACM, New York

  • Singer P (1974) All animals are equal. Philos Exch 5(1):103–116

  • Singer P (1985) Ethics and the new animal liberation movement. In: Singer P (ed) In defense of animals. Basil Blackwell, New York, pp 1–10, 209–211

  • Sterba JP (2005) The triumph of practice over theory in ethics. Oxford University Press, New York

  • Tavani HT (2018) Can social robots qualify for moral consideration? Reframing the question about robot rights. Information. https://doi.org/10.3390/info9040073

  • Taylor PW (1981/2010) The ethics of respect for nature. In: Vaughn L (ed) Doing ethics: moral reasoning and contemporary issues, 2nd edn. (Reprinted from Environmental Ethics, 3(3), pp. 197–218 (edited), 1981) W.W. Norton & Company, New York, pp 512–526

  • Thompson A (2017) Anthropocentrism: humanity as peril and promise. In: Gardiner SM, Thompson A (eds) The Oxford handbook of environmental ethics. Oxford University Press, New York, pp 77–90

  • Torrance S (2008) Ethics and consciousness in artificial agents. AI Soc 22:495–521

  • Wallach W, Allen C (2009) Moral machines: teaching robots right from wrong. Oxford University Press, New York

Acknowledgements

I wish to thank Indiana University South Bend for the Faculty Research Grant that supported this research. I especially want to thank Mahesh Ananth for many lively discussions and valuable suggestions on several drafts of this paper.

Funding

This research was funded by an Indiana University South Bend Faculty Research Grant.

Author information

Contributions

Sole-authored paper.

Corresponding author

Correspondence to Michael R. Scheessele.

Ethics declarations

Conflict of interest

The author declares that he has no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Scheessele, M.R. The hard limit on human nonanthropocentrism. AI & Soc 37, 49–65 (2022). https://doi.org/10.1007/s00146-021-01182-4
