Why robots should not be treated like animals

  • Original Paper
  • Published in: Ethics and Information Technology

Abstract

Responsible Robotics is about developing robots in ways that take their social implications into account, which includes framing robots and their role in the world accurately. We are now incorporating robots into our world and trying to figure out what to make of them and where to place them in our conceptual, physical, economic, legal, emotional and moral world. How humans think about robots, especially humanoid social robots, which can elicit complex and sometimes disconcerting reactions, is not predetermined. The animal–robot analogy is one of the most commonly used to frame interactions between humans and robots, and it tends to blur the distinction between humans and machines. We argue that, despite some shared characteristics, analogies with animals are misleading when it comes to thinking about the moral status of humanoid robots, legal liability, and the impact of the treatment of humanoid robots on how humans treat one another.


Notes

  1. Many roboticists talk about robots “feeling” or “sensing” the environment because these machines are endowed with sensors, but such talk is metaphorical.

  2. Some scholars, like Solaiman, turn to animals as a model to deny that robots should be granted personhood (Solaiman 2017). Solaiman draws on a case in which a judge denied personhood to chimpanzees to argue against conferring legal personhood on robots. This all-or-nothing approach to personhood (either animals and robots have all the rights and duties connected with personhood or they have none) may be too coarse-grained for our analysis, since it raises the question of why there are laws against animal cruelty even though animals are not considered persons.

  3. Strict liability means liability does not depend on intent to do harm or on negligence: one is liable even if one had no ill intent and took precautions to prevent the harm. Negligence, by contrast, is the failure to take proper care in doing something.

  4. In a later paper, Kelley et al. (2010) further modify the Robots as Animals framework by specifying that the important distinction is between robots that are dangerous and robots that are safe. Using an analogy with dangerous dogs, they suggest that bans or restrictions might be appropriate for dangerous robots.

  5. Currently a number of codes or standards for robotics, such as the EPSRC Principles of Robotics, push in this direction but remain unspecific. For example, the fourth rule in EPSRC’s Principles for Designers, Builders, and Users of Robots is that: “Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.” (EPSRC 2010).

References

  • Anderson, C. A. (1997). Effects of violent movies and trait hostility on hostile feelings and aggressive thoughts. Aggressive Behavior, 23, 161–178.

  • Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge: Cambridge University Press.

  • Asaro, P. M. (2012). A body to kick, but still no soul to damn: Legal perspectives on robotics. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics. Cambridge: MIT Press.

  • Asaro, P. M. (2016). The liability problem for autonomous artificial agents. Ethical and Moral Considerations in Non-Human Agents, 2016 AAAI Spring Symposium Series.

  • Ashrafian, H. (2015). Artificial intelligence and robot responsibilities: Innovating beyond rights. Science and Engineering Ethics, 21(2), 317–326.

  • Asimov, I. (1993). Forward the foundation. London: Doubleday.

  • Borenstein, J., & Pearson, Y. (2010). Robot caregivers: Harbingers of expanded freedom for all? Ethics and Information Technology, 12(3), 277–288.

  • Bryson, J. J., Diamantis, M. E., & Grant, T. D. (2017). Of, for, and by the people: The legal lacuna of synthetic persons. Artificial Intelligence and Law, 25, 273–291.

  • Bushman, B. J., & Anderson, C. A. (2009). Comfortably numb: Desensitizing effects of violent media on helping others. Psychological Science, 20(3), 273–277.

  • Calverley, D. J. (2006). Android science and animal rights: Does an analogy exist? Connection Science, 18(4), 403–417.

  • Calverley, D. J. (2005). Android science and the animal rights movement: Are there analogies? In Cognitive Sciences Society Workshop, Stresa, Italy, pp. 127–136.

  • Chilvers, J. (2013). Reflexive engagement? Actors, learning, and reflexivity in public dialogue on science and technology. Science Communication, 35(3), 283–310.

  • Chin, M., Sims, V., Clark, B., & Lopez, G. (2004). Measuring individual differences in anthropomorphism toward machines and animals. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol 48, pp. 1252–1255.

  • Coeckelbergh, M. (2010). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12(3), 209–221.

  • Darling, K. (2016). Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In R. Calo, A. M. Froomkin & I. Kerr (Eds.), Robot Law. Cheltenham: Edward Elgar.

  • Delvaux, M. (2016). Draft Report with recommendations to the Commission on Civil Law Rules on Robotics. European Parliament Committee on Legal Affairs Report 2015/2103 (INL).

  • Dick, P. K. (1968). Do androids dream of electric sheep? London: Doubleday.

  • Elbogen, E. B., Johnson, S. C., Wagner, H. R., Sullivan, C., Taft, C. T., & Beckham, J. C. (2014). Violent behaviour and post-traumatic stress disorder in US Iraq and Afghanistan veterans. The British Journal of Psychiatry, 204(5), 368–375.

  • Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886.

  • EPSRC. (2010). Principles of robotics. Engineering and Physical Sciences Research Council. Retrieved April 2018 from https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/.

  • Eyssel, F., Kuchenbrandt, D., Bobinger, S., De Ruiter, L., & Hegel, F. (2012). “If you sound like me, you must be more human”: On the interplay of robot and user features on human–robot acceptance and anthropomorphism. In Proceedings of the 7th annual ACM/IEEE International Conference on Human–Robot Interaction (HRI’12), pp. 125–126.

  • Ford, M. (2015). The rise of the robots: Technology and the threat of mass unemployment. London: Oneworld Publications.

  • Fussell, S. R., Kiesler, S., Setlock, L. D., & Yew, V. (2008). How people anthropomorphize robots. In Proceedings of the 3rd ACM/IEEE International Conference on Human–Robot Interaction (HRI 2008), pp. 145–152.

  • Future of Life Institute. (2015). Autonomous weapons: An open letter from Ai & robotics researchers. Retrieved August 2017 from https://www.futureoflife.org/ai-open-letter/.

  • Garland, A. (2015). Ex Machina [Motion Picture]. Universal City: Universal Pictures.

  • Gentner, D., & Forbus, K. D. (2011). Computational models of analogy. WIREs Cognitive Science, 2, 266–276.

  • Gibson, W. (1996). Idoru. New York: Viking Press.

  • Glas, D. F., Minato, T., Ishi, C. T., Kawahara, T., & Ishiguro, H. (2016). “ERICA: The ERATO Intelligent Conversational Android.” Proceedings of the 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 22–29.

  • Grodzinsky, F. S., Miller, K. W., & Wolf, M. J. (2015). Developing automated deceptions and the impact on trust. Philosophy & Technology, 28(1), 91–105.

  • Gunkel, D. J. (2012). The machine question. Cambridge: MIT Press.

  • Gunkel, D. J. (2014). A vindication of the rights of machines. Philosophy & Technology, 27(1), 113–132.

  • Gunkel, D. J. (2017). The other question: Can and should robots have rights? Ethics and Information Technology. https://doi.org/10.1007/s10676-017-9442-4.

  • Hanson Robotics. (2017). Sophia. Retrieved April 2018 from http://www.hansonrobotics.com/robot/sophia.

  • Hauskeller, M. (2016). Mythologies of transhumanism. Basingstoke: Palgrave McMillan.

  • Hogan, K. (2017). Is the machine question the same question as the animal question? Ethics and Information Technology, 19, 29–38.

  • Holyoak, K. J., & Koh, K. (1987). Surface and structural similarity in analogical transfer. Memory & Cognition, 15, 332–340.

  • Johnson, D. G., & Verdicchio, M. (2017). AI anxiety. Journal of the Association for Information Science and Technology, 68(9), 2267–2270.

  • Jonze, S. (2013). Her [Motion Picture]. Burbank: Warner Bros.

  • Kant, I. (1997). Lectures on ethics (P. Heath & J. B. Schneewind, Eds.; P. Heath, Trans.). Cambridge: Cambridge University Press.

  • Kelley, R., Schaerer, E., Gomez, M., & Nicolescu, M. (2010). Liability in robotics: An international perspective on robots as animals. Advanced Robotics, 24(13), 1861–1871.

  • Kuehn, J., & Haddadin, S. (2017). An artificial robot nervous system to teach robots how to feel pain and reflexively react to potentially damaging contacts. IEEE Robotics and Automation Letters, 2(1), 72–79.

  • Kurzweil, R. (2005). The Singularity is near: When humans transcend biology. London: Penguin Books.

  • Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Cambridge: Harvard University Press.

  • Levine, S., Pastor, P., Krizhevsky, A., & Quillen, D. (2016). Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. Google Preliminary Report. Retrieved from https://arxiv.org/pdf/1603.02199v4.pdf.

  • Levy, D. (2008). Love and Sex with Robots. New York: Harper Perennial.

  • Levy, D. (2009). The ethical treatment of artificially conscious robots. International Journal of Social Robotics, 1(3), 209–216.

  • Liang, A., Piroth, I., Robinson, H., MacDonald, B., Fisher, M., Nater, U. M., Skoluda, N., & Broadbent, E. (2017). A pilot randomized trial of a companion robot for people with dementia living in the community. Journal of the American Medical Directors Association. Retrieved August 2017 from https://doi.org/10.1016/j.jamda.2017.05.019.

  • Lin, P., Abney, K., & Bekey, G. A. (2011). Robot ethics: The ethical and social implications of robotics. Cambridge: MIT Press.

  • MacLennan, B. (2013). Cruelty to robots? The hard problem of robot suffering. Proceedings of the 2013 Meeting of the International Association for Computing and Philosophy (IACAP).

  • MacManus, D., Rona, R., Dickson, H., Somaini, G., Fear, N., & Wessely, S. (2015). Aggressive and violent behavior among military personnel deployed to Iraq and Afghanistan: Prevalence and link with deployment and combat exposure. Epidemiologic Reviews, 37(1), 196–212.

  • Markey, P. M., French, J. E., & Markey, C. N. (2014). Violent movies and severe acts of violence: Sensationalism versus science. Human Communication Research, 41(2), 155–173.

  • McNally, P., & Inayatullah, S. (1988). The rights of robots: Technology, culture and law in the 21st century. Futures, 20(2), 119–136.

  • Metzinger, T. (2013). Two principles for robot ethics. In E. Hilgendorf & J.-P. Günther (Eds.), Robotik und Gesetzgebung (pp. 247–286). Baden-Baden: Nomos.

  • Miller, K. W. (2010). It’s not nice to fool humans. IT professional, 12(1), 51–52.

  • Minsky, M. (2013). Dr. Marvin Minsky—Facing the future. Retrieved June 2017 from http://www.youtube.com/watch?v=w9sujY8Xjro.

  • Moore, A. (1989). V for Vendetta. Burbank: DC Comics.

  • Mori, M. (1970). The uncanny valley. Energy, 7(4), 33–35.

  • Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19(2), 98–100.

  • Novaco, R. W., & Chemtob, C. M. (2015). Violence associated with combat-related posttraumatic stress disorder: The importance of anger. Psychological Trauma: Theory, Research, Practice, and Policy, 7(5), 485.

  • Owen, R., Stilgoe, J., Macnaghten, P., Gorman, M., Fisher, E., & Guston, D. (2013). A framework for responsible innovation. In R. Owen, J. Bessant & M. Heintz (Eds.), Responsible innovation: Managing the responsible emergence of science and innovation in society. Chichester: Wiley.

  • Parisi, D. (2014). Future robots: Towards a robotic science of human beings. Amsterdam: John Benjamins Publishing.

  • Perkowitz, S. (2004). Digital people: From bionic humans to androids. Washington: Joseph Henry Press.

  • Ramey, C. H. (2005). “For the sake of others”: The “personal” ethics of human–android interaction. In Toward Social Mechanisms of Android Science: A CogSci 2005 Workshop, July 25–26, Stresa, Italy, pp. 137–148.

  • Robertson, J. (2014). Human rights vs. robot rights: Forecasts from Japan. Critical Asian Studies, 46(4), 571–598.

  • Ross, B. H. (1989). Distinguishing types of superficial similarities: Different effects on the access and use of earlier problems. Journal of Experimental Psychology: Learning, Memory and Cognition, 5, 456–468.

  • Schaerer, E., Kelley, R., & Nicolescu, M. (2009). Robots as animals: A framework for liability and responsibility in human-robot interactions. In RO-MAN 2009-The 18th IEEE International Symposium on Robot and Human Interactive Communication, pp. 72–77, IEEE.

  • Schmidt, C. T. A. (2008). Redesigning Man? In P. E. Vermaas, P. Kroes, A. Light & S. A. Moore (Eds.), Philosophy and design: From engineering to architecture (pp. 209–216). New York: Springer.

  • Sharkey, A., & Sharkey, N. (2012). Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology, 14(1), 27–40.

  • Sharkey, N., & Sharkey, A. (2010). The crying shame of robot nannies: An ethical appraisal. Interaction Studies, 11(2), 161–190.

  • Sharkey, N., van Wynsberghe, A., Robbins, S., & Hancock, E. (2017). Our sexual future with robots. The Hague: Foundation for Responsible Robotics. Retrieved September 2017 from http://responsiblerobotics.org/wp-content/uploads/2017/07/FRR-Consultation-Report-Our-Sexual-Future-with-robots_Final.pdf.

  • Solaiman, S. M. (2017). Legal personality of robots, corporations, idols and chimpanzees: A quest for legitimacy. Artificial Intelligence and Law, 25, 155–179.

  • Spellman, B. A., & Holyoak, K. J. (1996). Pragmatics in analogical mapping. Cognitive Psychology, 31, 307–346.

  • Spennemann, D. H. R. (2007). Of great apes and robots: Considering the future(s) of cultural heritage. Futures, 39(7), 861–877.

  • Sullins, J. P. (2006). When is a robot a moral agent? International Review of Information Ethics, 6(12), 23–30.

  • Sullins, J. P. (2011). When is a robot a moral agent? In M. Anderson & S. L. Anderson (Eds.), Machine ethics. Cambridge: Cambridge University Press.

  • van Rysewyk, S. (2014). Robot pain. International Journal of Synthetic Emotions, 4(2), 22–33.

Author information

Corresponding author

Correspondence to Mario Verdicchio.

About this article

Cite this article

Johnson, D.G., Verdicchio, M. Why robots should not be treated like animals. Ethics Inf Technol 20, 291–301 (2018). https://doi.org/10.1007/s10676-018-9481-5
