Evil and roboethics in management studies

Abstract

In this article, I address the issue of evil and roboethics in the context of management studies and suggest that management scholars should locate evil in the realm of the human rather than of the artificial. After discussing the possibility of addressing the reality of evil machines in ontological terms, I explore users’ reactions to robots in a social context. I conclude that the issue of evil machines in management is more precisely a case of technology anthropomorphization.

Notes

  1. The following offers definitions of the most important terms used in this article. ‘Evil’ is an action that is not simply morally wrong but leaves no room for understanding or redemption; evil is qualitatively, rather than merely quantitatively, distinct from mere wrongdoing. An ‘evil machine’ is a machine whose actions cause harm to humans and leave no room for account or expiation. ‘Robot’ stands for both physical robots and virtual agents roaming within computer networks; an ‘autonomous machine’ is a decision-making machine; ‘artificial intelligence’ is the ability of autonomous machines to make decisions; ‘intelligent machine’ and ‘autonomous intelligent machine’ are synonymous with ‘autonomous machine.’ ‘Machine’ is an umbrella term covering robots and autonomous and intelligent machines. Machine learning algorithms can be categorized as supervised or unsupervised: supervised algorithms learn from labeled examples and apply what has been learned to new data, whereas unsupervised algorithms draw inferences from unlabeled datasets (see the sketch after this note). An important distinction in this article is drawn between humans as designers and engineers, i.e., those who build the machine, and humans as users or clients, i.e., those who interact socially with the machine. The former are named ‘designers’ and ‘engineers,’ the latter ‘users,’ ‘investors,’ ‘clients,’ or, when the text moves from the specific case study to more general considerations, ‘humans’ and ‘humanoids.’ Attributing human characteristics to artificial objects is a human trait called anthropomorphizing. Biblical quotations are from the New Revised Standard Version of the New Oxford Annotated Bible with Apocrypha (Coogan 2010).
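
The supervised/unsupervised distinction in note 1 can be made concrete with a short example. The following is a minimal, illustrative sketch in Python; it assumes the scikit-learn library, and the data, labels, and model choices (logistic regression as the supervised learner, k-means clustering as the unsupervised one) are illustrative assumptions of this sketch, not methods drawn from the article.

```python
# Minimal sketch of the supervised/unsupervised distinction (note 1).
# Assumes scikit-learn; data and model choices are purely illustrative.
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Labeled examples: a human designer has tagged each point 0 or 1.
X_train = [[0.0, 0.1], [0.2, 0.0], [0.9, 1.0], [1.0, 0.8]]
y_train = [0, 0, 1, 1]

# Supervised: the algorithm applies what it has learned from past
# labeled data to new, unseen data.
clf = LogisticRegression().fit(X_train, y_train)
print(clf.predict([[0.1, 0.1], [0.9, 0.9]]))  # -> [0 1]

# Unsupervised: no labels are given; the algorithm infers structure
# (here, two clusters) from the dataset itself.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_train)
print(labels)  # e.g. [0 0 1 1] (cluster ids are arbitrary)
```

The only difference between the two cases is the presence of the human-supplied labels y_train, which is precisely the distinction the note draws.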

References

  1. Adams G, Balfour DL (2009) Unmasking administrative evil. M.E. Sharpe, New York

  2. Allen RE (2006) Plato: the republic. Yale University Press, New Haven

  3. Arkin R (2009) Governing lethal behavior in autonomous robots. Chapman & Hall/CRC, Boca Raton

  4. Asimov I (1942) Runaround. Astounding Sci Fiction 29(2):94–103

  5. Bataille G (2001) Literature and evil. Marion Boyars Publishers, London

  6. Bernstein RJ (2002) Radical evil: a philosophical investigation. Polity Press, Cambridge

  7. Bostrom N (2002) Existential risks: analyzing human extinction scenarios. J Evol Technol 9:1–30

  8. Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford

  9. Bostrom N, Yudkowsky E (2011) The ethics of artificial intelligence. In: Ramsey W, Frankish K (eds) Cambridge handbook of artificial intelligence. Cambridge University Press, Cambridge, pp 316–334

  10. Calder T (2002) Towards a theory of evil: a critique of Laurence Thomas’s theory of evil acts. In: Haybron DM (ed) Earth’s abominations: philosophical studies of evil. Rodopi, New York, pp 51–61

  11. Coeckelbergh M (2009) Personal robots, appearance, and human good: a methodological reflection on roboethics. Int J Soc Robot 1(3):217–221

  12. Coeckelbergh M (2010) You, Robot: on the linguistic construction of artificial others. AI & Soc 26(1):61–69

  13. Coeckelbergh M (2012) Can we trust robots? Ethics Inf Technol 14(1):53–60

  14. Coogan MD et al (eds) (2010) The new Oxford annotated Bible with Apocrypha: new revised standard version. Oxford University Press, New York

  15. Darley JM (1992) Social organization for the production of evil. Psychol Inq 3:199–218

  16. Darley JM (1996) How organizations socialize individuals into evildoing. In: Messick DM, Tenbrunsel AE (eds) Codes of conduct: behavioral research into business ethics. Russell Sage Foundation, New York, pp 179–204

  17. Dennett DC (1987) The intentional stance. MIT Press, Cambridge, MA

  18. Dennett DC (1998) When HAL kills, who’s to blame? Computer ethics. In: Stork D (ed) HAL’s legacy: 2001’s computer as dream and reality. MIT Press, Cambridge, MA

  19. Epley N, Waytz A, Cacioppo JT (2007) On seeing human: a three-factor theory of anthropomorphism. Psychol Rev 114(4):864–886

  20. Floridi L, Sanders J (2004) On the morality of artificial agents. Minds Mach 14(3):349–379

  21. Garrard E (1998) The nature of evil. Philos Explor Int J Philos Mind Action 1(1):43–60

  22. Garrard E (2002) Evil as an explanatory concept. The Monist 85(2):320–336

  23. Geddes JL (2003) Banal evil and useless knowledge: Hannah Arendt and Charlotte Delbo on evil after the Holocaust. Hypatia 18:104–115

  24. Hastie T, Tibshirani R, Friedman J (2009) The elements of statistical learning: data mining, inference, and prediction, 2nd edn. Springer, New York

  25. Irrgang B (2006) Ethical acts in robotics. Ubiquity 7(34). http://www.acm.org/ubiquity. Accessed 12 Oct 2017

  26. Brennan LL, Johnson VE (eds) (2004) Social, ethical and policy implications of information technology. Information Science Publishing, Hershey

  27. Kamm F (2007) Intricate ethics: rights, responsibilities, and permissible harm. Oxford University Press, Oxford

  28. Kroll JA, Huey J, Barocas S, Felten EW, Reidenberg JR, Robinson DG, Yu H (2016) Accountable algorithms. Univ PA Law Rev 165:633

  29. Lee S, Kiesler S, Lau IY, Chiu C-Y (2005) Human mental models of humanoid robots. In: Proceedings of the 2005 IEEE international conference on robotics and automation (ICRA’05). Barcelona, April 18–22, pp 2767–2772

  30. Lin P, Abney K, Bekey GA (eds) (2014) Robot ethics: the ethical and social implications of robotics. The MIT Press, Cambridge, MA

  31. Loughnan S, Haslam N (2007) Animals and androids: implicit associations between social categories and nonhumans. Psychol Sci 18:116–121

  32. Mittelstadt B, Allo P, Taddeo M, Wachter S, Floridi L (2016) The ethics of algorithms: mapping the debate. Big Data Soc 3(2):1–21

  33. Nadeau JE (2006) Only androids can be ethical. In: Ford K, Glymour C (eds) Thinking about android epistemology. MIT Press, Cambridge, MA, pp 241–248

  34. Neiman S (2002) Evil in modern thought: an alternative history of philosophy. Princeton University Press, Princeton

  35. Pasquale F (2015) The black box society: the secret algorithms that control money and information. Harvard University Press, Cambridge, MA

  36. Powers TM (2009) Machines and moral reasoning. Philos Now 72:15–16

  37. Powers TM (2016) Prospects for a Kantian machine. In: Wallach W, Asaro P (eds) Machine ethics and robot ethics. Ashgate Publishing, Farnham

  38. Powers A, Kiesler S, Fussell S, Torrey C (2007) Comparing a computer agent with a humanoid robot. In: Proceedings of HRI07, pp 145–152

  39. Schnall S, Cannon PR (2012) The clean conscience at work: emotions, intuitions and morality. J Manag Spiritual Relig 9(4):295–315

  40. Sofge E (2014) Robots are evil: the sci-fi myth of killer machines. Pop Sci. http://www.popsci.com/blog-network/zero-moment/robots-are-evil-sci-fi-myth-killer-machines. Accessed 13 June 2017

  41. Staub E (1989, reprinted 1992) The roots of evil: the origins of genocide and other group violence. Cambridge University Press, Cambridge

  42. Steiner H (2002) Calibrating evil. The Monist 85(2):183–193

  43. Styhre A, Sundgren M (2003) Management is evil: management control, technoscience and saudade in pharmaceutical research. Leadersh Organ Dev J 24(8):436–446

  44. Sullins JP (2005) Ethics and artificial life: from modeling to moral agents. Ethics Inf Technol 7:139–148

  45. Sullins JP (2006) When is a robot a moral agent? Int Rev Inf Ethics 6(12):24–30

  46. Taddeo M (2010) Trust in technology: a distinctive and a problematic relation. Knowl Technol Policy 23(3–4):283–286

  47. Tang TL-P (2010) Money, the meaning of money, management, spirituality, and religion. J Manag Spiritual Relig 7(2):173–189

  48. Turing AM (1950) Computing machinery and intelligence. Mind 59:433–460

  49. Wallach W, Allen C (2008) Moral machines: teaching robots right from wrong. Oxford University Press, New York

  50. Waytz A, Cacioppo J, Epley N (2010) Who sees human? The stability and importance of individual differences in anthropomorphism. Perspect Psychol Sci 5:219–232

  51. Zimbardo P (2007) The Lucifer effect: understanding how good people turn evil. Random House, New York

Author information

Correspondence to Enrico Beltramini.

Cite this article

Beltramini, E. Evil and roboethics in management studies. AI & Soc 34, 921–929 (2019). https://doi.org/10.1007/s00146-017-0772-x

Keywords

  • Roboethics
  • Evil
  • Management
  • Anthropomorphism