
Machine Ethics, Allostery and Philosophical Anti-Dualism: Will AI Ever Make Ethically Autonomous Decisions?

  • Social Science and Public Policy
  • Published in: Society

Abstract

Research into the ethics of artificial intelligence divides into two main strands. The first is concerned with creating and applying ethical rules and standards: it formulates recommendations intended to respect fundamental rights, applicable regulations, and core principles and values, securing the ethical purpose of AI while ensuring its technical robustness and reliability. The second strand addresses whether and how robots and AI platforms can behave ethically in an autonomous way. Whether ethics can be “algorithmized” depends on how AI developers understand ethics and on how adequately they grasp the ethical issues and methodological challenges of the field. Developers of machines and platforms built on advanced AI algorithms confront four basic problem areas: a lack of ethical knowledge, the pluralism of ethical methods, cases of ethical dilemmas, and machine bias. Awareness of these and similar problems can help programmers and researchers avoid pitfalls and build better moral machines. Unfortunately, the disciplines that should inform research on AI ethics, such as the philosophy of mind and general ethics, are now hopelessly saturated with autotelic philosophical distinctions and thought experiments. Asked whether machines could become fully ethically autonomous in the near future, most philosophers and ethicists answer that they could not, because AI has no free will and cannot realize phenomenal consciousness. The main proposition of this text is therefore that questions about the ethics of autonomous intelligent systems and of AI platforms that evolve over time by learning from data (machine ethics) cannot be answered with the concepts and thought experiments of the philosophy of mind and general ethics. These instruments are closed to empirical falsification, rely on science-fiction devices and faulty analogies, shift the burden of proof to the opposing party without justification, and usually end in epistemological fiasco; they add no value. We should therefore stop analysing and relitigating these sterile philosophical distinctions and leave them to their own devices.



Author information

Correspondence to Tomas Hauer.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Hauer, T. Machine Ethics, Allostery and Philosophical Anti-Dualism: Will AI Ever Make Ethically Autonomous Decisions? Soc 57, 425–433 (2020). https://doi.org/10.1007/s12115-020-00506-2
