Artificial Morality. Concepts, Issues and Challenges

Abstract

Artificial morality is an emerging field in artificial intelligence that explores whether and how artificial systems can be furnished with moral capacities. Since this will have a deep impact on our lives, it is important to discuss the possibility of artificial morality and its implications for individuals and society. Starting with some examples of artificial morality, the article turns to conceptual issues that are important for delineating the possibility and scope of artificial morality: in particular, what an artificial moral agent is, how morality should be understood in the context of artificial morality, and how human and artificial morality compare. The article then outlines how moral capacities can be implemented in artificial systems, both in general and in more detail with respect to an elder care system. On the basis of these findings, some of the arguments found in public discourse about artificial morality are reviewed, and the prospects and challenges of artificial morality are discussed.


Notes

  1. One can find these and some more morally intricate scenarios for self-driving vehicles at http://moralmachine.mit.edu/

  2. See: http://futureoflife.org/misc/open_letter.

Further Reading

  1. Anderson, M., & Anderson, S. L. 2011. A Prima Facie Duty Approach to Machine Ethics: Machine Learning of Features of Ethical Dilemmas, Prima Facie Duties, and Decision Principles Through a Dialogue with Ethicists. In M. Anderson, & S. L. Anderson (Eds.), Machine Ethics (pp. 476–494). Cambridge: Cambridge University Press.

  2. Arkin, R. C., & Ulam, P. 2009. An Ethical Adaptor: Behavioral Modification Derived from Moral Emotions. IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA-09). Daejeon, KR.

  3. Bendel, O. 2017. LADYBIRD: The Animal-Friendly Robot Vacuum Cleaner. The 2017 AAAI Spring Symposium Series. Palo Alto: AAAI Press.

  4. Block, N. 1995. On a Confusion about a Function of Consciousness. Behavioral and Brain Sciences, 18, 227–247.

  5. Breazeal, C., & Scassellati, B. 2002. Robots That Imitate Humans. Trends in Cognitive Sciences, 6, 481–487.

  6. Dancy, J. 2013. Moral Particularism. The Stanford Encyclopedia of Philosophy (Fall 2013 Edition), E. N. Zalta (ed.). URL = http://plato.stanford.edu/archives/fall2013/entries/moral-particularism/.

  7. Dennett, D. C. 1987. The Intentional Stance. Cambridge: MIT Press.

  8. Floridi, L., & Sanders, J. W. 2004. On the Morality of Artificial Agents. Minds and Machines, 14, 349–379.

  9. Fong, T., Nourbakhsh, I., & Dautenhahn, K. 2002. A Survey of Socially Interactive Robots: Concepts, Design, and Applications. Technical Report CMU-RI-TR-02-29.

  10. Foot, P. 1978. The Problem of Abortion and the Doctrine of Double Effect. In P. Foot (Ed.), Virtues and Vices (pp. 19–32). Oxford: Basil Blackwell.

  11. Frankena, W. K. 1966. The Concept of Morality. The Journal of Philosophy, 63, 688–696.

  12. Frankfurt, H. G. 1971. Freedom of the Will and the Concept of a Person. The Journal of Philosophy, 68, 5–20.

  13. Froese, T., & Di Paolo, E. 2010. Modelling Social Interaction As Perceptual Crossing: An Investigation into the Dynamics of the Interaction Process. Connection Science, 22, 43–68.

  14. Horgan, T., & Timmons, M. 2009. What Does the Frame Problem Tell us About Moral Normativity? Ethical Theory and Moral Practice, 12, 25–51.

  15. Levy, N. 2014. Consciousness and Moral Responsibility. Oxford: Oxford University Press.

  16. List, C., & Pettit, P. 2011. Group Agency: The Possibility, Design, and Status of Corporate Agents. Oxford: Oxford University Press.

  17. McCarthy, J., & Hayes, P. J. 1969. Some Philosophical Problems from the Standpoint of Artificial Intelligence. In B. Meltzer, & D. Michie (Eds.), Machine Intelligence (pp. 463–502). Edinburgh: Edinburgh University Press.

  18. Misselhorn, C., et al. 2013. Ethical Considerations Regarding the Use of Social Robots in the Fourth Age. GeroPsych: The Journal of Gerontopsychology and Geriatric Psychiatry, 26 (Special Issue: Emotional and Social Robots for Aging Well?), 121–133.

  19. Rawls, J. 1993. Political Liberalism. New York: Columbia University Press.

  20. Searle, J. R. 1980. Minds, Brains, and Programs. Behavioral and Brain Sciences, 3, 417–458.

  21. Wallach, W., & Allen, C. 2009. Moral Machines: Teaching Robots Right from Wrong. New York; Oxford: Oxford University Press.


Author information


Corresponding author

Correspondence to Catrin Misselhorn.


About this article


Cite this article

Misselhorn, C. Artificial Morality. Concepts, Issues and Challenges. Soc 55, 161–169 (2018). https://doi.org/10.1007/s12115-018-0229-y


Keywords

  • Artificial morality
  • Artificial moral agents
  • Moral implementation
  • Self-driving cars
  • Care robots