Artificial Morality. Concepts, Issues and Challenges

Abstract

Artificial morality is an emerging field in artificial intelligence that explores whether and how artificial systems can be furnished with moral capacities. Since this will have a deep impact on our lives, it is important to discuss the possibility of artificial morality and its implications for individuals and society. Starting with some examples of artificial morality, the article turns to conceptual issues that are important for delineating the possibility and scope of artificial morality: in particular, what an artificial moral agent is, how morality should be understood in the context of artificial morality, and how human and artificial morality compare. It then outlines how moral capacities can be implemented in artificial systems in general, and in more detail with respect to an elder care system. On the basis of these findings, some of the arguments found in public discourse about artificial morality are reviewed, and the prospects and challenges of artificial morality are discussed.

Notes

  1. One can find these and some more morally intricate scenarios for self-driving vehicles at http://moralmachine.mit.edu/

  2. See: http://futureoflife.org/misc/open_letter.

Author information

Correspondence to Catrin Misselhorn.

Cite this article

Misselhorn, C. Artificial Morality. Concepts, Issues and Challenges. Soc 55, 161–169 (2018). https://doi.org/10.1007/s12115-018-0229-y
