Emotion, Artificial Intelligence, and Ethics

  • Kevin LaGrandeur
Chapter
Part of the Topics in Intelligent Engineering and Informatics book series (TIEI, volume 9)

Abstract

The growing body of work in the new field of "affective robotics" involves both theoretical and practical ways to instill – or at least imitate – human emotion in Artificial Intelligence (AI), and to induce emotions toward AI in humans. The aim is to ensure that, as AI becomes smarter and more powerful, it remains tractable and attractive to us. Inducing emotions matters to this effort to create safer and more appealing AI for two reasons: it is hoped that instantiating emotions will eventually lead to robots that have moral and ethical codes, making them safer; and that humans and AI will be able to develop mutual emotional attachments, facilitating the use of robots as human companions and helpers. This paper discusses some of the more significant of these recent efforts and addresses important ethical questions that arise from these endeavors.

Keywords

artificial intelligence · affective robotics · ethics · artificial emotions · empathic AI · artificial conscience

References

  1. Yudkowsky, E.: Creating friendly AI 1.0: The analysis and design of benevolent goal architectures (2001), http://intelligence.org/files/CFAI.pdf
  2. Muehlhauser, L., Helm, L.: The singularity and machine ethics. In: Eden, A.H., Moor, J.H., Soraker, J.H., Steinhart, E. (eds.) Singularity Hypotheses. The Frontiers Collection, pp. 101–126. Springer, Heidelberg (2012)
  3. Wallach, W., Allen, C.: Moral Machines: Teaching Robots Right from Wrong. Oxford University Press (2009)
  4. Anderson, M., Anderson, S.L. (eds.): Machine Ethics. Cambridge University Press (2011)
  5. Lin, P., Abney, K., Bekey, G.A. (eds.): Robot Ethics: The Ethical and Social Implications of Robotics. The MIT Press (2012)
  6. Fong, T., Nourbakhsh, I., Dautenhahn, K.: A survey of socially interactive robots: Concepts, design, and applications. Technical report, The Robotics Institute, Carnegie Mellon University (2002)
  7. Guizzo, E.: 6.5 million robots now inhabit the earth. IEEE Spectrum (2008)
  8. International Federation of Robotics Statistical Department: World robotics – industrial robots 2012: Executive summary (2012), http://www.worldrobotics.org/uploads/media/Executive_Summary_WR_2012.pdf
  9. U.S. Army SBIR Solicitation 07.2, Topic A07-032: Multi-agent based small unit effects planning and collaborative engagement with unmanned systems (2007), http://www.sbir.gov/sbirsearch/detail/212294
  10. Arkin, R.C.: Governing lethal behavior: Embedding ethics in a hybrid deliberative/reactive robot architecture. Technical report, Georgia Institute of Technology (2007)
  11. Shachtman, N.: Robot cannon kills 9, wounds 14. Wired Magazine (2007)
  12. Rössler, O.E.: Nonlinear dynamics, artificial cognition and galactic export. In: Sixth International Conference on Computing Anticipatory Systems, CASYS 2003, pp. 47–67 (2003)
  13. Breazeal, C., Scassellati, B.: Infant-like social interactions between a robot and a human caregiver. Adaptive Behaviour 8(1), 49–74 (2000)
  14. Breazeal, C., Scassellati, B.: Robots that imitate humans. Trends in Cognitive Sciences 6(11), 481–487 (2002)
  15. Breazeal, C.: Emotive qualities in lip-synchronized robot speech. Advanced Robotics 17(2), 97–113 (2003)
  16. Breazeal, C.: Designing Sociable Robots. The MIT Press, Cambridge (2002)
  17. MIT Media Lab Personal Robots Group, http://robotic.media.mit.edu/projects/projects.html
  18. Gazzola, V., Rizzolatti, G., Wicker, B., Keysers, C.: The anthropomorphic brain: The mirror neuron system responds to human and robotic actions. NeuroImage 35(4), 1674–1684 (2007)
  19. Oberman, L.M., McCleery, J.P., Ramachandran, V.S., Pineda, J.A.: EEG evidence for mirror neuron activity during the observation of human and robot actions: Toward an analysis of the human qualities of interactive robots. Neurocomputing 70(13-15), 2194–2203 (2007)
  20. Breazeal, C., Buchsbaum, D., Gray, J., Gatenby, D., Blumberg, B.: Learning from and about others: Towards using imitation to bootstrap the social understanding of others by robots. Artificial Life 11(1-2), 31–62 (2005)
  21. Castellano, G., Leite, I., Paiva, A., McOwan, P.W.: Affective teaching: Learning more effectively from empathic robots. Awareness Magazine: Self-Awareness in Autonomic Systems (2012)
  22. Leite, I., Castellano, G., Pereira, A., Martinho, C., Paiva, A.: Modelling empathic behaviour in a robotic game companion for children: An ethnographic study in real-world settings. In: Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, HRI 2012, pp. 367–374. ACM (2012)
  23. Kaliouby, R.E., Robinson, P.: Mind reading machines: Automated inference of cognitive mental states from video. In: Proceedings of the 2004 IEEE International Conference on Systems, Man and Cybernetics, pp. 682–688. IEEE (2004)
  24. Glass, I.: Furbidden knowledge (2011), http://www.radiolab.org/2011/may/31/furbidden-knowledge/
  25. Humans feel empathy for robots: fMRI scans show similar brain function when robots are treated the same as humans. Science Daily (2013), http://www.sciencedaily.com/releases/2013/04/130423091111.htm
  26. Scheutz, M.: The inherent dangers of unidirectional emotional bonds between humans and social robots. In: Lin, P., Bekey, G., Abney, K. (eds.) Anthology on Robo-Ethics, pp. 205–221 (2012)
  27. Kim, K.J., Lipson, H.: Towards a "theory of mind" in simulated robots. In: Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference, pp. 2071–2076. ACM, New York (2009)
  28. Berger, T.W., Hampson, R.E., Song, D., Goonawardena, A., Marmarelis, V.Z., Deadwyler, S.A.: A cortical neural prosthesis for restoring and enhancing memory. Journal of Neural Engineering 8(4), 046017 (2011)
  29. Hampson, R.E., Gerhardt, G.A., Marmarelis, V.Z., Song, D., Opris, I., Santos, L., Berger, T.W., Deadwyler, S.A.: Facilitation and restoration of cognitive function in primate prefrontal cortex by a neuroprosthesis that utilizes minicolumn-specific neural firing. Journal of Neural Engineering 9(5), 056012 (2012)
  30. Hughes, J.: Compassionate AI and selfless robots: A Buddhist approach. In: Lin, P., Abney, K., Bekey, G.A. (eds.) Robot Ethics: The Ethical and Social Implications of Robotics, pp. 69–83 (2012)
  31. del Val, J.: METABODY – Media Embodiment Tékhne and Bridges of Diversity (2012), http://www.metabody.eu/

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Kevin LaGrandeur
  1. New York Institute of Technology, New York, USA
  2. Institute for Ethics and Emerging Technologies, Hartford, USA
