Abstract
Artificial moral agents raise complex ethical questions, both in terms of the decisions they may make and in terms of the inputs that shape their cognitive architecture. There are multiple differences between human and artificial cognition that create potential barriers to artificial moral agency, at least as it is understood anthropocentrically, and it is unclear whether artificial moral agents should emulate human cognition and decision-making. It is conceptually possible for artificial moral agency to emerge that reflects alternative ethical methodologies without creating ontological challenges or existential crises for human moral agents.
Notes
Needless to say, what constitutes “normal” versus “abnormal” reasoning is another contentious topic. For the sake of brevity, I understand the terms here as endpoints of a continuum of behaviors, with “abnormal” including behaviors that by their nature prevent the agent from meeting their intended goals. Just as the authors note that researchers like Turing point to manifestations of intelligent action rather than attempting to define it, I will refer to manifestations of “abnormal” cognitive processing. As an example, I worked with a patient whose spatial perception prevented him from navigating hallways (he would walk into corners and injure himself) or feeding himself (he would hold his sandwich a foot away from his mouth and attempt to take bites). He was clearly attempting to interact with his environment, but the results of his reasoning process prevented him from being able to attain those goals.
For instance, in practicing ethics in a natural rather than a controlled environment, it would seem reasonable to require an ethical agent to be able to identify the central ethical dilemma (rather than ancillary issues); identify which agents are both affected and relevant; determine what personal or professional obligations might exist; select which ethical methodology (or methodologies) is appropriate for approaching the problem; recognize any implicit prohibitions on particular actions (such as murder); and assess the context of the action under consideration, the agent’s intentions, and the likely consequences. Each of these tasks presents unique coding challenges or could be led astray in self-learning systems (just as in human agents).
I draw a distinction between willingness to receive care from a robot caregiver and forming a bond with that caregiver.
Additional information
Commentary on “Towards establishing criteria for the ethical analysis of artificial intelligence” by Michele Farisco et al.