The Human Side of Artificial Intelligence

  • Commentary
  • Published in Science and Engineering Ethics (2020)

Abstract

Artificial moral agents raise complex ethical questions, both in terms of the decisions they may make and the inputs that create their cognitive architecture. There are multiple differences between human and artificial cognition that create potential barriers to artificial moral agency, at least as understood anthropocentrically, and it is unclear that artificial moral agents should emulate human cognition and decision-making. It is conceptually possible for artificial moral agency to emerge that reflects alternative ethical methodologies without creating ontological challenges or existential crises for human moral agents.


Notes

  1. Needless to say, what constitutes “normal” versus “abnormal” reasoning is another contentious topic. For the sake of brevity, I understand the terms here as endpoints of a continuum of behaviors, with “abnormal” including behaviors that by their nature prevent the agent from meeting its intended goals. Just as the authors note that researchers like Turing point to manifestations of intelligent action rather than attempt to define it, I will refer to manifestations of “abnormal” cognitive processing. As an example, I worked with a patient whose spatial perception prevented him from being able to navigate hallways (he would walk into corners and injure himself) or feed himself (he would hold his sandwich a foot away from his mouth and attempt to take bites). He was clearly attempting to interact with his environment, but the results of his reasoning process prevented him from being able to attain those goals.

  2. For instance, in practicing ethics in a natural rather than a controlled environment, it would seem reasonable to require an ethical agent to be able to identify the central ethical dilemma (rather than ancillary issues), identify which agents are both affected and relevant, determine what personal or professional obligations might exist, select which ethical methodology (or methodologies) is appropriate to the problem, recognize any implicit prohibitions on particular actions (like murder), and assess the context of the action under consideration, the agent’s intentions, and the likely consequences. Each of these presents unique coding challenges or could be led astray in self-learning systems (just as in human agents); a rough sketch of how these elements might be represented is given after these notes.

  3. I draw a distinction between willingness to receive care from a robot caregiver and forming a bond with such a caregiver.

  4. Interestingly, interactions between certain elderly populations and inanimate dolls suggest that some degree of bonding might be possible, but this intervention is controversial at best (Mitchell 2014; Sharkey & Sharkey 2012).
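
The following is a purely illustrative, hypothetical sketch (not part of the original commentary) of how the elements listed in Note 2 might be represented as a data structure. All names here (EthicalAnalysis, Methodology, and the example values) are assumptions introduced for illustration; the sketch deliberately leaves untouched the hard problem the note points to, namely populating such a structure reliably from an uncontrolled environment.

from dataclasses import dataclass, field
from enum import Enum, auto


class Methodology(Enum):
    """Candidate ethical methodologies an agent might draw on."""
    CONSEQUENTIALIST = auto()
    DEONTOLOGICAL = auto()
    VIRTUE = auto()
    CARE = auto()


@dataclass
class EthicalAnalysis:
    """Placeholder container for the elements listed in Note 2.

    Reliably filling any of these fields (e.g., separating the central
    dilemma from ancillary issues) is itself an open problem.
    """
    central_dilemma: str
    ancillary_issues: list = field(default_factory=list)
    affected_agents: list = field(default_factory=list)
    obligations: list = field(default_factory=list)
    methodologies: list = field(default_factory=list)
    prohibited_actions: list = field(default_factory=list)
    context: str = ""
    intentions: list = field(default_factory=list)
    likely_consequences: list = field(default_factory=list)


# Hypothetical instantiation; in a natural environment the difficulty lies
# in generating this structure from raw perception, not in consuming it.
example = EthicalAnalysis(
    central_dilemma="Should a caregiver robot report a patient's refusal of medication?",
    affected_agents=["patient", "family", "care team"],
    obligations=["respect autonomy", "prevent harm"],
    methodologies=[Methodology.DEONTOLOGICAL, Methodology.CONSEQUENTIALIST],
    prohibited_actions=["deception"],
    context="home care, competent adult patient",
    intentions=["protect patient welfare"],
    likely_consequences=["loss of trust", "missed doses"],
)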


Author information

Corresponding author

Correspondence to Matthew A. Butkus.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Commentary on “Towards establishing criteria for the ethical analysis of artificial intelligence” by Michele Farisco et al.

About this article

Cite this article

Butkus, M.A. The Human Side of Artificial Intelligence. Sci Eng Ethics 26, 2427–2437 (2020). https://doi.org/10.1007/s11948-020-00239-9
