Extended norms: locating accountable decision-making in contexts of human-robot interaction

  • Hauptbeiträge – Thementeil (Main Contributions – Thematic Section)
  • Published in: Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO)

Abstract

Machine ethics has sought to establish how autonomous systems could make ethically appropriate decisions in the world. While purely statistical machine learning approaches have focused on learning human preferences from observations and attempted actions, hybrid approaches to machine ethics attempt to provide more explicit guidance for robots based on explicit norm representations. Neither approach, however, may be sufficient for real contexts of human-robot interaction, where reasoning and the exchange of information may need to be distributed across automated processes and human improvisation, requiring real-time coordination within a dynamic environment (sharing information, trusting other agents, and arriving at revised plans together). This paper builds on discussions of “extended minds” in philosophy to examine norms as “extended” systems supported by external cues and an agent’s own applications of norms in concrete contexts. Instead of locating norms solely as discrete representations within the AI system, we argue that explicit normative guidance must be extended across human-machine collaborative activity: what does and does not constitute a normative context, and what a norm requires within it, may call for negotiating incompletely specified or derived principles that are not self-contained, but become accessible through the agent’s actions and interactions and are thus representable by agents in social space.


References

  • Arnold, T., & Scheutz, M. (2018). The “big red button” is too late: an alternative model for the ethical evaluation of AI systems. Ethics and Information Technology, 20(1), 59–69.

  • Bringsjord, S., Arkoudas, K., & Bello, P. (2006). Toward a general logicist methodology for engineering ethically correct robots. IEEE Intelligent Systems, 21(4), 38–44.

  • Clark, A. (2001). Reasons, robots and the extended mind. Mind & Language, 16(2), 121–145.

  • Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19.

  • Dragan, A. D., Lee, K. C., & Srinivasa, S. S. (2013). Legibility and predictability of robot motion. In 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 301–308). IEEE.

  • Friedman, B., Kahn, P. H., Borning, A., & Huldtgren, A. (2013). Value sensitive design and information systems. In Early engagement and new technologies: opening up the laboratory (pp. 55–95). Springer.

  • Kandefer, M., & Shapiro, S. C. (2008). A categorization of contextual constraints. In AAAI Fall Symposium: Biologically Inspired Cognitive Architectures (pp. 88–93).

  • Legros, S., & Cislaghi, B. (2020). Mapping the social-norms literature: an overview of reviews. Perspectives on Psychological Science, 15(1), 62–80.

  • Malle, B. F., & Scheutz, M. (2014). Moral competence in social robots. In Proceedings of the IEEE 2014 International Symposium on Ethics in Engineering, Science, and Technology (p. 8). IEEE Press.

  • Malle, B. F., & Scheutz, M. (2020). Moral competence in social robots. In Machine ethics and robot ethics (pp. 225–230). Routledge.

  • Meyer, S., Mandl, S., Gesmann-Nuissl, D., & Strobel, A. (2022). Responsibility in hybrid societies: concepts and terms. AI and Ethics. https://doi.org/10.1007/s43681-022-00184-2

  • Riek, L. D., & Robinson, P. (2011). Challenges and opportunities in building socially intelligent machines [social sciences]. IEEE Signal Processing Magazine, 28(3), 146–149.

  • Robinette, P., Li, W., Allen, R., Howard, A. M., & Wagner, A. R. (2016). Overtrust of robots in emergency evacuation scenarios. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 101–108). IEEE.

  • Russell, S., Dewey, D., & Tegmark, M. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4), 105–114.

  • Sarathy, V., Arnold, T., & Scheutz, M. (2019). When exceptions are the norm: exploring the role of consent in HRI. ACM Transactions on Human-Robot Interaction (THRI), 8(3), 1–21.

  • Toh, C. K., Sanguesa, J. A., Cano, J. C., & Martinez, F. J. (2020). Advances in smart roads for future smart cities. Proceedings of the Royal Society A, 476(2233), 20190439.

  • Tolmeijer, S., Kneer, M., Sarasua, C., Christen, M., & Bernstein, A. (2020). Implementations in machine ethics: a survey. ACM Computing Surveys (CSUR), 53(6), 1–38.

  • Van Wynsberghe, A. (2013). Designing robots for care: care centered value-sensitive design. Science and Engineering Ethics, 19(2), 407–433.

  • Van Wynsberghe, A. (2020). Designing robots for care: care centered value-sensitive design. In Machine ethics and robot ethics (pp. 185–211). Routledge.

  • Van Wynsberghe, A., & Robbins, S. (2019). Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics, 25(3), 719–735.

  • Vanderelst, D., & Winfield, A. (2018). The dark side of ethical robots. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 317–322).

  • Voiklis, J., Kim, B., Cusimano, C., & Malle, B. F. (2016). Moral judgments of human vs. robot agents. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 775–780). IEEE.

  • Wallach, W., & Allen, C. (2008). Moral machines: teaching robots right from wrong. Oxford University Press.

Author information

Corresponding author

Correspondence to Thomas Arnold.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is governed solely by the terms of that publishing agreement and applicable law.

About this article

Cite this article

Arnold, T., Scheutz, M. Extended norms: locating accountable decision-making in contexts of human-robot interaction. Gr Interakt Org 53, 359–366 (2022). https://doi.org/10.1007/s11612-022-00645-6
