
Landscape of Machine Implemented Ethics

  • Original Research/Scholarship
  • Published in: Science and Engineering Ethics

Abstract

This paper surveys the state-of-the-art in machine ethics, that is, considerations of how to implement ethical behaviour in robots, unmanned autonomous vehicles, or software systems. The emphasis is on covering the breadth of ethical theories being considered by implementors, as well as the implementation techniques being used. There is no consensus on which ethical theory is best suited for any particular domain, nor is there any agreement on which technique is best placed to implement a particular theory. Another unresolved problem in these implementations of ethical theories is how to objectively validate the implementations. The paper discusses the dilemmas being used as validating ‘whetstones’ and whether any alternative validation mechanism exists. Finally, it speculates that an intermediate step of creating domain-specific ethics might be a possible stepping stone towards creating machines that exhibit ethical behaviour.


Notes

  1. A Markov Decision Process (MDP) is a mathematical framework for modelling sequential decision-making in partially random environments. It allows us to model the possible future states of an agent, given its current state, the actions available to it, and the probabilities of possible successor states.
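
     As a minimal, hypothetical sketch (not taken from the paper), the Python snippet below illustrates this idea: a transition table mapping a state and an action to possible successor states and their probabilities. All state names, actions, and probability values are invented for illustration.

        import random

        # Hypothetical MDP fragment for an ethical agent; the states, actions,
        # and probabilities below are invented for this sketch.
        # TRANSITIONS[state][action] is a list of (successor_state, probability) pairs.
        TRANSITIONS = {
            "patient_unattended": {
                "remind_medication": [("patient_medicated", 0.8),
                                      ("patient_unattended", 0.2)],
                "do_nothing": [("patient_unattended", 1.0)],
            },
            "patient_medicated": {
                "do_nothing": [("patient_medicated", 1.0)],
            },
        }

        def successor_distribution(state, action):
            """Possible future states, with probabilities, for a state-action pair."""
            return TRANSITIONS[state][action]

        def sample_successor(state, action):
            """Draw one successor state according to the transition probabilities."""
            successors = [s for s, _ in TRANSITIONS[state][action]]
            weights = [p for _, p in TRANSITIONS[state][action]]
            return random.choices(successors, weights=weights, k=1)[0]

        print(successor_distribution("patient_unattended", "remind_medication"))
        print(sample_successor("patient_unattended", "remind_medication"))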

  2. https://www.softbankrobotics.com/emea/en/nao.

References

  • Abel, D., MacGlashan, J., & Littman, M. L. (2016). Reinforcement learning as a framework for ethical decision making. In B. Bonet, et al. (Eds.), AAAI Workshop: AI, Ethics, and Society (pp. 54–61). AAAI Workshops: AAAI Press.

  • Anderson, M., & Anderson, S. L. (2007). Machine ethics: creating an ethical intelligent agent. AI Mag, 28(4), 15–26.

  • Anderson, M., Anderson, S. L., & Armen, C. (2006). MedEthEx: a prototype medical ethics advisor. In Proceedings of the National Conference on Artificial Intelligence (pp. 1759–1765). MIT Press.

  • Anderson, M., Anderson, S. L., & Berenz, V. (2019). A value-driven eldercare robot: virtual and physical instantiations of a case-supported principle-based behavior paradigm. Proceedings of the IEEE, 107(3), 526–540.

  • Anderson, S. L. (2011). The unacceptability of Asimov's three laws of robotics as a basis for machine ethics. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 285–296). Cambridge: Cambridge University Press.

  • Arkin, R. C. (2008). Governing lethal behavior. In Proceedings of the 3rd International Conference on Human Robot Interaction (pp. 121–128). ACM Press.

  • Armstrong, S. (2015). Motivated value selection for artificial agents. In AAAI Workshop: AI and Ethics (pp. 12–20).

  • Asimov, I. (1950). I, Robot. Gnome Press.

  • Beauchamp, T. L., & Childress, J. F. (1991). Principles of biomedical ethics. Ann Int Med, 114(9), 827.

  • Berreby, F., Bourgne, G., & Ganascia, J.-G. (2018). Event-based and scenario-based causality for computational ethics. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS ’18 (pp. 147–155).

  • Bjørgen, E., et al. (2018). Cake, death, and trolleys: dilemmas as benchmarks of ethical decision-making. In AAAI/ACM Conference on Artificial Intelligence, Ethics and Society (pp. 23–29).

  • Bogosian, K. (2017). Implementation of moral uncertainty in intelligent machines. Minds Mach, 27(4), 591–608.

  • Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576.

  • Briggs, G., & Scheutz, M. (2015). Sorry, I can’t do that: developing mechanisms to appropriately reject directives in human-robot interactions. In AAAI Fall Symposium Series (pp. 32–36).

  • Bringsjord, S., Arkoudas, K., & Bello, P. (2006). Toward a general logicist methodology for engineering ethically correct robots. IEEE Intelligent Systems, 21(4), 38–44.

  • Cointe, N., Bonnet, G., & Boissier, O. (2016). Ethical judgment of agents’ behaviors in multi-agent systems. In Proceedings of the 2016 International Conference on Autonomous Agents and Multiagent Systems, AAMAS ’16 (pp. 1106–1114). Singapore.

  • Dennis, L., et al. (2016). Formal verification of ethical choices in autonomous systems. Robotics and Autonomous Systems, 77, 1–14.

  • Foot, P. (1967). The problem of abortion and the doctrine of double effect. Oxford Review, (5).

  • Kittock, J.E. (1993). Emergent conventions and the structure of multi-agent systems. In Proceedings of the 1993 Santa Fe Institute Complex Systems Summer School. pp. 1–14.

  • Krishnan, A. (2009). Killer robots: Legality and ethicality of autonomous weapons. Ashgate Publishing, Ltd. ISBN 0754677265.

  • Lazar, S. (2017). War. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Metaphysics Research Lab, Stanford University.

  • Lewis, P. R., Goldingay, H., & Nallur, V. (2014). It’s good to be different: diversity, heterogeneity, and dynamics in collective systems. In Self-Adaptive and Self-Organizing Systems Workshops (SASOW) (pp. 84–89). IEEE.

  • Lindner, F., Bentzen, M. M., & Nebel, B. (2017b). The HERA approach to morally competent robots. In 2017 International Conference on Intelligent Robots and Systems (IROS) (pp. 6991–6997).

  • Lynn, L. A. (2019). Artificial intelligence systems for complex decision-making in acute care medicine: a review. Pat Saf Surg, 13(1), 6.

  • MacAskill, W. (2016). Normative uncertainty as a voting problem. Mind, 125(500), 967–1004.

  • Mackworth, A. K. (2011). Architectures and ethics for robots. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 335–360). Cambridge: Cambridge University Press.

  • Marques, H. G., & Holland, O. (2009). Architectures for functional imagination. Neurocomputing, 72(4–6), 743–759.

  • Masoum, A. S., et al. (2011). Smart load management of plug-in electric vehicles in distribution and residential networks with charging stations for peak shaving and loss minimisation considering voltage regulation. IET Gener Trans Distrib, 5(8), 877–888.

  • Moyle, W. (2017). Social robotics in dementia care. In B. A. Wilson, et al. (Eds.), Neuropsychological rehabilitation: the international handbook (pp. 458–466). New York: Routledge/Taylor & Francis Group.

  • Mundhenk, M., et al. (2000). Complexity of finite-horizon Markov decision process problems. Journal of the ACM, 47(4), 681–720.

  • Nallur, V., & Clarke, S. (2018). Clonal plasticity: an autonomic mechanism for multi-agent systems to self-diversify. Auton Agents Multi-Agent Syst, 32(2), 275–311.

  • Ross, W. D. (1987). Prima facie duties. In C. Gowans (Ed.), Moral dilemmas. Oxford University Press.

  • Serramia, M., et al. (2018). Exploiting moral values to choose the right norms. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 264–270). ACM Press.

  • Sharkey, A., & Sharkey, N. (2012). Granny and the robots: ethical issues in robot care for the elderly. Eth Inf Technol, 14(1), 27–40.

  • Shim, J., & Arkin, R. C. (2017). An intervening ethical governor for a robot mediator in patient-caregiver relationships. In A World with Robots (pp. 77–91). Springer.

  • Song, H., et al. (2015). On architectural diversity of dynamic adaptive systems. In 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering (pp. 595–598). IEEE.

  • Vanderelst, D., & Winfield, A. (2018). An architecture for ethical robots inspired by the simulation theory of cognition. Cognit Syst Res, 48, 56–66.

  • Yoon, J. H., Baldick, R., & Novoselac, A. (2014). Dynamic demand response controller based on real-time retail price for residential buildings. IEEE Trans Smart Grid, 5(1), 121–129.

Author information

Corresponding author

Correspondence to Vivek Nallur.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

• (Anderson et al. 2019) https://www.researchgate.net/publication/333999191_GenEth_Distributionzip

• (Vanderelst & Winfield 2018)—Not Found At Time of Writing

• (Berreby et al. 2018) https://github.com/FBerreby/Aamas2018

• (Lindner et al. 2017) https://www.hera-project.com/software/

• (Cointe et al. 2016)—Not Found At Time of Writing

• (Abel et al. 2016) https://github.com/david-abel/ethical_dilemmas

Cite this article

Nallur, V. Landscape of Machine Implemented Ethics. Sci Eng Ethics 26, 2381–2399 (2020). https://doi.org/10.1007/s11948-020-00236-y
