
Ethics and Information Technology, Volume 21, Issue 1, pp 19–28

The value alignment problem: a geometric approach

  • Martin Peterson
Original Paper

Abstract

Stuart Russell defines the value alignment problem as follows: How can we build autonomous systems with values that “are aligned with those of the human race”? In this article I outline some distinctions that are useful for understanding the value alignment problem and then propose a solution: I argue that the methods currently applied by computer scientists for embedding moral values in autonomous systems can be improved by representing moral principles as conceptual spaces, i.e. as Voronoi tessellations of morally similar choice situations located in a multidimensional geometric space. The advantage of my preferred geometric approach is that it can be implemented without specifying any utility function ex ante.
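The geometric approach described above can be illustrated with a minimal sketch. The feature dimensions, prototype cases, and principle names below are purely hypothetical assumptions for illustration, not the paper's actual data or model: moral principles are represented by prototype cases in a feature space, and nearest-prototype classification of a new choice situation induces exactly the Voronoi tessellation the abstract describes, with no utility function specified ex ante.

```python
import math

# Hypothetical 2-D feature space for choice situations, e.g.
# (probability of harm, reversibility of outcome), each scaled to [0, 1].
# Prototypes are paradigm cases for each (illustrative) moral principle.
PROTOTYPES = {
    "cost-benefit principle": (0.2, 0.9),
    "precautionary principle": (0.9, 0.1),
    "autonomy principle": (0.5, 0.5),
}

def classify(situation):
    """Return the principle whose prototype is nearest to the situation.

    Nearest-prototype classification partitions the space into Voronoi
    cells: each principle governs the set of points closer to its
    prototype than to any other prototype.
    """
    return min(PROTOTYPES, key=lambda p: math.dist(situation, PROTOTYPES[p]))

print(classify((0.8, 0.2)))  # a high-risk, hard-to-reverse case
# → precautionary principle
```

The design point is that morally similar cases (nearby points) are treated alike, and the boundaries between principles fall out of the geometry rather than from an explicit utility function.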

Keywords

Value alignment problem · Autonomous systems · Conceptual spaces · Self-driving cars · Stuart Russell · IEEE

References

  1. Anderson, M., & Anderson, S. L. (2014). GenEth: A general ethical dilemma analyzer. Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, 253–261.
  2. Attfield, R. (2014). Environmental ethics: An overview for the twenty-first century. New York: Wiley.
  3. Bostrom, N. (2014). Superintelligence. Oxford: Oxford University Press.
  4. Brown, C. (2011). Consequentialize this. Ethics, 121(4), 749–771.
  5. Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538(7625).
  6. Dafoe, A., & Russell, S. (2016). Yes, we are worried about the existential risk of artificial intelligence. MIT Technology Review.
  7. Gärdenfors, P. (2000). Conceptual spaces: The geometry of thought. Cambridge: MIT Press.
  8. Gärdenfors, P. (2014). The geometry of meaning: Semantics based on conceptual spaces. Cambridge: MIT Press.
  9. Goodall, N. J. (2016). Can you program ethics into a self-driving car? IEEE Spectrum, 53(6), 28–58.
  10. Guardian Staff and Agencies. (2018). Tesla car that crashed and killed driver was running on Autopilot, firm says. The Guardian, March 31, 2018.
  11. Hadfield-Menell, D., Dragan, A., Abbeel, P., & Russell, S. (2016). The off-switch game. arXiv preprint arXiv:1611.08219.
  12. IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2017a). Ethically Aligned Design (EAD), Version 2. Retrieved January 26, 2018, from http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html.
  13. IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2017b). Classical Ethics in A/IS. Retrieved January 26, 2018, from https://standards.ieee.org/develop/indconn/ec/ead_classical_ethics_ais_v2.pdf.
  14. Jonsen, A. R., & Toulmin, S. E. (1988). The abuse of casuistry: A history of moral reasoning. University of California Press.
  15. Kruskal, J. B., & Wish, M. (1978). Multidimensional scaling. New York: Sage Publications.
  16. Lokhorst, G. J. C. (2018). Science and Engineering Ethics, 415–417. https://doi.org/10.1007/s11948-017-0014-0.
  17. Milli, S., Hadfield-Menell, D., Dragan, A., & Russell, S. (2017). Should robots be obedient? arXiv preprint arXiv:1705.09990.
  18. Nyholm, S., & Smids, J. (2016). The ethics of accident-algorithms for self-driving cars: An applied trolley problem? Ethical Theory and Moral Practice, 19(5), 1275–1289.
  19. Paulo, N. (2015). Casuistry as common law morality. Theoretical Medicine and Bioethics, 36(6), 373–389.
  20. Peterson, M. (2013). The dimensions of consequentialism: Ethics, equality and risk. Cambridge: Cambridge University Press.
  21. Peterson, M. (2017). The ethics of technology: A geometric analysis of five moral principles. Oxford: Oxford University Press.
  22. Peterson, M. (2018). The ethics of technology: Response to critics. Science and Engineering Ethics. https://doi.org/10.1007/s119.
  23. Rosch, E. (1975). Cognitive reference points. Cognitive Psychology, 7, 532–547.
  24. Rosch, E. H. (1973). Natural categories. Cognitive Psychology, 4, 328–350.
  25. Russell, S. (2016). Should we fear supersmart robots? Scientific American, 314(6), 58–59.
  26. Shrader-Frechette, K. (2017). Review of The ethics of technology: A geometric analysis of five moral principles. Notre Dame Philosophical Reviews. Retrieved November 11, 2017, from http://ndpr.nd.edu/news/the-ethics-of-technology-a-geometric-analysis-of-five-moral-principles/.
  27. Stewart, A., Prandy, K., & Blackburn, R. M. (1973). Measuring the class structure. Nature, 245, 415.
  28. Taylor, M. (2016). Self-driving Mercedes-Benzes will prioritize occupant safety over pedestrians. Retrieved January 26, 2018, from https://blog.caranddriver.com/self-driving-mercedes-will-prioritize-occupant-safety-over-pedestrians.

Copyright information

© Springer Nature B.V. 2018

Authors and Affiliations

  1. Department of Philosophy, Texas A&M University, College Station, USA
