
A Rawlsian algorithm for autonomous vehicles

  • Original Paper
  • Journal: Ethics and Information Technology

Abstract

Autonomous vehicles must be programmed with procedures for dealing with trolley-style dilemmas in which every available action results in harm to either pedestrians or passengers. This paper outlines a Rawlsian algorithm as an alternative to the Utilitarian solution. The algorithm gathers the vehicle's estimates of the probability of survival for each person under each available action, then computes which action self-interested agents would agree to if they were bargaining from an original position of fairness. I employ Rawls' assumption that the Maximin procedure is what self-interested agents would use from the original position, and then show how the Maximin procedure can be operationalized to produce unique outputs over probabilities of survival.
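To give a sense of how such a procedure might look in code, the following is a minimal sketch in Python of a Maximin selection over survival probabilities, with a leximin-style tie-break to yield a unique output. The data layout, function name, and numbers are illustrative assumptions, not the paper's own implementation.

    from typing import Dict, List

    def maximin_choice(actions: Dict[str, List[float]]) -> str:
        """Pick the action whose worst-off person has the highest estimated
        probability of survival; ties fall through to the next-worst position."""
        # Sort each action's survival probabilities from worst-off to best-off.
        profiles = {action: sorted(probs) for action, probs in actions.items()}
        # Python compares lists element-wise, so max() over the sorted profiles
        # first compares the minima, then the second-lowest values, and so on
        # (a leximin-style tie-break that yields a unique choice).
        return max(profiles, key=profiles.get)

    if __name__ == "__main__":
        # Hypothetical survival-probability estimates for each person under each action.
        options = {
            "swerve":   [0.30, 0.95],   # worst-off person survives with p = 0.30
            "straight": [0.50, 0.60],   # worst-off person survives with p = 0.50
        }
        print(maximin_choice(options))  # -> "straight"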



Notes

  1. More generally, the dilemma occurs whenever every action available to the vehicle will result in some amount of expected harm, whether this is from collisions with other vehicles, motorcycles, bicyclists, or pedestrians.

  2. The Stag Hunt (SH) game comes from a story told by Jean-Jacques Rousseau about two hunters who can either cooperate and hunt stag for a larger mutual payoff, or defect and hunt hare for a lesser but still acceptable dinner (Skyrms 2003). The problem is that catching a stag requires both hunters, so cooperating leaves the cooperator vulnerable. In this case, however (as opposed to the PD), the other player has less incentive to cheat, since a rabbit dinner can just as well be obtained when both players defect (see the payoff sketch following these notes).

  3. I am here ignoring the differences between Utilitarian procedures that sum the total and those that take an average (or weighted average). There are many sophisticated versions of the Utilitarian calculation, but I will only consider the most basic form here (sketched briefly after these notes).

  4. This is not the response that Rawls would make, since he advocates a reflective equilibrium between our intuitions and our moral theories.
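As mentioned in note 2, the contrast between the two games is easiest to see in their payoff structures. The Python sketch below uses arbitrary example payoffs, chosen only to satisfy each game's defining inequalities; they are not values from the paper.

    # Row player's payoffs under illustrative numbers. In the Stag Hunt,
    # cooperating is the best reply to a cooperator, so mutual cooperation is
    # stable; in the Prisoner's Dilemma, defecting is the best reply to every
    # opponent move.
    STAG_HUNT = {("C", "C"): 4, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 3}
    PRISONERS_DILEMMA = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def best_reply(payoffs, opponent_move):
        """Return the row player's payoff-maximizing move against a fixed opponent move."""
        return max(("C", "D"), key=lambda my_move: payoffs[(my_move, opponent_move)])

    print(best_reply(STAG_HUNT, "C"))          # "C": no incentive to cheat on a cooperator
    print(best_reply(PRISONERS_DILEMMA, "C"))  # "D": cheating pays even against a cooperator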
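As mentioned in note 3, the most basic form of the Utilitarian calculation can be sketched in the same style as the Maximin example following the abstract; the data layout and numbers are again illustrative assumptions. Replacing the sum with a mean (or weighted mean) would give the averaging variants that the note sets aside.

    def utilitarian_choice(actions):
        """Pick the action with the greatest total survival probability."""
        return max(actions, key=lambda action: sum(actions[action]))

    options = {"swerve": [0.30, 0.95], "straight": [0.50, 0.60]}
    print(utilitarian_choice(options))  # -> "swerve" (total 1.25 vs. 1.10)
    # On the same numbers, the Maximin sketch selects "straight" instead,
    # which is the kind of divergence the paper examines.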

References

  • Anderson, M., Anderson, S. L., & Armen, C. (2004). Towards machine ethics. AAAI-04 Workshop on Agent Organizations: Theory and Practice.

  • Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge: Cambridge University Press.

  • Anderson, S. L., & Anderson, M. (2011). A prima facie duty approach to machine ethics and its application to elder care. Human-Robot Interaction in Elder Care: Papers from the 2011 AAAI Workshop (WS-11-12).

  • Binmore, K. (2005). Natural justice. Oxford: Oxford University Press.

  • Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352, 1573–1576.

  • Gauthier, D. (1986). Morals by agreement. Oxford: Oxford University Press.

  • Harsanyi, J. (1975). Can the maximin principle serve as a basis for morality? A critique of John Rawls' theory. The American Political Science Review, 69, 594–606.

  • Hobbes, T. (1651). Leviathan. New York: Penguin Books.

  • Lin, P. (2011). Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press.

  • Nord, E. (1999). Cost-value analysis in health care. Cambridge: Cambridge University Press.

  • Powers, T. (2006). Prospects for a Kantian machine. IEEE Intelligent Systems, 21, 46–51.

  • Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.

  • Sassi, F. (2006). Calculating QALYs, comparing QALY and DALY calculations. Health Policy and Planning, 21, 402–408.

  • Skyrms, B. (2003). The stag hunt and the evolution of social structure. Cambridge: Cambridge University Press.

  • Wallach, W., & Allen, C. (2010). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.


Author information


Corresponding author

Correspondence to Derek Leben.


About this article


Cite this article

Leben, D. A Rawlsian algorithm for autonomous vehicles. Ethics Inf Technol 19, 107–115 (2017). https://doi.org/10.1007/s10676-017-9419-3

