A Rawlsian algorithm for autonomous vehicles

Abstract

Autonomous vehicles must be programmed with procedures for dealing with trolley-style dilemmas, where every available action results in harm to either pedestrians or passengers. This paper outlines a Rawlsian algorithm as an alternative to the Utilitarian solution. The algorithm gathers the vehicle's estimates of the probability of survival for each person under each available action, then calculates which action a self-interested person would agree to from an original bargaining position of fairness. I employ Rawls' assumption that the Maximin procedure is what self-interested agents would use from an original position, and then show how the Maximin procedure can be operationalized to produce unique outputs over probabilities of survival.
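
As a minimal sketch of how such a Maximin selection over survival probabilities might be implemented, assuming the probability estimates are already available and using a leximin rule to break ties (the function and action names below are illustrative, not from the paper):

    # Maximin over survival probabilities: choose the action whose
    # worst-off person has the highest probability of survival,
    # breaking ties by the next-worst person, and so on (leximin).
    def maximin_choice(actions):
        # actions: dict mapping an action label to the list of survival
        # probabilities it yields, one entry per affected person.
        # Sorting ascending puts the worst-off person first; comparing
        # the sorted lists lexicographically then implements leximin.
        return max(actions, key=lambda a: sorted(actions[a]))

    # Assumed numbers: braking leaves the worst-off person a 0.2 chance
    # of survival, swerving a 0.4 chance, so swerving is selected.
    print(maximin_choice({"brake": [0.2, 0.9, 0.9], "swerve": [0.4, 0.5, 0.9]}))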

Notes

  1. More generally, the dilemma occurs whenever every action available to the vehicle will result in some amount of expected harm, whether this is from collisions with other vehicles, motorcycles, bicyclists, or pedestrians.

  2. The stag hunt (SH) game comes from a story told by Jean-Jacques Rousseau about two hunters who could decide to either cooperate and hunt stag for a larger mutual payoff, or defect and hunt hare for a lesser but still acceptable dinner (Skyrms 2003). The problem is that catching a stag requires two hunters, so cooperating leaves the cooperator vulnerable. However, in this case (as opposed to the Prisoner's Dilemma), the other player doesn't have as much incentive to cheat, since a hare dinner could just as well be obtained when both players defect.
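
    An illustrative payoff matrix (row player's payoff listed first; the numbers are assumed here only to exhibit the SH ordering, not taken from Skyrms 2003):

                  Stag      Hare
        Stag     (3, 3)    (0, 2)
        Hare     (2, 0)    (2, 2)

    Both (Stag, Stag) and (Hare, Hare) are equilibria: mutual stag hunting pays more, but hunting hare guarantees the same acceptable payoff whatever the other player does.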

  3. I am here ignoring the differences between Utilitarian procedures that sum the total and those that take an average (or weighted average). There are many sophisticated versions of the Utilitarian calculation, but I will only consider the most basic form here.
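
    In the notation of the abstract (the symbols are mine, not the paper's), with $p_i(a)$ the probability of survival of person $i$ under action $a$ and $n$ the number of people affected, the two basic forms are

        $U_{\mathrm{sum}}(a) = \sum_{i=1}^{n} p_i(a)$ and $U_{\mathrm{avg}}(a) = \frac{1}{n}\sum_{i=1}^{n} p_i(a)$,

    which rank actions identically whenever the same $n$ people are affected by every available action.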

  4. This is not the response that Rawls would make, since he advocates a reflective equilibrium between our intuitions and our moral theories.

References

  1. Anderson, M., Anderson, S. L., & Armen, C. (2004). Towards machine ethics. AAAI-04 Workshop on Agent Organizations: Theory and Practice.

  2. Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge: Cambridge University Press.

  3. Anderson, S. L., & Anderson, M. (2011). A prima facie duty approach to machine ethics and its application to elder care. Human-Robot Interaction in Elder Care: Papers from the 2011 AAAI Workshop (WS-11-12).

  4. Binmore, K. (2005). Natural justice. Oxford: Oxford University Press.

  5. Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352, 1573–1576.

  6. Gauthier, D. (1986). Morals by agreement. Oxford: Oxford University Press.

  7. Harsanyi, J. (1975). Can the maximin principle serve as a basis for morality? A critique of John Rawls's theory. The American Political Science Review, 69, 594–606.

  8. Hobbes, T. (1651). Leviathan. New York: Penguin Books.

  9. Lin, P., Abney, K., & Bekey, G. A. (Eds.). (2011). Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press.

  10. Nord, E. (1999). Cost-value analysis in health care. Cambridge: Cambridge University Press.

  11. Powers, T. (2006). Prospects for a Kantian machine. IEEE Intelligent Systems, 21, 46–51.

  12. Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.

  13. Sassi, F. (2006). Calculating QALYs, comparing QALY and DALY calculations. Health Policy and Planning, 21, 402–408.

  14. Skyrms, B. (2003). The stag hunt and the evolution of social structure. Cambridge: Cambridge University Press.

  15. Wallach, W., & Allen, C. (2010). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.

Author information

Correspondence to Derek Leben.

About this article

Cite this article

Leben, D. A Rawlsian algorithm for autonomous vehicles. Ethics Inf Technol 19, 107–115 (2017). https://doi.org/10.1007/s10676-017-9419-3

Keywords

  • Autonomous vehicles
  • Ethics
  • Rawls
  • Trolley problem