Autonomous vehicles must be programmed with procedures for dealing with trolley-style dilemmas where actions result in harm to either pedestrians or passengers. This paper outlines a Rawlsian algorithm as an alternative to the Utilitarian solution. The algorithm will gather the vehicle’s estimation of probability of survival for each person in each action, then calculate which action a self-interested person would agree to if he or she were in an original bargaining position of fairness. I will employ Rawls’ assumption that the Maximin procedure is what self-interested agents would use from an original position, and then show how the Maximin procedure can be operationalized to produce unique outputs over probabilities of survival.
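The Maximin procedure over survival probabilities can be sketched in a few lines. This is an illustrative reading, not the paper's implementation: I assume a leximin rule (maximize the worst-off person's survival probability, break ties by the next-worst, and so on), which is one standard way to make Maximin produce unique outputs, and the dilemma numbers below are invented.

```python
# Hedged sketch (not the paper's code): leximin selection over
# per-person survival probabilities, one list per available action.

def maximin_choice(actions):
    """Pick the action a self-interested agent behind the veil of
    ignorance would choose under Maximin.

    `actions` maps an action label to a list of survival
    probabilities, one per person affected by that action.
    """
    # Sorting each action's probabilities ascending and comparing the
    # resulting tuples lexicographically implements leximin: the worst
    # outcome is compared first, then the second-worst, and so on.
    return max(actions, key=lambda a: tuple(sorted(actions[a])))

# Hypothetical dilemma: braking endangers a pedestrian, swerving
# endangers the passenger.
options = {
    "brake":  [0.9, 0.2],   # passenger 0.9, pedestrian 0.2
    "swerve": [0.5, 0.6],   # passenger 0.5, pedestrian 0.6
}
print(maximin_choice(options))  # "swerve": its worst-off person (0.5) beats 0.2
```

Note that a Utilitarian sum would prefer "brake" here (1.1 vs. 1.1 — actually a tie in this toy case), while leximin decides strictly by the worst-off person, which is what yields a unique output.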
More generally, the dilemma occurs whenever every action available to the vehicle will result in some amount of expected harm, whether this is from collisions with other vehicles, motorcycles, bicyclists, or pedestrians.
The SH game comes from a story told by Jean-Jacques Rousseau about two hunters who could decide to either cooperate and hunt stag for a larger mutual payoff, or defect and decide to hunt hare for a lesser but still acceptable dinner (Skyrms 2003). The problem is that catching a stag requires two hunters, and so cooperating still makes the cooperator vulnerable. However, in this case (as opposed to PD), the other player doesn’t have as much incentive to cheat, since a rabbit dinner could just as well be obtained from both players defecting.
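The structural difference from the Prisoner's Dilemma can be verified with a small sketch: in the Stag Hunt, mutual cooperation is itself a Nash equilibrium, alongside mutual defection. The payoff numbers below are assumptions chosen only to satisfy the SH ordering (stag/stag best, hare safe, stag against a hare-hunter worst); they are not from the text.

```python
# Hedged sketch with assumed payoffs: the Stag Hunt has two pure-strategy
# Nash equilibria, (stag, stag) and (hare, hare), unlike the Prisoner's
# Dilemma, where defection strictly dominates.

# Row player's payoffs; the game is symmetric.
PAYOFF = {
    ("stag", "stag"): 4, ("stag", "hare"): 0,
    ("hare", "stag"): 3, ("hare", "hare"): 3,
}

def best_response(opponent):
    # The move that maximizes my payoff given the opponent's move.
    return max(("stag", "hare"), key=lambda me: PAYOFF[(me, opponent)])

def is_nash(profile):
    # A profile is a Nash equilibrium if each move is a best response
    # to the other.
    me, other = profile
    return best_response(other) == me and best_response(me) == other

equilibria = [p for p in PAYOFF if is_nash(p)]
print(equilibria)  # [('stag', 'stag'), ('hare', 'hare')]
```

Cooperating is the best response to a cooperator, so a stag hunter is only vulnerable to an opponent who defects, which is exactly the weakened incentive to cheat described above.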
I am here ignoring the differences between Utilitarian procedures that sum the total and those that take an average (or weighted average). There are many sophisticated versions of the Utilitarian calculation, but I will only consider the most basic form here.
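The difference being set aside here is not trivial: with made-up numbers (not from the paper), summing and averaging can rank the same two actions in opposite orders whenever the actions affect different numbers of people.

```python
# Hedged illustration with hypothetical numbers: total vs. average
# Utilitarian calculations over survival probabilities.

def total(utils):
    return sum(utils)

def average(utils):
    return sum(utils) / len(utils)

# Hypothetical survival probabilities for everyone affected by each action.
act_a = [0.5, 0.5, 0.5, 0.5]  # four people, even odds each
act_b = [0.9, 0.9]            # two people, strong odds each

print(total(act_a) > total(act_b))      # True: summing favors act_a (2.0 vs 1.8)
print(average(act_b) > average(act_a))  # True: averaging favors act_b (0.9 vs 0.5)
```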
This is not the response that Rawls would make, since he advocates a reflective equilibrium between our intuitions and our moral theories.
Anderson, M., Anderson, S. L., & Armen, C. (2004). Towards machine ethics. AAAI-04 Workshop on Agent Organizations: Theory and Practice.
Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge: Cambridge University Press.
Anderson, S. L., & Anderson, M. (2011). A prima facie duty approach to machine ethics and its application to elder care. Human-Robot Interaction in Elder Care: Papers from the 2011 AAAI Workshop (WS-11-12).
Binmore, K. (2005). Natural justice. Oxford: Oxford University Press.
Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352, 1573–1576.
Gauthier, D. (1986). Morals by agreement. Oxford: Oxford University Press.
Harsanyi, J. (1975). Can the maximin principle serve as a basis for morality? A critique of John Rawls' theory. The American Political Science Review, 69, 594–606.
Hobbes, T. (1651). Leviathan. New York: Penguin Books.
Lin, P. (2011). Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press.
Nord, E. (1999). Cost-value analysis in health care. Cambridge: Cambridge University Press.
Powers, T. (2006). Prospects for a Kantian machine. IEEE Intelligent Systems, 21, 46–51.
Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.
Sassi, F. (2006). Calculating QALYs, comparing QALY and DALY calculations. Health Policy and Planning, 21, 402–408.
Skyrms, B. (2003). The stag hunt and the evolution of social structure. Cambridge: Cambridge University Press.
Wallach, W., & Allen, C. (2010). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.
Cite this article
Leben, D. A Rawlsian algorithm for autonomous vehicles. Ethics Inf Technol 19, 107–115 (2017). https://doi.org/10.1007/s10676-017-9419-3
Keywords
- Autonomous vehicles
- Trolley problem