
Autonomous Driving and Perverse Incentives

  • Research Article
  • Published in Philosophy & Technology

Abstract

This paper discusses the ethical implications of perverse incentives with regard to autonomous driving. We define perverse incentives as a feature of an action, technology, or social policy that invites behavior which negates the primary goal of the actors initiating the action, introducing the technology, or implementing the policy. As a special form of means-end irrationality, perverse incentives are to be avoided from a prudential standpoint, as they prove to be directly self-defeating: they are not merely unintended side effects that must be balanced against the main goal or value to be realized by an action, technology, or policy. Instead, they directly cause the primary goals of the actors—i.e., the goals that they ultimately pursue with the action, technology, or policy—to be “worse achieved” (Parfit). We elaborate on this definition and distinguish three ideal-typical phases of adverse incentives, only the last of which crosses the threshold for a perverse incentive. In addition, we discuss the various actors relevant to implementing autonomous vehicles and their respective goals. We conclude that even if some actors do not pursue traffic safety as their primary goal, as part of a responsibility network they incur the responsibility to act on the common primary goal of the network, which we argue to be traffic safety.


Notes

  1. We refer to partially autonomous vehicles as vehicles with an SAE automation level of up to 4 (SAE International 2016). Fully autonomous vehicles, on the other hand, correspond to SAE automation level 5.

  2. This is confidential information from personal interviews. The sources have first-hand knowledge of these mechanisms but do not want their names to be disclosed; they are known to the authors. No studies to date systematically explore this particular phenomenon, although a variety of human factors surveys have been conducted (Tennant et al. 2016; Kyriakidis et al. 2017). In this paper, we are mainly interested in the possibility of such incentives and their ethical implications, and only secondarily concerned with the prevalence of the scenario.

  3. Borenstein et al. (2017) list five different aspects that may contribute to over-trust: individual psychological dispositions, cultural peculiarities, positivity biases, the design and behavior of an autonomous system, and its consistency and predictability.

  4. To our knowledge, there exists no specific philosophical literature on perverse incentives—with the exception of Parfit’s discussion of “directly self-defeating moral theories” (Parfit 1984, Part 1, esp. Sect. 2), which we will discuss shortly. We therefore draw on general descriptions of perverse incentives in the literature on risk assessment.

  5. For better readability, in the following, we refer to “technology, action, or social policy” as “TAP,” while “implementing a technology, carrying out an action, or introducing a social policy” will be shortened to “actualizing a TAP.”

  6. We are grateful to Hauke Behrendt for highlighting this point.

  7. As we have said before, the primary goal does not have to be a goal for the participant adopting the detrimental—or even negating—behavior. Rather, it is prima facie only a goal for the actor actualizing the TAP. Therefore, it is not per se irrational for the participant to adopt this kind of behavior.

  8. While “phases” refer to a temporal succession, we do not claim that there is necessarily a path dependency in the sense that once phase one is initiated, phase three has to be reached eventually. Rather, we want to highlight that a perverse incentive is often a gradual and incremental phenomenon, which, when closely monitored, can be prevented in its early stages.

  9. In this paper, we cannot engage in detail with the objection that by introducing AVs we are “trading” lives, as “the identities of many (future) fatality victims would change” (Lin 2013). This might be impermissible, since by reducing the number of traffic fatalities we might be disregarding the fact that “suffering is not additive in this way” (Taurek 1977, p. 308). By way of a short answer, we agree with Patrick Lin that traffic safety is a statistical goal that is operationalized in accidents per driven road mile (NHTSA 2014); see the illustrative formula at the end of these notes. This means that there is no actual life at stake but rather the risk of losing one’s life in an accident. But, as Lin points out, no one has the moral right “not to be an accident victim” (Lin 2013).

  10. Note that we are employing here a form of reason subjectivism. We are aware that Parfit himself subscribes to a form of objectivism with regard to reasons, what he calls “Reasons Fundamentalism.” For the notion of perverse incentives, however, it is important that by implementing a technology, the implementing actor negates her own conclusive reasons for implementing it, not what she has “most reason to do” (Scanlon 1998, pp. 33–34) from an objective standpoint.

  11. Motorcycles present a special case, as the harm from traffic accidents mainly falls on the bikers themselves. Since liberal societies typically try to refrain from acting paternalistically, the rationale behind not banning motorcycles from the road can be tied to the fact that riders on average mainly harm themselves. However, this aspect does not touch on our general point here.

  12. This seems to be the case, albeit not with regard to perverse incentives, when it comes to speed limits on the German Autobahn.

  13. From the supply side, however, AVs will most likely also be marketed as increasing overall traffic safety, as, e.g., Tesla’s Autopilot function is.

  14. In addition, the idea of such networks may mitigate or even solve the “problem of many hands” (van de Poel et al. 2011), i.e., the problem of a failure to distribute responsibility adequately within a collective agent.

  15. “Each of us” in this context refers to each instantiation of the kind of responsibility network that we have described here.
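
As a brief illustration of the operationalization mentioned in note 9 (a sketch on our part, assuming the standard NHTSA-style rate measure), traffic safety is tracked statistically as a fatality rate per distance driven, for instance

$$\text{fatality rate} = \frac{\text{number of traffic fatalities}}{\text{vehicle miles traveled}} \times 10^{8},$$

i.e., fatalities per 100 million vehicle miles traveled. On this reading, what an AV policy affects is aggregate risk exposure across all road users rather than the fate of any antecedently identified individual, which is what licenses the “statistical goal” formulation above.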

References

  • Aven, T. (2016). Risk assessment and risk management: Review of recent advances on their foundation. European Journal of Operational Research, 253, 1–13. https://doi.org/10.1016/j.ejor.2015.12.023.

  • Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352, 1573–1576. https://doi.org/10.1126/science.aaf2654.

  • Borenstein, J., Howard, A., & Wagner, A. (2017). Pediatric robotics and ethics: The robot is ready to see you now, but should it be trusted? In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0: New challenges in philosophy, law, and society (pp. 127–141). Oxford: Oxford Univ. Press.

  • Boudette, N. (2017). Tesla’s self-driving system cleared in deadly crash. The New York Times. https://www.nytimes.com/2017/01/19/business/tesla-model-s-autopilot-fatal-crash.html?_r=0. Accessed 27 February 2017.

  • Bratman, M. (1999). Faces of Intention: Selected Essays on Intention and Agency. Cambridge: Cambridge Univ. Press.

  • Dennett, D. (1989). The intentional stance. Cambridge Mass: MIT Press.

  • ERTRAC. (2017). Automated Driving Roadmap: Version 7.0. http://www.ertrac.org/uploads/documentsearch/id48/ERTRAC_Automated_Driving_2017.pdf. Accessed 25 November 2017.

  • Frewer, L. J., Howard, C., & Shepherd, R. (1997). Public concerns in the United Kingdom about general and specific applications of genetic engineering: Risk, benefit, and ethics. Science, Technology, & Human Values, 22, 98–124. https://doi.org/10.1177/016224399702200105.

  • Gibson, J. J. (1986). The ecological approach to visual perception. New York: Psychology Press.

  • Gilbert, M. (1990). Walking together: A paradigmatic social phenomenon. Midwest Studies in Philosophy, 15, 1–14.

  • Goddard, T., Dill, J., & Monsere, C. (2016). Driver Attitudes About Bicyclists: Negative Evaluations of Rule-Following and Predictability. Transportation Research Board 95th Annual Meeting.

  • Golson, J. (2017). Driver in fatal Tesla Autopilot crash had seven seconds to take action. The Verge. http://www.theverge.com/2017/1/19/14326604/tesla-autopilot-crash-driver-seven-seconds-inattentive-nhtsa. Accessed 23 March 2017.

  • Goodall, N. (2014). Machine Ethics and automated vehicles. In G. Meyer & S. Beiker (Eds.), Road vehicle automation (pp. 93–102, Lecture Notes in Mobility). Cham: Springer.

  • Goodrich, M. A., & Schultz, A. C. (2007). Human-robot interaction: A survey. Foundations and Trends in Human-Computer Interaction, 1, 203–275. https://doi.org/10.1561/1100000005.

  • Gosepath, S. (1992). Aufgeklärtes Eigeninteresse: Eine Theorie theoretischer und praktischer Rationalität. Frankfurt/Main: Suhrkamp.

  • Habermas, J. (1994). Three normative models of democracy. Constellations, 1(1), 1–10.

  • Hancock, P. (2014). Automation: How much is too much? Ergonomics, 57, 449–454. https://doi.org/10.1080/00140139.2013.816375.

  • Hansson, S. O. (2013). The ethics of risk: Ethical analysis in an uncertain world. New York: Palgrave Macmillan.

  • Jennings, N., Sycara, K., & Wooldridge, M. (1998). A roadmap of agent research and development. Autonomous Agents and Multi-Agent Systems, 1, 7–38.

  • Kirkpatrick, J., Hahn, E., & Haufler, A. (2017). Trust and human-robot interactions. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0: New challenges in philosophy, law, and society (pp. 142–156). Oxford: Oxford Univ. Press.

  • Kyriakidis, M., Happee, R., & de Winter, J. C. F. (2015). Public opinion on automated driving: Results of an international questionnaire among 5000 respondents. Transportation Research Part F: Traffic Psychology and Behaviour, 32, 127–140. https://doi.org/10.1016/j.trf.2015.04.014.

  • Kyriakidis, M., de Winter, J. C. F., Stanton, N., Bellet, T., van Arem, B., Brookhuis, K., et al. (2017). A human factors perspective on automated driving. Theoretical Issues in Ergonomics Science, 53, 1–27. https://doi.org/10.1080/1463922X.2017.1293187.

  • Levin, S., & Woolf, N. (2016, July 1). Tesla driver killed while using autopilot was watching Harry Potter, witness says. The Guardian. https://www.theguardian.com/technology/2016/jul/01/tesla-driver-killed-autopilot-self-driving-car-harry-potter. Accessed 2 June 2018.

  • Lin, P. (2013). The ethics of saving lives with autonomous cars is far murkier than you think. Wired. https://www.wired.com/2013/07/the-surprising-ethics-of-robot-cars/. Accessed 16 August 2017.

  • Lin, P. (2014, August 18). Here’s a Terrible Idea: Robot Cars with Adjustable Ethics Settings. Wired. http://www.wired.com/2014/8/heres-a-terrible-idea-robotcars-with-adjustable-ethics-settings/. Accessed 30 November 2017.

  • Lin, P. (2016). Is Tesla responsible for the deadly crash on auto-pilot? Maybe. Forbes. https://www.forbes.com/sites/patricklin/2016/07/01/is-tesla-responsible-for-the-deadly-crash-on-auto-pilot-maybe/#23ec768b1c07. Accessed 27 February 2017.

  • Lin, P., Abney, K., & Jenkins, R. (Eds.). (2017). Robot Ethics 2.0: New Challenges in Philosophy, Law, and Society. Oxford: Oxford Univ. Press.

  • List, C., & Pettit, P. (2011). Group agency: The possibility, design, and status of corporate agents. Oxford: Oxford Univ. Press.

  • Loh, J., & Loh, W. (2017). Autonomy and responsibility in hybrid systems: The example of autonomous cars. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0: New challenges in philosophy, law, and society. Oxford UK: Oxford Univ. Press.

  • Madigan, R., Louw, T., Dziennus, M., Graindorge, T., Ortega, E., Graindorge, M., et al. (2016). Acceptance of automated road transport systems (ARTS): An adaptation of the UTAUT model. Transportation Research Procedia, 14, 2217–2226. https://doi.org/10.1016/j.trpro.2016.05.237.

  • Merton, R. (1936). The unanticipated consequences of purposive social action. American Sociological Review, 1(6), 894–904.

  • Meyer, G., & Beiker, S. (Eds.). (2016). Road vehicle automation 3 (Lecture notes in mobility). Cham: Springer International Publishing.

  • Misselhorn, C. (2013). Robots as Moral Agents. In F. Rövekamp & B. Friederike (Eds.), Ethics in Science and Society: German and Japanese Views (pp. 30–42). München: Iudicium.

  • Misselhorn, C. (2015). Collective agency and cooperation in natural and artificial systems. In C. Misselhorn (Ed.), Collective agency and cooperation in natural and artificial systems: Explanation, implementation and simulation (pp. 3–24). London: Springer.

  • Neuhäuser, C. (2015). Some Sceptical remarks regarding robot responsibility and a way forward. In C. Misselhorn (Ed.), Collective agency and cooperation in natural and artificial systems: Explanation, implementation and simulation (pp. 131–146). London: Springer.

  • NHTSA. (2014). Fatalities by State and Road Function Class. https://www-fars.nhtsa.dot.gov/States/StatesCrashesAndAllVictims.aspx. Accessed 23 April 2017.

  • Norcross, A. (1998). Great harms from small benefits grow: How death can be outweighed by headaches. Analysis, 58(2), 152–158.

  • Norman, D. A. (1988). The psychology of everyday things. New York, NY: Basic Books.

  • NTMVSA. (1966). National Traffic and Motor Vehicle Safety Act: Public Law 89–563.

  • Nyholm, S. (2017). Attributing agency to automated systems: Reflections on human–robot collaborations and responsibility-loci. Science and Engineering Ethics.

  • Nyholm, S., & Smids, J. (2016). The ethics of accident-algorithms for self-driving cars: An applied trolley problem? Ethical Theory and Moral Practice, 19, 1275–1289. https://doi.org/10.1007/s10677-016-9745-2.

  • Nyholm, S., & Smids, J. (2018). Automated cars meet human drivers: Responsible human-robot coordination and the ethics of mixed traffic. Ethics and Information Technology, 9, 332. https://doi.org/10.1007/s10676-018-9445-9.

  • Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39, 230–253. https://doi.org/10.1518/001872097778543886.

  • Parfit, D. (1984). Reasons and persons. Oxford: Oxford Univ. Press.

  • Parfit, D. (2011). On what matters. Oxford: Oxford University Press.

  • Peden, M., Scurfield, R., Sleet, D., Mohan, D., Hyder, A., Jarawan, E., et al. (2004). World report on road traffic injury prevention. Geneva: World Health Organization. http://cdrwww.who.int/violence_injury_prevention/publications/road_traffic/world_report/intro.pdf. Accessed 30 November 2017.

  • Rayna, T., Striukova, L., & Landau, S. (2009). Crossing the chasm or being crossed out: The case of digital audio players. International Journal of Actor-Network Theory and Technological Innovation, 1(3), 36–54.

  • Raz, J. (2005). The myth of instrumental rationality. Journal of Ethics and Social Philosophy, 1, 1–28. https://doi.org/10.26556/jesp.v1i1.1.

  • Renn, O. (2008). Risk governance: Coping with uncertainty in a complex world. London: Earthscan.

  • Robinette, P., Li, W., Allen, R., Howard, A. M., & Wagner, A. R. (2016). Overtrust of Robots in Emergency Evacuation Scenarios (pp. 101–108). Piscataway NJ: IEEE Press.

  • Roeser, S., Hillerbrand, R., Sandin, P., & Peterson, M. (Eds.). (2012). Handbook of risk theory. Dordrecht: Springer Netherlands.

  • Rogers, E. M. (1962). Diffusion of innovations. New York: Free Press of Glencoe.

  • SAE International. (2016). Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. Washington DC: SAE International.

  • Santoni de Sio, F. (2016). Ethics and self-driving cars: A white paper on responsible innovation in automated driving systems. Dutch Ministry of Infrastructure and Environment Rijkswaterstaat.

  • Scanlon, T. (1998). What we owe to each other. Cambridge MA: Belknap Press of Harvard Univ. Press.

  • Scanlon, T. (2007). Structural irrationality. In G. Brennan, R. Goodin, F. Jackson, & M. Smith (Eds.), Common minds: Themes from the philosophy of Philip Pettit (pp. 84–103). Oxford: Clarendon.

  • Searle, J. (1990). Collective intentions and actions. In P. Cohen, J. Morgan, & M. Pollack (Eds.), Intentions in communication (pp. 401–415). Cambridge MA: MIT Press.

  • Siebert, H. (2001). Der Kobra-Effekt: Wie man Irrwege der Wirtschaftspolitik vermeidet. München: Deutsche Verlags-Anstalt.

  • Smith, M. (1994). The moral problem, Philosophical theory. Oxford: Blackwell.

  • Stilgoe, J. (2017). Tesla crash report blames human error - this is a missed opportunity. The Guardian. https://www.theguardian.com/science/political-science/2017/jan/21/tesla-crash-report-blames-human-error-this-is-a-missed-opportunity. Accessed 23 March 2017.

  • Taebi, B. (2017). Bridging the gap between social acceptance and ethical acceptability. Risk Analysis, 37, 1817–1827. https://doi.org/10.1111/risa.12734.

  • Taurek, J. (1977). Should the numbers count? Philosophy and Public Affairs, 6(4), 293–316.

  • Tennant, C., Howard, S., Franks, B., Bauer, M., Stares, S., Pansegrau, P., et al. (2016). Autonomous Vehicles - Negotiating a Place on the Road: A study on how drivers feel about interacting with autonomous vehicles on the road. http://www.lse.ac.uk/website-archive/newsAndMedia/PDF/AVs-negociating-a-place-on-the-road-1110.pdf. Accessed 16 August 2017.

  • Tesla Motors. (2016). A Tragic Loss. https://www.tesla.com/blog/tragic-loss. Accessed 23 March 2017.

  • van de Poel, I., Fahlquist, J. N., Doorn, N., Zwart, S., & Royakkers, L. (2011). The problem of many hands: Climate change as an example. Science and Engineering Ethics, 18(1), 49–67.

  • Voorhoeve, A. (2014). How should we aggregate competing claims? Ethics, 125(1), 64–87.

  • Wallach, W., & Allen, C. (2009). Moral Machines: Teaching Robots Right from Wrong. Oxford, New York: Oxford University Press.

  • Weber, M. (1978). Economy and Society. Berkeley CA: University of California Press.

  • de Winter, J. C. F., Happee, R., Martens, M. H., & Stanton, N. A. (2014). Effects of adaptive cruise control and highly automated driving on workload and situation awareness: A review of the empirical evidence. Transportation Research Part F: Traffic Psychology and Behaviour, 27, 196–217. https://doi.org/10.1016/j.trf.2014.06.016.

Author information

Corresponding author

Correspondence to Wulf Loh.

Cite this article

Loh, W., Misselhorn, C. Autonomous Driving and Perverse Incentives. Philos. Technol. 32, 575–590 (2019). https://doi.org/10.1007/s13347-018-0322-6
