
Philosophy & Technology, Volume 32, Issue 4, pp. 575–590

Autonomous Driving and Perverse Incentives

  • Wulf Loh
  • Catrin Misselhorn
Research Article

Abstract

This paper discusses the ethical implications of perverse incentives with regard to autonomous driving. We define a perverse incentive as a feature of an action, technology, or social policy that invites behavior which negates the primary goal of the actors initiating the action, introducing the technology, or implementing the policy. As a special form of means-end irrationality, perverse incentives are to be avoided from a prudential standpoint because they are directly self-defeating: they are not merely unintended side effects that must be balanced against the main goal or value an action, technology, or policy is meant to realize. Rather, they directly cause the primary goals of the actors (i.e., the goals they ultimately pursue with the action, technology, or policy) to be “worse achieved” (Parfit). In this paper, we elaborate on this definition and distinguish three ideal-typical phases of adverse incentives, only the last of which crosses the threshold to a perverse incentive. In addition, we discuss the various actors relevant to the implementation of autonomous vehicles and their respective goals. We conclude that even actors who do not pursue traffic safety as their primary goal incur, as part of a responsibility network, the responsibility to act on the common primary goal of the network, which, we argue, is traffic safety.

Keywords

Autonomous driving · Perverse incentives · Robot ethics · Responsibility networks

References

  1. Aven, T. (2016). Risk assessment and risk management: Review of recent advances on their foundation. European Journal of Operational Research, 253, 1–13. https://doi.org/10.1016/j.ejor.2015.12.023.
  2. Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352, 1573–1576. https://doi.org/10.1126/science.aaf2654.
  3. Borenstein, J., Howard, A., & Wagner, A. (2017). Pediatric robotics and ethics: The robot is ready to see you now, but should it be trusted? In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0: New challenges in philosophy, law, and society (pp. 127–141). Oxford: Oxford University Press.
  4. Boudette, N. (2017). Tesla's self-driving system cleared in deadly crash. The New York Times. https://www.nytimes.com/2017/01/19/business/tesla-model-s-autopilot-fatal-crash.html?_r=0. Accessed 27 February 2017.
  5. Bratman, M. (1999). Faces of intention: Selected essays on intention and agency. Cambridge: Cambridge University Press.
  6. Dennett, D. (1989). The intentional stance. Cambridge, MA: MIT Press.
  7. ERTRAC. (2017). Automated driving roadmap: Version 7.0. http://www.ertrac.org/uploads/documentsearch/id48/ERTRAC_Automated_Driving_2017.pdf. Accessed 25 November 2017.
  8. Frewer, L. J., Howard, C., & Shepherd, R. (1997). Public concerns in the United Kingdom about general and specific applications of genetic engineering: Risk, benefit, and ethics. Science, Technology, & Human Values, 22, 98–124. https://doi.org/10.1177/016224399702200105.
  9. Gibson, J. J. (1986). The ecological approach to visual perception. New York: Psychology Press.
  10. Gilbert, M. (1990). Walking together: A paradigmatic social phenomenon. Midwest Studies in Philosophy, 15, 1–14.
  11. Goddard, T., Dill, J., & Monsere, C. (2016). Driver attitudes about bicyclists: Negative evaluations of rule-following and predictability. Paper presented at the Transportation Research Board 95th Annual Meeting.
  12. Golson, J. (2017). Driver in fatal Tesla Autopilot crash had seven seconds to take action. The Verge. http://www.theverge.com/2017/1/19/14326604/tesla-autopilot-crash-driver-seven-seconds-inattentive-nhtsa. Accessed 23 March 2017.
  13. Goodall, N. (2014). Machine ethics and automated vehicles. In G. Meyer & S. Beiker (Eds.), Road vehicle automation (Lecture Notes in Mobility, pp. 93–102). Cham: Springer.
  14. Goodrich, M. A., & Schultz, A. C. (2007). Human-robot interaction: A survey. Foundations and Trends in Human-Computer Interaction, 1, 203–275. https://doi.org/10.1561/1100000005.
  15. Gosepath, S. (1992). Aufgeklärtes Eigeninteresse: Eine Theorie theoretischer und praktischer Rationalität [Enlightened self-interest: A theory of theoretical and practical rationality]. Frankfurt/Main: Suhrkamp.
  16. Habermas, J. (1994). Three normative models of democracy. Constellations, 1(1), 1–10.
  17. Hancock, P. (2014). Automation: How much is too much? Ergonomics, 57, 449–454. https://doi.org/10.1080/00140139.2013.816375.
  18. Hansson, S. O. (2013). The ethics of risk: Ethical analysis in an uncertain world. New York: Palgrave Macmillan.
  19. Jennings, N., Sycara, K., & Wooldridge, M. (1998). A roadmap of agent research and development. Autonomous Agents and Multi-Agent Systems, 1, 7–38.
  20. Kirkpatrick, J., Hahn, E., & Haufler, A. (2017). Trust and human-robot interactions. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0: New challenges in philosophy, law, and society (pp. 142–156). Oxford: Oxford University Press.
  21. Kyriakidis, M., Happee, R., & de Winter, J. C. F. (2015). Public opinion on automated driving: Results of an international questionnaire among 5000 respondents. Transportation Research Part F: Traffic Psychology and Behaviour, 32, 127–140. https://doi.org/10.1016/j.trf.2015.04.014.
  22. Kyriakidis, M., de Winter, J. C. F., Stanton, N., Bellet, T., van Arem, B., Brookhuis, K., et al. (2017). A human factors perspective on automated driving. Theoretical Issues in Ergonomics Science, 53, 1–27. https://doi.org/10.1080/1463922X.2017.1293187.
  23. Levin, S., & Woolf, N. (2016, July 1). Tesla driver killed while using Autopilot was watching Harry Potter, witness says. The Guardian. https://www.theguardian.com/technology/2016/jul/01/tesla-driver-killed-autopilot-self-driving-car-harry-potter. Accessed 2 June 2018.
  24. Lin, P. (2013). The ethics of saving lives with autonomous cars is far murkier than you think. Wired. https://www.wired.com/2013/07/the-surprising-ethics-of-robot-cars/. Accessed 16 August 2017.
  25. Lin, P. (2014, August 18). Here’s a terrible idea: Robot cars with adjustable ethics settings. Wired. http://www.wired.com/2014/08/heres-a-terrible-idea-robotcars-with-adjustable-ethics-settings/. Accessed 30 November 2017.
  26. Lin, P. (2016). Is Tesla responsible for the deadly crash on Auto-Pilot? Maybe. Forbes. https://www.forbes.com/sites/patricklin/2016/07/01/is-tesla-responsible-for-the-deadly-crash-on-auto-pilot-maybe/#23ec768b1c07. Accessed 27 February 2017.
  27. Lin, P., Abney, K., & Jenkins, R. (Eds.). (2017). Robot ethics 2.0: New challenges in philosophy, law, and society. Oxford: Oxford University Press.
  28. List, C., & Pettit, P. (2011). Group agency: The possibility, design, and status of corporate agents. Oxford: Oxford University Press.
  29. Loh, J., & Loh, W. (2017). Autonomy and responsibility in hybrid systems: The example of autonomous cars. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0: New challenges in philosophy, law, and society. Oxford, UK: Oxford University Press.
  30. Madigan, R., Louw, T., Dziennus, M., Graindorge, T., Ortega, E., Graindorge, M., et al. (2016). Acceptance of automated road transport systems (ARTS): An adaptation of the UTAUT model. Transportation Research Procedia, 14, 2217–2226. https://doi.org/10.1016/j.trpro.2016.05.237.
  31. Merton, R. (1936). The unanticipated consequences of purposive social action. American Sociological Review, 1(6), 894–904.
  32. Meyer, G., & Beiker, S. (Eds.). (2016). Road vehicle automation 3 (Lecture Notes in Mobility). Cham: Springer.
  33. Misselhorn, C. (2013). Robots as moral agents. In F. Rövekamp & F. Bosse (Eds.), Ethics in science and society: German and Japanese views (pp. 30–42). München: Iudicium.
  34. Misselhorn, C. (2015). Collective agency and cooperation in natural and artificial systems. In C. Misselhorn (Ed.), Collective agency and cooperation in natural and artificial systems: Explanation, implementation and simulation (pp. 3–24). London: Springer.
  35. Neuhäuser, C. (2015). Some sceptical remarks regarding robot responsibility and a way forward. In C. Misselhorn (Ed.), Collective agency and cooperation in natural and artificial systems: Explanation, implementation and simulation (pp. 131–146). London: Springer.
  36. NHTSA. (2014). Fatalities by state and road function class. https://www-fars.nhtsa.dot.gov/States/StatesCrashesAndAllVictims.aspx. Accessed 23 April 2017.
  37. Norcross, A. (1998). Great harms from small benefits grow: How death can be outweighed by headaches. Analysis, 58(2), 152–158.
  38. Norman, D. A. (1988). The psychology of everyday things. New York, NY: Basic Books.
  39. NTMVSA. (1966). National Traffic and Motor Vehicle Safety Act: Public Law 89-563.
  40. Nyholm, S. (2017). Attributing agency to automated systems: Reflections on human–robot collaborations and responsibility-loci. Science and Engineering Ethics.
  41. Nyholm, S., & Smids, J. (2016). The ethics of accident-algorithms for self-driving cars: An applied trolley problem? Ethical Theory and Moral Practice, 19, 1275–1289. https://doi.org/10.1007/s10677-016-9745-2.
  42. Nyholm, S., & Smids, J. (2018). Automated cars meet human drivers: Responsible human-robot coordination and the ethics of mixed traffic. Ethics and Information Technology. https://doi.org/10.1007/s10676-018-9445-9.
  43. Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39, 230–253. https://doi.org/10.1518/001872097778543886.
  44. Parfit, D. (1984). Reasons and persons. Oxford: Oxford University Press.
  45. Parfit, D. (2011). On what matters. Oxford: Oxford University Press.
  46. Peden, M., Scurfield, R., Sleet, D., Mohan, D., Hyder, A., Jarawan, E., et al. (2004). World report on road traffic injury prevention. Geneva: World Health Organization. http://cdrwww.who.int/violence_injury_prevention/publications/road_traffic/world_report/intro.pdf. Accessed 30 November 2017.
  47. Rayna, T., Striukova, L., & Landau, S. (2009). Crossing the chasm or being crossed out: The case of digital audio players. International Journal of Actor-Network Theory and Technological Innovation, 1(3), 36–54.
  48. Raz, J. (2005). The myth of instrumental rationality. Journal of Ethics and Social Philosophy, 1, 1–28. https://doi.org/10.26556/jesp.v1i1.1.
  49. Renn, O. (2008). Risk governance: Coping with uncertainty in a complex world. London: Earthscan.
  50. Robinette, P., Li, W., Allen, R., Howard, A. M., & Wagner, A. R. (2016). Overtrust of robots in emergency evacuation scenarios (pp. 101–108). Piscataway, NJ: IEEE Press.
  51. Roeser, S., Hillerbrand, R., Sandin, P., & Peterson, M. (Eds.). (2012). Handbook of risk theory. Dordrecht: Springer Netherlands.
  52. Rogers, E. M. (1962). Diffusion of innovations. New York: Free Press of Glencoe.
  53. SAE International. (2016). Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. Washington, DC: SAE International.
  54. Santoni de Sio, F. (2016). Ethics and self-driving cars: A white paper on responsible innovation in automated driving systems. Dutch Ministry of Infrastructure and Environment, Rijkswaterstaat.
  55. Scanlon, T. (1998). What we owe to each other. Cambridge, MA: Belknap Press of Harvard University Press.
  56. Scanlon, T. (2007). Structural irrationality. In G. Brennan, R. Goodin, F. Jackson, & M. Smith (Eds.), Common minds: Themes from the philosophy of Philip Pettit (pp. 84–103). Oxford: Clarendon.
  57. Searle, J. (1990). Collective intentions and actions. In P. Cohen, J. Morgan, & M. Pollack (Eds.), Intentions in communication (pp. 401–415). Cambridge, MA: MIT Press.
  58. Siebert, H. (2001). Der Kobra-Effekt: Wie man Irrwege der Wirtschaftspolitik vermeidet [The cobra effect: How to avoid the wrong turns of economic policy]. München: Deutsche Verlags-Anstalt.
  59. Smith, M. (1994). The moral problem (Philosophical Theory). Oxford: Blackwell.
  60. Stilgoe, J. (2017). Tesla crash report blames human error - this is a missed opportunity. The Guardian. https://www.theguardian.com/science/political-science/2017/jan/21/tesla-crash-report-blames-human-error-this-is-a-missed-opportunity. Accessed 23 March 2017.
  61. Taebi, B. (2017). Bridging the gap between social acceptance and ethical acceptability. Risk Analysis, 37, 1817–1827. https://doi.org/10.1111/risa.12734.
  62. Taurek, J. (1977). Should the numbers count? Philosophy and Public Affairs, 6(4), 293–316.
  63. Tennant, C., Howard, S., Franks, B., Bauer, M., Stares, S., Pansegrau, P., et al. (2016). Autonomous vehicles - negotiating a place on the road: A study on how drivers feel about interacting with autonomous vehicles on the road. http://www.lse.ac.uk/website-archive/newsAndMedia/PDF/AVs-negociating-a-place-on-the-road-1110.pdf. Accessed 16 August 2017.
  64. Tesla Motors. (2016). A tragic loss. https://www.tesla.com/blog/tragic-loss. Accessed 23 March 2017.
  65. van de Poel, I., Fahlquist, J. N., Doorn, N., Zwart, S., & Royakkers, L. (2011). The problem of many hands: Climate change as an example. Science and Engineering Ethics, 18(1), 49–67.
  66. Voorhoeve, A. (2014). How should we aggregate competing claims? Ethics, 125(1), 64–87.
  67. Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.
  68. Weber, M. (1978). Economy and society. Berkeley, CA: University of California Press.
  69. de Winter, J. C. F., Happee, R., Martens, M. H., & Stanton, N. A. (2014). Effects of adaptive cruise control and highly automated driving on workload and situation awareness: A review of the empirical evidence. Transportation Research Part F: Traffic Psychology and Behaviour, 27, 196–217. https://doi.org/10.1016/j.trf.2014.06.016.

Copyright information

© Springer Nature B.V. 2018

Authors and Affiliations

  1. Department of Philosophy, University of Stuttgart, Stuttgart, Germany
