
Using the Double Transparency of Autonomous Vehicles to Increase Fairness and Social Welfare

Research Article · Customer Needs and Solutions

Abstract

Fully autonomous vehicles (AVs) create double transparency regarding human driving decisions. Decision rules that are opaque in the human mind become transparent in AVs and, in turn, can be made transparent to third parties. Because AVs can be programmed to follow regulations 100% of the time, this double transparency creates an unprecedented opportunity to regulate driving decision rules to eliminate unreasonable selfishness and increase fairness and social welfare. In this experimental ethics study, we conducted an incentive-aligned online experiment to examine humans’ willingness to sacrifice other people’s lives to protect their own in five different accident scenarios and to investigate the potential for AV regulation to curb unreasonable selfishness, thereby increasing fairness and social welfare. Our results reveal the need to regulate the rules governing AV driving decisions; yet a full transparency policy for decision algorithms may not necessarily produce the desired social effects. Regulations should therefore be tailored to different scenarios.

Fig. 1
Fig. 2



Acknowledgments

We gratefully acknowledge support from the Institute for Sustainable Innovation and Growth, School of Management, Fudan University.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Jie Xu.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Informed Consent

Informed consent was obtained from all individual participants included in the study via electronic signatures in the survey, and the possible consequences of the studies were explained to them.

Electronic supplementary material

ESM 1

(XLSX 179 kb)

Appendices

Appendix 1. Survey

1.1 Neither Party at Fault

A driverless car with one passenger drives at the speed limit down a main road. Due to a traffic light failure, the driverless car will hit N pedestrians. The AV has two options: (a) stay on the path and move forward, in which case the car will hit and cause serious harm to the pedestrians, but the passenger in the AV will not be harmed, or (b) swerve off to the other side of the road, hit a barrier, and roll over, in which case the passenger will suffer serious harm, but the pedestrians will remain unharmed.

1.2 AV at Fault

A driverless car with one passenger drives at the speed limit down a main road. Due to a brake failure, the driverless car will run into N pedestrians. The AV has two options: (a) stay on the path and move forward, in which case the car will hit and cause serious harm to the pedestrians, but the passenger in the AV will not be harmed, or (b) swerve off to the other side of the road, hit a barrier, and roll over, in which case the passenger will suffer serious harm, but the pedestrians will remain unharmed.

1.3 Other Party at Fault

A driverless car with one passenger drives at the speed limit down a main road. Suddenly, N pedestrians ignore the red light, and the driverless car will hit them. The AV has two options: (a) stay on the path and move forward, in which case the car will hit and cause serious harm to the pedestrians, but the passenger in the AV will not be harmed, or (b) swerve off to the other side of the road, hit a barrier, and roll over, in which case the passenger will suffer serious harm, but the pedestrians will remain unharmed.

1.4 Both Parties at Fault

A driverless car with one passenger drives at the speed limit down a main road. The driverless car’s brakes fail, and N pedestrians suddenly ignore a red light. The driverless car will run into N pedestrians. The AV has two options: (a) stay on the path and move forward, in which case the car will hit and cause serious harm to the pedestrians, but the passenger in the AV will not be harmed, or (b) swerve off to the other side of the road, hit a barrier, and roll over, in which case the passenger will suffer serious harm, but the pedestrians will remain unharmed.

1.5 “Good Samaritan”

A driverless car with a passenger sees that another vehicle ahead is accelerating towards N pedestrians on the sidewalk. That vehicle will rush into N pedestrians, causing serious harm. The AV has two options: (a) speed up and hit the vehicle to incapacitate it and avoid pedestrian casualties, seriously harming its passenger in the process, or (b) not intervene and let the accident happen to prevent harm to its passenger.

1.6 Measurement Items

Each participant was presented with all scenarios, one at a time in random order, and asked to state his or her willingness to sacrifice (WTS), usage and purchase intentions if the stated WTS could be set, and perceptions of fairness under policies based on full transparency and non-transparency. WTS is a quantitative measure defined as the maximum number of humans a person would harm in order to save himself or herself. Participants were asked to set their WTS between 0 and 99, with 99 representing an infinitely large WTS.

1.1 You are the sole passenger in an AV. Under the full transparency policy, how would you set WTS in this scenario?

1.2 You are the sole passenger in an AV. Under the non-transparency policy, how would you set WTS in this scenario?

2.1 Under the full transparency policy, would you ride in a driverless car with the same WTS setting as yours for this scenario, assuming you are satisfied in all other aspects of the driverless car? (Please rate using a scale ranging from 1 to 9: 1—definitely would not; 9—definitely would).

2.2 Under the non-transparency policy, would you ride in a driverless car with the same WTS setting as yours for this scenario, assuming you are satisfied in all other aspects of the driverless car? (Please rate using a scale ranging from 1 to 9: 1—definitely would not; 9—definitely would).

3.1 Under the full transparency policy, would you buy a driverless car with the same WTS setting as yours for this scenario, assuming you are satisfied in all other aspects of the driverless car? (Please rate using a scale ranging from 1 to 9: 1—definitely would not; 9—definitely would).

3.2 Under the non-transparency policy, would you buy a driverless car with the same WTS setting as yours for this scenario, assuming you are satisfied in all other aspects of the driverless car? (Please rate using a scale ranging from 1 to 9: 1—definitely would not; 9—definitely would).

4.1 In this scenario, please rate the fairness of the full transparency policy. (Please rate using a scale ranging from 1 to 9: 1—very unfair; 9—very fair).

4.2 In this scenario, please rate the fairness of the non-transparency policy. (Please rate using a scale ranging from 1 to 9: 1—very unfair; 9—very fair).
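Because 99 stands in for an infinitely large WTS, simple averages over raw responses mix finite answers with the ceiling group. A minimal sketch of how such ceiling-coded responses could be summarized, reporting the ceiling share separately (the helper name and output fields are our own illustration, not from the paper):

```python
import numpy as np

def summarize_wts(responses, ceiling=99):
    """Summarize WTS responses; `ceiling` codes an 'infinitely large' WTS."""
    r = np.asarray(responses, dtype=float)
    below = r[r < ceiling]  # participants who stated a finite WTS
    return {
        "share_at_ceiling": float((r >= ceiling).mean()),
        "mean_below_ceiling": float(below.mean()),
        "median_below_ceiling": float(np.median(below)),
    }

# e.g. two finite answers and two ceiling answers
print(summarize_wts([0, 9, 99, 99]))
```

Reporting the ceiling share separately keeps the “unboundedly selfish” group from distorting the central tendency of the finite responses.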

1.7 Policy Types

Full transparency policy: The government requires every driverless car to disclose its “Willingness to Sacrifice” (WTS) to all road users.

Non-transparency policy: Driverless cars are not required to disclose their “Willingness to Sacrifice” (WTS).

Appendix 2. Data and Regression

1.1 “Good Samaritan” Scenario

In the incentive-aligned group, 60.86% of participants (responding as passengers) allowed their AV to act as a “Good Samaritan” to save others (Fig. 3). For this scenario, the mean WTS is 70.00 (the intercept of the right-censored regression) and the median is 49; among those willing to consider such an intervention (WTS < 99), the mean is 24.31 and the median is 9.

Fig. 3 The distribution of willingness to sacrifice (WTS) in the “Good Samaritan” scenario (incentive-aligned group)
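The mean WTS of 70.00 above is recovered as the intercept of a right-censored regression: responses at 99 are treated as “at least 99” rather than exact. A minimal sketch of such an intercept-only, Tobit-style maximum-likelihood estimator, assuming a normally distributed latent WTS censored at 99 (an illustrative assumption, not necessarily the authors’ exact specification):

```python
import numpy as np
from scipy import optimize, stats

def censored_mean(y, cap=99.0):
    """Latent mean of right-censored data via intercept-only Tobit-style MLE.

    Observations at `cap` contribute the probability of exceeding `cap`
    rather than a density term.
    """
    y = np.asarray(y, dtype=float)
    censored = y >= cap

    def negloglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)  # keep sigma positive
        ll = stats.norm.logpdf(y[~censored], mu, sigma).sum()
        ll += censored.sum() * stats.norm.logsf(cap, mu, sigma)
        return -ll

    start = [y.mean(), np.log(y.std() + 1e-6)]
    res = optimize.minimize(negloglik, start, method="Nelder-Mead")
    return res.x[0]  # estimated latent mean

# Simulated check: latent WTS ~ N(70, 60), responses capped at 99
rng = np.random.default_rng(0)
latent = rng.normal(70, 60, size=5000)
observed = np.minimum(latent, 99)
print(round(censored_mean(observed), 1))
```

On such simulated data the estimator recovers the latent mean, whereas the naive sample mean of the capped responses is biased downward by the censoring.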

Full transparency also increases perceptions of fairness when AVs are uninvolved (p < 0.001). The results for the effect of transparency on social welfare are consistent with the other scenarios (Fig. 4), except that fewer participants fall into segment 1 in the “Good Samaritan” scenario (p = 0.037).

Fig. 4 Changes in usage intentions, purchase intentions, and perceptions of fairness under the full transparency policy in the “Good Samaritan” scenario. Boxes show the 95% CI of the mean

With regard to the “Good Samaritan” scenario, our findings reveal an important and challenging regulatory quandary: Should society allow such preferences to be programmed into an AV? While it may not be reasonable to set a ceiling in this scenario since the AV is not involved and should not be forced to intervene, should society set a floor value for WTS to prevent overzealous actions by AVs? The decision is a double-edged sword for society. Allowing such actions would decrease total fatalities from a societal perspective, but would cross an important line regarding the roles intelligent robots such as AVs should play in human society. Allowing AVs to act as “Good Samaritans” through the use of deadly force sets a precedent with far-reaching ramifications for future robot-human relationships.

1.2 Comparing the Incentive Aligned Group and Control Group

Respondents in the control group penalized the other party less when the other party was at fault (17.69 vs. 20.02 for the incentive-aligned group) (Table 1 and Fig. 5). The average WTS for the control group increased by 50.69% (vs. a 59.78% increase for the incentive-aligned group) compared with the neither-party-at-fault scenario (Table 1 and Fig. 5). The incentive-aligned and control groups also behave slightly differently under the full transparency policy (Figs. 2 and 6).

Fig. 5 The distribution of willingness to sacrifice (WTS) in five scenarios (control group)

Fig. 6 Changes in usage intentions, purchase intentions, and perceptions of fairness under the full transparency policy for the control group. Boxes show the 95% CI of the mean


Cite this article

Xu, J., Ding, M. Using the Double Transparency of Autonomous Vehicles to Increase Fairness and Social Welfare. Cust. Need. and Solut. 6, 26–35 (2019). https://doi.org/10.1007/s40547-019-00093-2
