Disparate Impact Diminishes Consumer Trust Even for Advantaged Users

Conference paper in Persuasive Technology (PERSUASIVE 2021)

Abstract

Systems aiming to aid consumers in their decision-making (e.g., by implementing persuasive techniques) are more likely to be effective when consumers trust them. However, recent research has demonstrated that the machine learning algorithms that often underlie such technology can act unfairly towards specific groups (e.g., by making more favorable predictions for men than for women). An undesired disparate impact resulting from this kind of algorithmic unfairness could diminish consumer trust and thereby undermine the purpose of the system. We studied this effect in a between-subjects user study investigating how (gender-related) disparate impact affected consumer trust in an app designed to improve consumers’ financial decision-making. Our results show that disparate impact decreased consumers’ trust in the system and made them less likely to use it. Moreover, trust was affected to the same degree in both consumer groups (i.e., advantaged and disadvantaged users), even though both groups recognized their respective levels of personal benefit. Our findings highlight the importance of fairness in consumer-oriented artificial intelligence systems.
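As background on the fairness notion in the title: disparate impact is commonly quantified as the ratio of favorable-outcome rates between a disadvantaged and an advantaged group, with ratios below 0.8 often flagged under the "four-fifths rule". The sketch below illustrates this metric on invented data; it is not the experimental manipulation used in the paper, and all names (disparate_impact_ratio, the gender and favorable columns) are hypothetical.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame,
                           group_col: str,
                           outcome_col: str,
                           disadvantaged: str,
                           advantaged: str) -> float:
    """Ratio of favorable-outcome rates:
    P(Y=1 | disadvantaged group) / P(Y=1 | advantaged group)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates[disadvantaged] / rates[advantaged]

# Hypothetical predictions from a financial-advice model.
preds = pd.DataFrame({
    "gender":    ["f", "f", "f", "f", "m", "m", "m", "m"],
    "favorable": [1,   0,   0,   0,   1,   1,   1,   0],
})

ratio = disparate_impact_ratio(preds, "gender", "favorable", "f", "m")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75 = 0.33 here
```

A ratio of 1.0 would indicate parity between the groups; the value of 0.33 in this toy example would count as a strong disparate impact against the disadvantaged group.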


Notes

  1. Since conducting this study in June 2019, Figure Eight has been renamed to Appen. More information can be found at https://appen.com.

  2. The null model in this procedure consisted of only an intercept (see the sketch below).
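To make note 2 concrete: in model comparison of this kind, the intercept-only null model is pitted against a model that adds the experimental factor. The paper's analysis was run in JASP; the sketch below is a minimal, hypothetical Python stand-in that uses the BIC approximation to the Bayes factor, with invented data and effect sizes.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical trust ratings for two between-subjects conditions.
data = pd.DataFrame({
    "condition": np.repeat(["fair", "disparate_impact"], 50),
    "trust": np.concatenate([rng.normal(5.0, 1.0, 50),
                             rng.normal(4.2, 1.0, 50)]),
})

# Null model: intercept only (as in the note).
# Alternative model: intercept plus the condition factor.
null_model = smf.ols("trust ~ 1", data=data).fit()
alt_model = smf.ols("trust ~ condition", data=data).fit()

# BIC approximation to the Bayes factor (Wagenmakers, 2007):
# BF10 ≈ exp((BIC_null - BIC_alternative) / 2).
bf10 = np.exp((null_model.bic - alt_model.bic) / 2)
print(f"BF10 ≈ {bf10:.2f}")  # evidence for the condition effect over the null
```

A BF10 well above 1 would favor the model containing the experimental factor over the intercept-only null; values near or below 1 would favor the null.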

Acknowledgements

This research has been supported by the Think Forward Initiative (a partnership between ING Bank, Deloitte, Dell Technologies, Amazon Web Services, IBM, and the Center for Economic Policy Research – CEPR). The views and opinions expressed in this paper are solely those of the authors and do not necessarily reflect the official policy or position of the Think Forward Initiative or any of its partners.

Author information

Corresponding author

Correspondence to Tim Draws.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Draws, T., Szlávik, Z., Timmermans, B., Tintarev, N., Varshney, K.R., Hind, M. (2021). Disparate Impact Diminishes Consumer Trust Even for Advantaged Users. In: Ali, R., Lugrin, B., Charles, F. (eds) Persuasive Technology. PERSUASIVE 2021. Lecture Notes in Computer Science, vol. 12684. Springer, Cham. https://doi.org/10.1007/978-3-030-79460-6_11

  • DOI: https://doi.org/10.1007/978-3-030-79460-6_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-79459-0

  • Online ISBN: 978-3-030-79460-6

  • eBook Packages: Computer Science, Computer Science (R0)
