
Disparate Impact Diminishes Consumer Trust Even for Advantaged Users

Part of the Lecture Notes in Computer Science book series (LNISA, volume 12684)


Abstract

Systems that aim to aid consumers in their decision-making (e.g., by implementing persuasive techniques) are more likely to be effective when consumers trust them. However, recent research has demonstrated that the machine learning algorithms underlying such technology can act unfairly toward specific groups (e.g., by making more favorable predictions for men than for women). An undesired disparate impact resulting from this kind of algorithmic unfairness could diminish consumer trust and thereby undermine the purpose of the system. We studied this effect in a between-subjects user study investigating how (gender-related) disparate impact affected consumer trust in an app designed to improve consumers’ financial decision-making. Our results show that disparate impact decreased consumers’ trust in the system and made them less likely to use it. Moreover, trust decreased to the same degree for advantaged and disadvantaged users, even though each group recognized its respective level of personal benefit. Our findings highlight the importance of fairness in consumer-oriented artificial intelligence systems.
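For readers unfamiliar with the term, disparate impact is commonly quantified as the ratio of favorable-outcome rates between an unprivileged and a privileged group; ratios below 0.8 are often flagged under the "four-fifths rule". A minimal sketch with made-up predictions (the function name and data are illustrative, not taken from the paper):

```python
import numpy as np

def disparate_impact_ratio(y_pred, protected):
    """Ratio of favorable-outcome rates: unprivileged group (1) over privileged group (0)."""
    rate_priv = y_pred[protected == 0].mean()
    rate_unpriv = y_pred[protected == 1].mean()
    return rate_unpriv / rate_priv

# Toy binary predictions: the privileged group receives favorable
# outcomes at rate 0.8, the unprivileged group at rate 0.4.
y_pred = np.array([1, 1, 1, 1, 0,  1, 1, 0, 0, 0])
protected = np.array([0, 0, 0, 0, 0,  1, 1, 1, 1, 1])

ratio = disparate_impact_ratio(y_pred, protected)
print(ratio)  # 0.5, well below the 0.8 threshold
```

A ratio of 1.0 would indicate parity between the groups; the 0.5 here corresponds to the kind of gender-related disadvantage the study manipulated.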


Keywords

  • Disparate impact
  • Algorithmic fairness
  • Consumer trust

T. Draws, Z. Szlávik and B. Timmermans—Current affiliation.

  • DOI: 10.1007/978-3-030-79460-6_11
  • Chapter length: 15 pages
Fig. 1.
Fig. 2.


  1. Since conducting this study in June 2019, Figure Eight has been renamed to Appen. More information can be found at

  2. The null model in this procedure consisted of only an intercept.
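The role of this intercept-only null model can be sketched numerically: it is compared against a model that adds the experimental factor, and the resulting evidence ratio (Bayes factor) can be approximated from the BIC difference between the two fits. The data below are simulated and the BIC-based approximation is illustrative only; the paper's actual analysis was a Bayesian ANOVA in JASP.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
group = np.repeat([0, 1], n // 2)                      # e.g. two experimental conditions
trust = 5.0 - 1.2 * group + rng.normal(0.0, 1.0, n)   # simulated trust ratings

def bic(rss, n, k):
    """BIC for a Gaussian model with k mean parameters and residual sum of squares rss."""
    return n * np.log(rss / n) + k * np.log(n)

# Null model: intercept only, i.e. a single grand mean.
rss_null = np.sum((trust - trust.mean()) ** 2)

# Alternative model: one mean per group.
fitted = np.where(group == 0, trust[group == 0].mean(), trust[group == 1].mean())
rss_alt = np.sum((trust - fitted) ** 2)

# exp(delta_BIC / 2) approximates the Bayes factor for the group effect.
bf10 = np.exp((bic(rss_null, n, 1) - bic(rss_alt, n, 2)) / 2)
print(bf10 > 1)  # True: evidence favors the (simulated) group effect
```

When the group effect is removed from the simulation, the same comparison favors the intercept-only null instead, which is exactly the role the null model plays in this procedure.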




Acknowledgements

This research has been supported by the Think Forward Initiative (a partnership between ING Bank, Deloitte, Dell Technologies, Amazon Web Services, IBM, and the Center for Economic Policy Research – CEPR). The views and opinions expressed in this paper are solely those of the authors and do not necessarily reflect the official policy or position of the Think Forward Initiative or any of its partners.

Author information



Corresponding author

Correspondence to Tim Draws.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Draws, T., Szlávik, Z., Timmermans, B., Tintarev, N., Varshney, K.R., Hind, M. (2021). Disparate Impact Diminishes Consumer Trust Even for Advantaged Users. In: Ali, R., Lugrin, B., Charles, F. (eds) Persuasive Technology. PERSUASIVE 2021. Lecture Notes in Computer Science, vol. 12684. Springer, Cham.


  • DOI: 10.1007/978-3-030-79460-6_11

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-79459-0

  • Online ISBN: 978-3-030-79460-6

  • eBook Packages: Computer Science, Computer Science (R0)