
“Garbage In, Garbage Out”: Mitigating Human Biases in Data Entry by Means of Artificial Intelligence

  • Conference paper
  • First Online:
Human-Computer Interaction – INTERACT 2023 (INTERACT 2023)

Abstract

Current HCI research often focuses on mitigating algorithmic biases. While such algorithmic fairness during model training is worthwhile, we see fit to mitigate human cognitive biases earlier, namely during data entry. We developed a conversational agent with voice-based data entry and visualization to support financial consultations, which are human-human settings with information asymmetries. In a pre-study, we reveal data-entry biases in advisors through a quantitative analysis of 5 advisors consulting a total of 15 clients. Our main study evaluates the conversational agent with 12 advisors and 24 clients. A thematic analysis of interviews shows that advisors introduce biases by “feeling” and “forgetting” data. Additionally, the conversational agent makes financial consultations more transparent and automates data entry. These findings may transfer to various other dyads, such as doctor visits. Finally, we stress that AI not only risks becoming a mirror of human biases but also has the potential to intervene in the early stages of data entry.


Notes

  1. Originally, six advisors participated in the pre-study. Owing to personal time constraints, one advisor could complete only two sessions; we consequently omitted their data from further analysis.



Author information


Corresponding author

Correspondence to Sven Eckhardt.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Eckhardt, S. et al. (2023). “Garbage In, Garbage Out”: Mitigating Human Biases in Data Entry by Means of Artificial Intelligence. In: Abdelnour Nocera, J., Kristín Lárusdóttir, M., Petrie, H., Piccinno, A., Winckler, M. (eds) Human-Computer Interaction – INTERACT 2023. INTERACT 2023. Lecture Notes in Computer Science, vol 14144. Springer, Cham. https://doi.org/10.1007/978-3-031-42286-7_2


  • DOI: https://doi.org/10.1007/978-3-031-42286-7_2


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-42285-0

  • Online ISBN: 978-3-031-42286-7

  • eBook Packages: Computer Science (R0)
