Explainable Artificial Intelligence (XAI): How the Visualization of AI Predictions Affects User Cognitive Load and Confidence

  • Conference paper
  • Published in: Information Systems and Neuroscience (NeuroIS 2021)

Part of the book series: Lecture Notes in Information Systems and Organisation (LNISO, volume 52)

Abstract

Explainable Artificial Intelligence (XAI) aims to bring transparency to AI systems by translating, simplifying, and visualizing their decisions. While society remains skeptical of AI systems, studies show that transparent and explainable AI systems improve users' confidence in AI. We present preliminary results from a study designed to assess two presentation-order methods and three AI decision visualization attribution models, measuring each visualization's impact on users' cognitive load and confidence in the system while participants completed a visual decision-making task. The results show that both presentation order and morphological clarity affect cognitive load. Furthermore, a negative correlation was found between cognitive load and confidence in the AI system. Our findings have implications for the design of future AI systems, which may facilitate better collaboration between humans and AI.


Notes

  1. The Xception (extreme inception) [30] algorithm, which comes with pre-trained weights on the ImageNet dataset, was used to classify the images; a minimal usage sketch is shown below.
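    As an illustration of this classification step, here is a minimal sketch of how an image could be labeled with the pre-trained Xception model in Keras. The file name stimulus.jpg and the top-3 output are illustrative assumptions, not details taken from the paper.

    ```python
    import numpy as np
    from tensorflow.keras.applications.xception import Xception, preprocess_input, decode_predictions
    from tensorflow.keras.preprocessing import image

    # Load Xception with weights pre-trained on the ImageNet dataset.
    model = Xception(weights="imagenet")

    # Xception expects 299x299 RGB inputs; the file name is hypothetical.
    img = image.load_img("stimulus.jpg", target_size=(299, 299))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

    # Predict and map the output probabilities back to ImageNet class labels.
    preds = model.predict(x)
    print(decode_predictions(preds, top=3)[0])
    ```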

References

  1. Gilpin, L.H., Bau, D., Yuan, B.Z., Bajwa, A., Specter, M., Kagal, L.: Explaining explanations: an overview of interpretability of machine learning. In: 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pp. 80–89, October 2018. https://doi.org/10.1109/DSAA.2018.00018

  2. Abdul, A., Vermeulen, J., Wang, D., Lim, B.Y., Kankanhalli, M.: Trends and trajectories for explainable, accountable and intelligible systems. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI 2018), pp. 1–18 (2018). https://doi.org/10.1145/3173574.3174156

  3. Vessey, I.: Cognitive fit: a theory-based analysis of the graphs versus tables literature. Decis. Sci. 22(2), 219–240 (1991)

  4. Chen, C.-W.: Five-star or thumbs-up? The influence of rating system types on users’ perceptions of information quality, cognitive effort, enjoyment and continuance intention. Internet Res. (2017)

  5. Bizarro, P.A.: Effect of different database structure representations, query languages, and task characteristics on information retrieval. J. Manag. Inf. Decis. Sci. 18(1) (2015)

  6. Adipat, B., Zhang, D., Zhou, L.: The effects of tree-view based presentation adaptation on mobile web browsing. MIS Q. 35(1), 99 (2011). https://doi.org/10.2307/23043491

  7. Brunelle, E.: The moderating role of cognitive fit in consumer channel preference. J. Electron. Commer. Res. 10(3) (2009)

  8. Goodhue, D.L., Thompson, R.L.: Task-technology fit and individual performance. MIS Q. 19(2), 213–236 (1995)

  9. Vessey, I., Galletta, D.: Cognitive fit: an empirical study of information acquisition. Inf. Syst. Res. 2(1), 63–84 (1991)

  10. Nuamah, J.K., Seong, Y., Jiang, S., Park, E., Mountjoy, D.: Evaluating effectiveness of information visualizations using cognitive fit theory: a neuroergonomics approach. Appl. Ergon. 88, 103173 (2020). https://doi.org/10.1016/j.apergo.2020.103173

  11. Wickens, C.D.: Multiple resources and mental workload. Hum. Factors 50(3), 449–455 (2008). https://doi.org/10.1518/001872008X288394

  12. Palinko, O., Kun, A.L., Shyrokov, A., Heeman, P.: Estimating cognitive load using remote eye tracking in a driving simulator. In: Proceedings of the Symposium on Eye-Tracking Research & Applications (ETRA), pp. 141–144 (2010). https://doi.org/10.1145/1743666.1743701

  13. Dennis, A.R., Carte, T.A.: Using geographical information systems for decision making: extending cognitive fit theory to map-based presentations. Inf. Syst. Res. 9(2), 194–203 (1998). https://doi.org/10.1287/isre.9.2.194

  14. Sundararajan, M., Xu, S., Taly, A., Sayres, R., Najmi, A.: Exploring principled visualizations for deep network attributions. In: IUI Workshops, vol. 4 (2019)

  15. Bigras, É., Léger, P.-M., Sénécal, S.: Recommendation agent adoption: how recommendation presentation influences employees’ perceptions, behaviors, and decision quality. Appl. Sci. 9(20) (2019). https://doi.org/10.3390/app9204244

  16. Glikson, E., Woolley, A.W.: Human trust in artificial intelligence: review of empirical research. Acad. Manag. Ann. 14(2), 627–660 (2020). https://doi.org/10.5465/annals.2018.0057

  17. Cofta, P.: Designing for trust. In: Handbook of Research on Socio-Technical Design and Social Networking Systems, pp. 388–401. IGI Global (2009)

  18. Eiband, M., Buschek, D., Kremer, A., Hussmann, H.: The impact of placebic explanations on trust in intelligent systems. In: Conference on Human Factors in Computing Systems – Proceedings (2019). https://doi.org/10.1145/3290607.3312787

  19. Lee, J.D., See, K.A.: Trust in automation: designing for appropriate reliance. Hum. Factors J. Hum. Factors Ergon. Soc. 46(1), 50–80 (2004). https://doi.org/10.1518/hfes.46.1.50_30392

  20. Meske, C., Bunde, E.: Transparency and trust in human-AI-interaction: the role of model-agnostic explanations in computer vision-based decision support. In: Degen, H., Reinerman-Jones, L. (eds.) HCII 2020. LNCS, vol. 12217, pp. 54–69. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-50334-5_4

  21. DeCamp, M., Tilburt, J.C.: Why we cannot trust artificial intelligence in medicine. Lancet Digit. Health 1(8), e390 (2019)

  22. Wanner, J., Herm, L.-V., Heinrich, K., Janiesch, C., Zschech, P.: White, grey, black: effects of XAI augmentation on the confidence in AI-based decision support systems. In: Proceedings of Forty-First International Conference on Information Systems, pp. 0–9 (2020)

  23. Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: Proceedings of the 34th International Conference on Machine Learning (ICML 2017), pp. 5109–5118 (2017). http://arxiv.org/abs/1703.01365

  24. Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. Int. J. Comput. Vision 128(2), 336–359 (2019). https://doi.org/10.1007/s11263-019-01228-7

  25. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255, June 2009. https://doi.org/10.1109/CVPRW.2009.5206848

  26. Snodgrass, J.G., Vanderwart, M.: A standardized set of 260 pictures: norms for name agreement, image agreement, familiarity, and visual complexity. J. Exp. Psychol. Hum. Learn. Mem. 6(2), 174–215 (1980). https://doi.org/10.1037/0278-7393.6.2.174

  27. Beatty, J.: Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychol. Bull. 91(2), 276–292 (1982). https://doi.org/10.1037/0033-2909.91.2.276

  28. Attard-Johnson, J., Ó Ciardha, C., Bindemann, M.: Comparing methods for the analysis of pupillary response. Behav. Res. Methods 51(1), 83–95 (2018). https://doi.org/10.3758/s13428-018-1108-6

  29. Tomsett, R., et al.: Rapid trust calibration through interpretable and uncertainty-aware AI. Patterns 1(4), 100049 (2020). https://doi.org/10.1016/j.patter.2020.100049

  30. Chollet, F.: Xception: deep learning with depthwise separable convolutions. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1800–1807 (2017). https://doi.org/10.1109/CVPR.2017.195

Author information

Correspondence to Antoine Hudon.

Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Hudon, A., Demazure, T., Karran, A., Léger, P.-M., Sénécal, S. (2021). Explainable Artificial Intelligence (XAI): How the Visualization of AI Predictions Affects User Cognitive Load and Confidence. In: Davis, F.D., Riedl, R., vom Brocke, J., Léger, P.-M., Randolph, A.B., Müller-Putz, G. (eds) Information Systems and Neuroscience. NeuroIS 2021. Lecture Notes in Information Systems and Organisation, vol 52. Springer, Cham. https://doi.org/10.1007/978-3-030-88900-5_27

  • DOI: https://doi.org/10.1007/978-3-030-88900-5_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88899-2

  • Online ISBN: 978-3-030-88900-5

  • eBook Packages: Computer Science, Computer Science (R0)
