
Calibrating Human-AI Collaboration: Impact of Risk, Ambiguity and Transparency on Algorithmic Bias

Part of the Lecture Notes in Computer Science book series (LNISA, volume 12279)


Transparent Machine Learning (ML) is often argued to increase trust in the predictions of algorithms. However, the growth of new interpretability approaches has not been accompanied by a corresponding growth in studies investigating how the interaction between humans and Artificial Intelligence (AI) systems benefits from transparency. The right level of transparency can increase trust in an AI system, while inappropriate levels of transparency can lead to algorithmic bias. In this study we demonstrate that, depending on certain personality traits, humans exhibit different susceptibilities to algorithmic bias. Our main finding is that susceptibility to algorithmic bias depends significantly on annotators' affinity to risk. These findings help shed light on the previously underrepresented role of human personality in human-AI interaction. We believe that taking these aspects into account when building transparent AI systems can help ensure more responsible use of AI systems.


  • Transparent AI
  • Machine learning
  • HCI
  • Risk affinity

P. Schmidt and F. Biessmann contributed equally.

  • DOI: 10.1007/978-3-030-57321-8_24
  • Chapter length: 19 pages







Corresponding author

Correspondence to Philipp Schmidt.


Copyright information

© 2020 IFIP International Federation for Information Processing

About this paper


Cite this paper

Schmidt, P., Biessmann, F. (2020). Calibrating Human-AI Collaboration: Impact of Risk, Ambiguity and Transparency on Algorithmic Bias. In: Holzinger, A., Kieseberg, P., Tjoa, A., Weippl, E. (eds) Machine Learning and Knowledge Extraction. CD-MAKE 2020. Lecture Notes in Computer Science, vol 12279. Springer, Cham.



  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-57320-1

  • Online ISBN: 978-3-030-57321-8

  • eBook Packages: Computer Science, Computer Science (R0)