Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system

  • Original Paper
  • Published in: Ethics and Information Technology

Abstract

In light of the widespread use of algorithmic (intelligent) systems across numerous domains, there is increasing awareness of the need to explain their underlying decision-making processes and resulting outcomes. Since these systems are often regarded as black boxes, adding explanations to their outcomes may contribute to the perception of their transparency and, as a result, increase users’ trust and their perception of fairness towards the system, regardless of its actual fairness, which can be measured using various fairness tests and measures. Different explanation styles may have different impacts on users’ perception of fairness towards the system and on their understanding of the system’s outcome. Hence, there is a need to understand how various explanation styles may affect non-expert users’ fairness perceptions and their understanding of the system’s outcome. In this study we aimed to fulfill this need. We performed a between-subjects user study to examine the effect of various explanation styles on users’ fairness perception and understanding of the outcome. The experiment examined four known styles of textual explanations (case-based, demographic-based, input influence-based and sensitivity-based) along with a new style (certification-based) that reflects the results of an auditing process of the system. The results suggest that providing some kind of explanation contributes to users’ understanding of the outcome and that some explanation styles are more beneficial than others. Moreover, while explanations provided by the system are important and can indeed enhance users’ perception of fairness, that perception depends mainly on the outcome of the system. The results may shed light on one of the main problems in the explainability of algorithmic systems: choosing the explanation that best promotes users’ fairness perception towards a particular system, with respect to the system’s outcome. The contribution of this study lies in the new and realistic case study that was examined, in the creation and evaluation of a new explanation style that can serve as a link between the actual (computational) fairness of the system and users’ fairness perception, and in the need to analyze and evaluate explanations while taking the system’s outcome into account.

Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on request.

Code availability

Not applicable.

Acknowledgements

Partial financial support was received from the Cyprus Center for Algorithmic Transparency (CyCAT), which has received funding from the European Union’s Horizon 2020 Research and Innovation Program under Grant Agreement No. 810105 (Call: H2020-WIDESPREAD-05-2017-Twinning), from a scholarship program for doctoral students in high-tech professions at the University of Haifa, Israel, and from the Data Science Research Center (DSRC) at the University of Haifa, Israel.

Author information

Contributions

All authors contributed equally to the study conception and design. Material preparation, data collection and analysis were performed by AS-T. The first draft of the manuscript was written by AS-T and all authors commented on previous versions of the manuscript. All authors read, reviewed and commented on interim versions of the paper until the final manuscript was submitted.

Corresponding author

Correspondence to Avital Shulner-Tal.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest that are relevant to the content of this article.

Ethical approval

The experiments conducted in this study were approved by the Committee for Ethical Research and the Protection of Human Participants, University of Haifa, Israel (Approval 350/19).

Consent to participate

The following consent to take part in academic research was presented to the participants: “This research is conducted by researchers from the Departments of Information Systems and Economics at the University of Haifa, and deals with the transparency and fairness of algorithmic systems. We request your participation in this online study. It should be emphasized that the answers to the questionnaires will be kept confidential and used only for research purposes. No personal or identifying information is requested or kept. Your participation in this study is voluntary. If you decide at any time that you do not wish to participate, you may do so without penalty. This research is approved by the Committee for Ethical Research and the Protection of Human Participants, University of Haifa (350/19). Thank you in advance for your cooperation.”

Consent for publication

This work has not been published before; it is not under consideration for publication anywhere else; its publication has been approved by all co-authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1

System’s description

“CANRA.Inc” is an intelligent decision support system (DSS) that uses AI and ML techniques to predict the likelihood that a candidate will succeed in a new job. The system recommends to recruiters and other HR personnel whether or not to hire a candidate.

The system receives the candidate’s CV, the rating of the university attended, the candidate’s class rank at the university (the student’s performance compared to other students in her/his graduating class), relevant experience, personality test results, recommendation letters from former employers, and a brief summary of an internal interview with the company’s interviewer.

The system then produces a recommendation score (Strongly not recommend / Not recommend / Neutral / Recommend / Strongly recommend) for hiring the candidate, as well as an explanation letter explaining the system’s output.
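
To make the description concrete, the following minimal sketch (in Python) illustrates how a system of this kind might map weighted candidate feature scores to the five-level recommendation described above. The feature names, weights and thresholds are illustrative assumptions only and are not taken from the system used in the study.

```python
# Minimal sketch (not the authors' system): mapping a candidate's feature
# scores to the five-level recommendation described above.
# Feature names, weights and thresholds are illustrative assumptions only.

LABELS = ["Strongly not recommend", "Not recommend", "Neutral",
          "Recommend", "Strongly recommend"]

# Hypothetical weights for the inputs listed in the system description.
WEIGHTS = {
    "cv": 0.15,
    "university_rating": 0.10,
    "class_rank": 0.15,
    "experience": 0.25,
    "personality_test": 0.10,
    "recommendation_letters": 0.10,
    "interview_summary": 0.15,
}

def recommend(candidate: dict) -> str:
    """Return a recommendation label from feature scores in [0, 1]."""
    score = sum(WEIGHTS[f] * candidate[f] for f in WEIGHTS)  # weighted sum in [0, 1]
    thresholds = [0.2, 0.4, 0.6, 0.8]                        # illustrative cut points
    level = sum(score >= t for t in thresholds)              # count of thresholds passed
    return LABELS[level]

# Example: an "average" candidate similar to the one described in Appendix 1.
candidate = {"cv": 0.6, "university_rating": 0.7, "class_rank": 0.5,
             "experience": 0.6, "personality_test": 0.4,
             "recommendation_letters": 0.5, "interview_summary": 0.7}
print(recommend(candidate))  # -> "Neutral" for these illustrative scores
```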

Candidate’s description

The data of the following candidate was entered into the system:

The candidate is an average-performing graduate student (ranked 48th out of 103 students in the class). The candidate worked and did voluntary service while studying, and was appreciated by co-workers in both places.

The internal interviewer’s impression:

  • The candidate has relevant experience for the position.

  • My impression from the candidate’s recommendation letters from former employers is that the candidate fulfills the job responsibilities as required.

  • My impression from the internal interview is that the candidate has good communication skills.

Interviewer’s recommendation: We may consider proceeding with this candidate.

Explanations descriptions

The following explanations were used in the experiment:

Case-based explanation

A similar case (which received the same outcome) is the following candidate: “The candidate was an average-performing student with some relevant experience for the job. S/he was positively recommended by her/his co-workers and fulfills her/his job responsibilities as required. The candidate had a CV similar to yours, and the personality test results were also similar.”
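
A case-based explanation of this kind can be generated by retrieving the most similar past candidate that received the same outcome. The following minimal sketch illustrates one way to do so with a nearest-neighbour search; the toy data and the Euclidean distance are illustrative assumptions only.

```python
# Minimal sketch of how a case-based explanation could be produced:
# retrieve the most similar past candidate that received the same outcome.
# Data, feature encoding and distance metric are illustrative assumptions.
import numpy as np

def most_similar_case(query, past_cases, past_outcomes, query_outcome):
    """Return the index of the nearest past case with the same outcome."""
    same = [i for i, o in enumerate(past_outcomes) if o == query_outcome]
    dists = [np.linalg.norm(np.asarray(past_cases[i]) - np.asarray(query))
             for i in same]
    return same[int(np.argmin(dists))]

# Toy example: three numeric feature vectors for past candidates.
past_cases = [[0.5, 0.6, 0.4], [0.9, 0.8, 0.9], [0.55, 0.5, 0.45]]
past_outcomes = ["Neutral", "Strongly recommend", "Neutral"]
idx = most_similar_case([0.5, 0.55, 0.45], past_cases, past_outcomes, "Neutral")
print(f"Most similar case with the same outcome: candidate #{idx}")
```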

Certification-based explanation

The system was tested and verified by authorized experts and regulators for fairness towards different population segments, guarding against biases and discrimination, and was found to satisfy the required fairness constraints.
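
An audit behind such a certification could include computational fairness tests of the kind mentioned in the abstract. The following minimal sketch illustrates one such test, a demographic parity check on the system’s recommendations; the 0.1 tolerance and the toy data are illustrative assumptions only.

```python
# Minimal sketch of one test an auditor could run behind a certification-based
# explanation: demographic parity difference between two population segments.
# The 0.1 tolerance and the toy data are illustrative assumptions.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-recommendation rates between groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == "A"].mean()
    rate_b = y_pred[group == "B"].mean()
    return abs(rate_a - rate_b)

# Toy audit data: 1 = positively recommended, 0 = not recommended.
y_pred = [1, 0, 1, 1, 0, 1, 0, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity gap: {gap:.2f}")
print("Satisfies constraint" if gap <= 0.1 else "Violates constraint")
```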

Demographic-based explanation

The system’s outputs follow a normal distribution. Furthermore, it is known that:

  • 17% of candidates who are ranked in the top 10% in their graduating class are positively recommended by the system.

  • 36% of candidates with 10 years of relevant experience are negatively recommended by the system.

  • 28% of candidates with good communication skills in the internal interview are negatively recommended by the system.

  • 41% of candidates who were appreciated by former employers are negatively recommended by the system.
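
Statistics of this kind can be derived from the system’s historical predictions. The following minimal sketch shows one way to compute such conditional percentages; the record structure and feature names are illustrative assumptions only.

```python
# Minimal sketch of how demographic-based statistics like those above could be
# computed from historical predictions. Data and feature names are illustrative.
def share_with_outcome(records, condition, outcome):
    """Percentage of records matching `condition` that received `outcome`."""
    matching = [r for r in records if condition(r)]
    hits = sum(1 for r in matching if r["recommendation"] == outcome)
    return 100.0 * hits / len(matching) if matching else float("nan")

# Toy historical records of past candidates and the system's recommendation.
records = [
    {"top_10_percent": True,  "recommendation": "positive"},
    {"top_10_percent": True,  "recommendation": "negative"},
    {"top_10_percent": False, "recommendation": "negative"},
    {"top_10_percent": True,  "recommendation": "positive"},
]
pct = share_with_outcome(records, lambda r: r["top_10_percent"], "positive")
print(f"{pct:.0f}% of top-10% candidates are positively recommended")
```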

Input influence-based explanation

Our predictive model assessed the candidate’s information in order to predict his/her chances of progressing in the recruitment process. The more + signs or − signs a factor has, the more positively or negatively it impacted the probability of being recommended. Unimportant factors are not indicated. The following are the features and their impact on the outcome for this particular candidate:

  • Rating of the university (++).

  • Candidate’s ranking in the university (+).

  • Candidate’s CV (+).

  • Candidate’s personality test results (−).

  • Candidate’s experience (+++).

  • Candidate’s recommendation letters (−−).

  • Internal interviewer’s recommendation (++).
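
Influence signs of this kind can be derived from signed per-feature contributions, for example linear-model coefficients multiplied by feature values or SHAP-style attribution scores. The following minimal sketch shows one way to render such contributions in the +/− notation used above; the contribution values, the sign step and the cutoff for “unimportant” factors are illustrative assumptions only.

```python
# Minimal sketch (not the authors' method) of turning signed feature
# contributions into the +/− notation used above. The contribution values,
# the 0.1 step per sign and the 0.05 "unimportant" cutoff are assumptions.
def influence_signs(contributions, step=0.1, cutoff=0.05):
    """Map each signed contribution to a run of '+' or '−' signs."""
    signs = {}
    for feature, value in contributions.items():
        if abs(value) < cutoff:
            continue                      # unimportant factors are not indicated
        n = max(1, round(abs(value) / step))
        signs[feature] = ("+" if value > 0 else "−") * n
    return signs

# Illustrative contribution values chosen to reproduce the pattern above.
contributions = {
    "university_rating": 0.22, "class_rank": 0.08, "cv": 0.09,
    "personality_test": -0.07, "experience": 0.31,
    "recommendation_letters": -0.18, "interviewer_recommendation": 0.19,
    "hobbies": 0.01,                      # below cutoff -> omitted
}
for feature, s in influence_signs(contributions).items():
    print(f"{feature}: {s}")
```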

Sensitivity-based explanation

The following changes in the input features would change the outcome of the system:

  • If this candidate were ranked in the top 10 percent of her/his graduating class, the likelihood of a positive recommendation by the system would increase by 23%.

  • If this candidate had another year of experience relevant to this job, the likelihood of a positive recommendation by the system would increase by 34%.

  • If this candidate had shown better communication skills in the internal interview, the likelihood of a positive recommendation by the system would increase by 15%.

  • 12% of candidates who were recommended by the internal interviewer are positively recommended by the system.
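
A sensitivity-based explanation of this kind can be produced by perturbing one input feature at a time and reporting the resulting change in the predicted likelihood of a positive recommendation. The following minimal sketch illustrates the idea with a toy logistic model; the weights and feature names are illustrative assumptions and are not the model used in the study.

```python
# Minimal sketch (not the authors' method) of a sensitivity-based explanation:
# perturb one input feature and report how the predicted probability of a
# positive recommendation changes. The toy logistic model is an assumption.
import math

WEIGHTS = {"class_rank": 1.2, "experience_years": 0.4, "communication": 0.9}
BIAS = -3.0

def p_positive(candidate: dict) -> float:
    """Probability of a positive recommendation under the toy logistic model."""
    z = BIAS + sum(WEIGHTS[f] * candidate[f] for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def sensitivity(candidate: dict, feature: str, new_value: float) -> float:
    """Change in probability (percentage points) if `feature` is set to `new_value`."""
    altered = dict(candidate, **{feature: new_value})
    return 100.0 * (p_positive(altered) - p_positive(candidate))

candidate = {"class_rank": 0.5, "experience_years": 3, "communication": 0.6}
delta = sensitivity(candidate, "experience_years", 4)  # one more year of experience
print(f"One more year of experience changes the likelihood by {delta:+.0f} percentage points")
```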

Cite this article

Shulner-Tal, A., Kuflik, T. & Kliger, D. Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf Technol 24, 2 (2022). https://doi.org/10.1007/s10676-022-09623-4
