
Making ML Models Fairer Through Explanations: The Case of LimeOut

  • Conference paper
  • First Online:

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12602)

Abstract

Algorithmic decisions are now being made on a daily basis, based on Machine Learning (ML) processes that may be complex and biased. This raises several concerns, given the critical impact that biased decisions may have on individuals or on society as a whole. Not only do unfair outcomes affect human rights, they also undermine public trust in ML and AI. In this paper we address fairness issues of ML models based on decision outcomes, and we show how the simple idea of “feature dropout” followed by an “ensemble approach” can improve model fairness. To illustrate, we revisit the case of “LimeOut”, which was proposed to tackle “process fairness”, a notion that measures a model’s reliance on sensitive or discriminatory features. Given a classifier, a dataset and a set of sensitive features, LimeOut first assesses whether the classifier is fair by checking its reliance on sensitive features using “LIME explanations”. If the classifier is deemed unfair, LimeOut applies feature dropout to obtain a pool of classifiers, which are then combined into an ensemble classifier that was empirically shown to be less dependent on sensitive features without compromising accuracy. We present experiments on multiple datasets and several state-of-the-art classifiers, which show that LimeOut’s classifiers improve (or at least maintain) not only process fairness but also other fairness metrics such as individual and group fairness, equal opportunity, and demographic parity.

This research was partially supported by TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA No 952215, and by the Inria Project Lab “Hybrid Approaches for Interpretable AI” (HyAIAI).
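The “feature dropout followed by an ensemble approach” idea described in the abstract is simple enough to sketch in code. The following minimal Python example, written against scikit-learn (which the paper uses, see Note 13), is only an illustration and not the authors' implementation (the latter is available in the gitlab repository given in Note 11): the helper names, the logistic-regression base learner, and the soft-voting combination are assumptions made for this example.

    # Minimal sketch of the "feature dropout + ensemble" idea behind LimeOut.
    # Not the authors' implementation; helper names and the base learner are
    # illustrative choices made for this example.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def dropout_pool(X: pd.DataFrame, y, sensitive_features, base=LogisticRegression):
        """Train one classifier per dropped sensitive feature, plus one with
        all sensitive features dropped, and return the resulting pool."""
        pool = []
        subsets = [[f] for f in sensitive_features] + [list(sensitive_features)]
        for dropped in subsets:
            clf = base(max_iter=1000).fit(X.drop(columns=dropped), y)
            pool.append((dropped, clf))
        return pool

    def ensemble_predict_proba(pool, X: pd.DataFrame):
        """Combine the pool by averaging predicted class probabilities
        (a simple soft-voting ensemble)."""
        probas = [clf.predict_proba(X.drop(columns=dropped)) for dropped, clf in pool]
        return np.mean(probas, axis=0)

The ensemble's final label can then be taken as the argmax of the averaged probabilities, and it is this pooled model that is compared against the original classifier on accuracy and on the fairness metrics.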


Notes

  1. It is also referred to as disparate treatment or predictive parity.

  2. It is also referred to as group fairness.

  3. It is also referred to as disparate mistreatment.

  4. It is also referred to as procedural fairness.

  5. In [2], k was set to 10 (see the illustrative sketch after these notes).

  6. http://archive.ics.uci.edu/ml/datasets/Adult.

  7. https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data).

  8. https://www.consumerfinance.gov/data-research/hmda/.

  9. https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients.

  10. http://www.seaphe.org/databases.php.

  11. The gitlab repository of LimeOut can be found at https://gitlab.inria.fr/orpailleur/limeout.

  12. https://machinelearningmastery.com/threshold-moving-for-imbalanced-classification/.

  13. We used version 0.23.1 of Scikit-learn.

  14. https://github.com/Trusted-AI/AIF360.
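Regarding the fairness check that precedes the dropout step: the abstract describes assessing a classifier's reliance on sensitive features through LIME explanations, with k = 10 top features per explanation in [2] (Note 5). The sketch below, using the standard lime package, simply counts how often a sensitive feature appears among the top-k LIME features over a sample of instances; this counting-based aggregation and the substring matching on feature descriptions are assumptions made for the illustration, not the exact procedure of the paper.

    # Hedged sketch of a LIME-based reliance check: how often does a sensitive
    # feature appear among the top-k LIME features (k = 10 in [2])?
    import numpy as np
    from lime.lime_tabular import LimeTabularExplainer

    def sensitive_reliance(clf, X_train, X_sample, feature_names, sensitive, k=10):
        explainer = LimeTabularExplainer(
            np.asarray(X_train),
            feature_names=feature_names,
            discretize_continuous=True,
        )
        hits = 0
        for row in np.asarray(X_sample):
            exp = explainer.explain_instance(row, clf.predict_proba, num_features=k)
            # as_list() yields (feature description, weight) pairs, e.g. ("age <= 30.0", 0.12)
            top = [name for name, _ in exp.as_list()]
            if any(s in name for s in sensitive for name in top):
                hits += 1
        # Fraction of explained instances whose top-k features include a sensitive one.
        return hits / len(X_sample)

If this reliance is judged too high for the given sensitive features, the dropout-and-ensemble step sketched after the abstract is applied.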

References

  1. Bellamy, R.K.E., et al.: AI Fairness 360: an extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. ArXiv abs/1810.01943 (2018)

  2. Bhargava, V., Couceiro, M., Napoli, A.: LimeOut: an ensemble approach to improve process fairness. In: Koprinska, I., et al. (eds.) ECML PKDD 2020 Workshops. Communications in Computer and Information Science, vol. 1323, pp. 475–491. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-65965-3_32

  3. Binns, R.: On the apparent conflict between individual and group fairness. In: FAT 2020, pp. 514–524 (2020)

  4. Chouldechova, A.: Fair prediction with disparate impact: a study of bias in recidivism prediction instruments. Big Data 5(2), 153–163 (2017)

  5. Dwork, C., et al.: Fairness through awareness. In: Innovations in Theoretical Computer Science, pp. 214–226. ACM (2012)

  6. Dimanov, B., et al.: You shouldn’t trust me: learning models which conceal unfairness from multiple explanation methods. In: ECAI 2020, pp. 2473–2480 (2020)

  7. Grgić-Hlača, N., et al.: Beyond distributive fairness in algorithmic decision making: feature selection for procedurally fair learning. In: AAAI 2018, pp. 51–60 (2018)

  8. Grgić-Hlača, N., et al.: The case for process fairness in learning: feature selection for fair decision making. In: NIPS Symposium on Machine Learning and the Law (2016)

  9. Hardt, M., et al.: Equality of opportunity in supervised learning. In: NIPS 2016 (2016)

  10. van der Linden, I., Haned, H., Kanoulas, E.: Global aggregations of local explanations for black box models. ArXiv abs/1907.03039 (2019)

  11. Lundberg, S.M., Lee, S.: A unified approach to interpreting model predictions. In: NIPS 2017, pp. 4765–4774 (2017)

  12. Pedregosa, F., et al.: Scikit-learn: machine learning in Python. JMLR 12, 2825–2830 (2011)

  13. Ribeiro, M.T., Singh, S., Guestrin, C.: Anchors: high-precision model-agnostic explanations. In: AAAI 2018, pp. 1527–1535 (2018)

  14. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier. In: SIGKDD 2016, pp. 1135–1144 (2016)

  15. Speicher, T., et al.: A unified approach to quantifying algorithmic unfairness: measuring individual & group unfairness via inequality indices. In: SIGKDD 2018, pp. 2239–2248 (2018)

  16. Zafar, M.B., et al.: Fairness beyond disparate treatment & disparate impact: learning classification without disparate mistreatment. In: WWW 2017, pp. 1171–1180 (2017)

  17. Zafar, M.B., et al.: Fairness constraints: mechanisms for fair classification. In: AISTATS 2017, pp. 962–970 (2017)

  18. Zemel, R., et al.: Learning fair representations. In: ICML 2013, pp. 325–333 (2013)


Author information


Correspondence to Miguel Couceiro.


A Appendix

Fig. 5. Fairness metrics for the HMDA dataset (first and second rows) and the Default dataset (third and fourth rows). For both datasets, fewer of the original models were deemed unfair, namely ADA, Bagging and RF on HMDA, and ADA and Bagging on Default. Even though these models were deemed unfair by LimeOut, most of the fairness metrics actually indicate rather fair behaviour by both the original and LimeOut’s ensemble models.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Alves, G., Bhargava, V., Couceiro, M., Napoli, A. (2021). Making ML Models Fairer Through Explanations: The Case of LimeOut. In: van der Aalst, W.M.P., et al. (eds.) Analysis of Images, Social Networks and Texts. AIST 2020. Lecture Notes in Computer Science, vol. 12602. Springer, Cham. https://doi.org/10.1007/978-3-030-72610-2_1


  • DOI: https://doi.org/10.1007/978-3-030-72610-2_1

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-72609-6

  • Online ISBN: 978-3-030-72610-2

  • eBook Packages: Computer Science, Computer Science (R0)
