
Clustering-Based Subgroup Detection for Automated Fairness Analysis

  • Conference paper
  • In: New Trends in Database and Information Systems (ADBIS 2022)

Abstract

Fairness in Artificial Intelligence is a major requirement for trust in ML-supported decision making. Until now, fairness analysis has depended on human interaction, for example the specification of relevant attributes to consider. In this paper, we propose a clustering-based subgroup detection method to automate this process. We analyse 10 (sub-)clustering approaches with three fairness metrics on three datasets and identify SLINK (single-linkage hierarchical clustering) as an optimal candidate for subgroup detection.
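The pipeline the abstract describes, clustering the data into candidate subgroups and then scoring each subgroup with a fairness metric, can be sketched as follows. This is not the authors' implementation: the toy data, the two-cluster cut, and the use of statistical parity (gap between a cluster's positive-prediction rate and the overall rate) are illustrative assumptions; `scipy`'s single-linkage method corresponds to the SLINK algorithm named in the abstract.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Toy tabular data: two features and binary model predictions.
X = rng.normal(size=(200, 2))
X[100:] += 3.0  # shift half the points to create two separable groups
y_pred = (X[:, 0] + rng.normal(scale=0.5, size=200) > 1.5).astype(int)

# Single-linkage agglomerative clustering (the SLINK criterion),
# cut into two candidate subgroups.
Z = linkage(X, method="single")
labels = fcluster(Z, t=2, criterion="maxclust")

# Statistical parity check: compare each cluster's positive-prediction
# rate with the overall rate; a large gap flags a candidate subgroup
# for closer fairness inspection.
overall_rate = y_pred.mean()
for c in np.unique(labels):
    mask = labels == c
    gap = abs(y_pred[mask].mean() - overall_rate)
    print(f"cluster {c}: size={mask.sum()}, parity gap={gap:.3f}")
```

In this sketch the clusters stand in for automatically detected subgroups, replacing the manual attribute specification (e.g. protected attributes such as race or gender) that the paper seeks to automate; other fairness metrics could be substituted in the per-cluster loop.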


Notes

  1. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  2. https://archive.ics.uci.edu/ml/datasets/South+German+Credit+%28UPDATE%29
  3. https://meps.ahrq.gov/mepsweb/data_stats/download_data_files_detail.jsp?cboPufNumber=HC-183


Author information

Correspondence to Jero Schäfer.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Schäfer, J., Wiese, L. (2022). Clustering-Based Subgroup Detection for Automated Fairness Analysis. In: Chiusano, S., et al. New Trends in Database and Information Systems. ADBIS 2022. Communications in Computer and Information Science, vol 1652. Springer, Cham. https://doi.org/10.1007/978-3-031-15743-1_5

  • DOI: https://doi.org/10.1007/978-3-031-15743-1_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-15742-4

  • Online ISBN: 978-3-031-15743-1

  • eBook Packages: Computer Science, Computer Science (R0)
