
Parity-based cumulative fairness-aware boosting

Knowledge and Information Systems


Abstract

Data-driven AI systems can lead to discrimination on the basis of protected attributes like gender or race. One cause is societal bias encoded in the training data (e.g., under-representation of females in the tech workforce), which is aggravated in the presence of unbalanced class distributions (e.g., when "hired" is the minority class in a hiring application). State-of-the-art fairness-aware machine learning approaches focus on preserving the overall classification accuracy while mitigating discrimination. In the presence of class imbalance, such methods may further aggravate discrimination by denying an already under-represented group (e.g., females) fundamental rights to equal social privileges (e.g., equal access to employment). To this end, we propose AdaFair, a fairness-aware boosting ensemble that changes the data distribution at each round, taking into account not only the class errors but also the fairness-related performance of the model, defined cumulatively based on the partial ensemble. In addition to the in-training boosting of the group discriminated against in each round, AdaFair directly tackles class imbalance in a post-training phase by optimizing the number of ensemble learners for balanced error performance. AdaFair accommodates different parity-based fairness notions and effectively mitigates discriminatory outcomes.
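To make the mechanism concrete, the following is a minimal, illustrative Python sketch of one cumulative fairness-aware boosting round. The function names, the form of the fairness-related cost u, and the reweighting rule are assumptions made for illustration based on the description above; they are not the paper's exact equations (the released implementation is linked in the Notes).

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def statistical_parity(y_pred, protected):
        # delta SP: gap in positive-prediction rates, non-protected minus protected
        return np.mean(y_pred[~protected] == 1) - np.mean(y_pred[protected] == 1)

    def partial_ensemble_predict(learners, alphas, X):
        # sign of the weighted vote of the weak learners trained so far
        score = sum(a * np.where(h.predict(X) == 1, 1.0, -1.0)
                    for h, a in zip(learners, alphas))
        return np.where(score >= 0, 1, 0)

    def cumulative_fair_boosting_sketch(X, y, protected, rounds=20, epsilon=0.0):
        n = len(y)
        w = np.full(n, 1.0 / n)                 # data distribution over instances
        learners, alphas = [], []
        for _ in range(rounds):
            stump = DecisionTreeClassifier(max_depth=1)
            stump.fit(X, y, sample_weight=w)
            pred = stump.predict(X)
            err = np.average(pred != y, weights=w)
            alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-10))
            learners.append(stump)
            alphas.append(alpha)

            # cumulative fairness: evaluate the PARTIAL ensemble built so far
            ens_pred = partial_ensemble_predict(learners, alphas, X)
            d_sp = statistical_parity(ens_pred, protected)

            # illustrative fairness-related cost u: boost the group the partial
            # ensemble currently discriminates against, if the disparity exceeds epsilon
            u = np.zeros(n)
            if abs(d_sp) > epsilon:
                disadvantaged = protected if d_sp > 0 else ~protected
                u[disadvantaged & (ens_pred != y)] = abs(d_sp)

            # AdaBoost-style reweighting, amplified by the fairness cost
            w *= np.exp(alpha * (pred != y)) * (1.0 + u)
            w /= w.sum()
        return learners, alphas

With y encoded in {0, 1} and protected a boolean mask, this loop combines the two ingredients named in the abstract: class-error-driven reweighting and a cumulative, partial-ensemble-based fairness signal.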



Notes

  1. AdaFair (source code and data) is available at: https://iosifidisvasileios.github.io/AdaFair.

  2. The notations \(u_i^j\) and \(\epsilon\) bear the same meaning for the rest of the section.


Acknowledgements

The work is supported by the Volkswagen Foundation project BIAS (“Bias and Discrimination in Big Data and Algorithmic Processing. Philosophical Assessments, Legal Dimensions, and Technical Solutions”) within the initiative “AI and the Society of the Future”.

Author information

Corresponding author

Correspondence to Vasileios Iosifidis.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix


1.1 Cumulative versus non-cumulative fairness

Statistical Parity In Fig. 8, we compare AdaFair with AdaFair NoCumul w.r.t. statistical parity for each dataset. AdaFair NoCumul produces higher discriminatory outcomes than AdaFair on all datasets: a 31% increase for the Adult census dataset, 12% for the Bank dataset, 15% for Compass, and 15% for the KDD census dataset. The cumulative notion of fairness thus allows AdaFair to mitigate discriminatory outcomes effectively, in contrast to the non-cumulative version.

In Fig. 9, we compare the per-round \(\delta SP\) of AdaFair NoCumul and AdaFair. \(\delta SP\) refers to the fairness-related cost (u) assigned to instances based on the discriminatory behavior of the model (Eq. (9)). We observe that, on all datasets, AdaFair NoCumul produces fairness-related costs that fluctuate heavily across rounds, in contrast to AdaFair. The non-cumulative version cannot stabilize the fairness-related costs since it depends on the behavior of individual weak learners rather than on the cumulative behavior of the model.
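The cumulative/non-cumulative distinction can be made concrete with a small sketch (illustrative names and cost form, not Eq. (9) itself): the non-cumulative variant derives its per-round cost from the current weak learner's predictions alone, whereas the cumulative variant derives it from the partial ensemble's predictions, which is why the latter's costs are far more stable across rounds.

    import numpy as np

    def delta_sp(y_pred, protected):
        # statistical parity difference: positive-rate gap, non-protected minus protected
        return np.mean(y_pred[~protected] == 1) - np.mean(y_pred[protected] == 1)

    def round_costs(pred_for_cost, protected, epsilon=0.0):
        # illustrative per-instance fairness-related cost u for one boosting round:
        # only the group disadvantaged by `pred_for_cost` receives a non-zero cost
        d = delta_sp(pred_for_cost, protected)
        u = np.zeros(len(protected))
        if abs(d) > epsilon:
            u[protected if d > 0 else ~protected] = abs(d)
        return u

    # toy round: the same instant in training, two different cost bases
    protected     = np.array([True, True, True, False, False, False])
    learner_pred  = np.array([0, 0, 0, 1, 1, 1])   # current weak learner only
    ensemble_pred = np.array([1, 0, 1, 1, 0, 1])   # partial ensemble up to this round

    u_noncumul = round_costs(learner_pred, protected)   # large, learner-specific costs
    u_cumul    = round_costs(ensemble_pred, protected)  # small, stable costs
    print(u_noncumul, u_cumul)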

Fig. 9 Statistical parity, fairness-related costs per boosting round: AdaFair versus AdaFair NoCumul

Equal Opportunity In Fig. 10, we compare AdaFair with AdaFair NoCumul w.r.t. equal opportunity for each dataset. As in the statistical parity case, AdaFair NoCumul produces more discriminatory outcomes than AdaFair: a 15% increase for the Adult census dataset, 2% for the Bank dataset, 12% for Compass, and 8% for the KDD census dataset.

Similar behavior to statistical parity is observed in Fig. 11, where we report the \(\delta \text {FNR}\) values for the cumulative and non-cumulative approaches; the \(\delta \text {FNR}\) values are employed as fairness-related costs and are derived from Eq. (11). The non-cumulative version is unstable and produces highly fluctuating fairness-related costs, in contrast to AdaFair, on all datasets.
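For completeness, a minimal sketch of the false-negative-rate gap used as the cost basis under equal opportunity (an illustration of the quantity, not the exact form of Eq. (11)):

    import numpy as np

    def delta_fnr(y_true, y_pred, protected):
        # false-negative-rate gap: FNR of the protected group minus FNR of the rest,
        # where FNR = P(y_pred = 0 | y_true = 1) computed per group
        def fnr(group):
            positives = group & (y_true == 1)
            return np.mean(y_pred[positives] == 0) if positives.any() else 0.0
        return fnr(protected) - fnr(~protected)

    # toy usage: the protected group misses more of its true positives
    protected = np.array([True, True, True, False, False, False])
    y_true    = np.array([1, 1, 1, 1, 1, 0])
    y_pred    = np.array([0, 0, 1, 1, 1, 0])
    print(delta_fnr(y_true, y_pred, protected))  # 2/3 - 0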

Fig. 10 Equal opportunity: AdaFair versus AdaFair NoCumul

Fig. 11 Equal opportunity, fairness-related costs per boosting round: AdaFair versus AdaFair NoCumul

Fig. 12 Statistical parity: impact of parameter c

1.2 The effect of balanced error

We show the impact of parameter c for all the employed fairness notions in Figs. 12 and 13.

Statistical Parity In Fig. 12, we show the impact of parameter c in the case of statistical parity. All the imbalanced datasets show their worst balanced accuracy when \(c=0\), although statistical parity is close to 0. As c increases, the balanced accuracy increases while statistical parity remains close to 0. Compared with the other two fairness notions, however, the balanced accuracy is not affected as strongly. This is caused by the fairness notion itself, which forces parity between the protected and non-protected groups on the predicted outcomes; statistical parity can thus indirectly force AdaFair to predict more instances as positive.

Equal Opportunity In Fig. 13, we show the impact of c when AdaFair tunes for equal opportunity. Similar to disparate mistreatment, AdaFair maintains low discrimination w.r.t. equal opportunity while its balanced accuracy increases with c. For example, AdaFair's balanced accuracy increases by 8% from \(c=0\) to \(c=1\) while equal opportunity stays close to 0. This behavior holds for all the employed imbalanced datasets. For the Compass dataset, c does not affect the performance significantly since the dataset is class-balanced.
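The role of c can be illustrated with a sketch of the post-training step that selects the number of weak learners. The convex combination of error rate and balanced error rate below is an assumed form chosen for illustration; the exact criterion is defined in the main text.

    import numpy as np

    def balanced_error(y_true, y_pred):
        # BER = mean of the per-class error rates (1 - balanced accuracy)
        fnr = np.mean(y_pred[y_true == 1] == 0)   # error on the positive class
        fpr = np.mean(y_pred[y_true == 0] == 1)   # error on the negative class
        return 0.5 * (fnr + fpr)

    def select_num_learners(learners, alphas, X_val, y_val, c=0.5):
        # pick the ensemble size theta whose partial ensemble minimises a
        # c-weighted combination of overall error and balanced error
        best_theta, best_obj = 1, np.inf
        score = np.zeros(len(y_val))
        for theta, (h, a) in enumerate(zip(learners, alphas), start=1):
            score += a * np.where(h.predict(X_val) == 1, 1.0, -1.0)
            y_pred = np.where(score >= 0, 1, 0)
            obj = (1.0 - c) * np.mean(y_pred != y_val) + c * balanced_error(y_val, y_pred)
            if obj < best_obj:
                best_obj, best_theta = obj, theta
        return best_theta

With c = 0, only the overall error matters and the minority class can be sacrificed; with c close to 1, the selection favours partial ensembles with balanced per-class errors, matching the trend reported above.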

Fig. 13 Equal opportunity: impact of parameter c

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Iosifidis, V., Roy, A. & Ntoutsi, E. Parity-based cumulative fairness-aware boosting. Knowl Inf Syst 64, 2737–2770 (2022). https://doi.org/10.1007/s10115-022-01723-3


