
Machine Learning and Knowledge Discovery in Databases

Lecture Notes in Computer Science, vol. 7524, pp. 35-50

Fairness-Aware Classifier with Prejudice Remover Regularizer

  • Toshihiro Kamishima, National Institute of Advanced Industrial Science and Technology (AIST)
  • Shotaro Akaho, National Institute of Advanced Industrial Science and Technology (AIST)
  • Hideki Asoh, National Institute of Advanced Industrial Science and Technology (AIST)
  • Jun Sakuma, University of Tsukuba; Japan Science and Technology Agency

Abstract

With the spread of data mining technologies and the accumulation of social data, such technologies and data are being used for determinations that seriously affect individuals' lives. For example, credit scoring is frequently determined based on past credit records together with statistical prediction techniques. Needless to say, such determinations must be nondiscriminatory and fair with respect to sensitive features, such as race, gender, and religion. Several researchers have recently begun to develop analysis techniques that are aware of social fairness or discrimination. They have shown that simply avoiding the use of sensitive features is insufficient for eliminating biased determinations, because sensitive information can still exert an indirect influence through correlated features. In this paper, we first discuss three causes of unfairness in machine learning. We then propose a regularization approach that is applicable to any prediction algorithm with probabilistic discriminative models. We further apply this approach to logistic regression and empirically show its effectiveness and efficiency.
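To make the abstract's idea concrete, the following is a minimal sketch, not the authors' reference implementation, of a logistic regression whose loss is augmented with a fairness penalty in the spirit of the prejudice remover: a plug-in estimate of the mutual information between the model's prediction and a binary sensitive feature, computed here from per-group mean predicted probabilities. The names `eta` (fairness weight) and `C` (L2 weight) and the exact estimator are assumptions for illustration; the paper's regularizer is derived in more detail.

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pr_loss(w, X, y, s, eta=1.0, C=1e-2):
    """Negative log-likelihood + prejudice-remover-style penalty + L2."""
    eps = 1e-12
    p = sigmoid(X @ w)  # model's P(y=1 | x) for each sample
    nll = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

    # Plug-in estimate of the mutual information I(y_hat; s):
    # sum over groups of P(s) * KL( P(y|s) || P(y) ), with the label
    # distributions estimated from mean predicted probabilities.
    mi = 0.0
    p_all = np.clip(p.mean(), eps, 1 - eps)        # marginal P(y=1)
    for v in np.unique(s):
        mask = (s == v)
        pr_s = mask.mean()                          # P(s=v)
        p_v = np.clip(p[mask].mean(), eps, 1 - eps) # P(y=1 | s=v)
        mi += pr_s * (p_v * np.log(p_v / p_all)
                      + (1 - p_v) * np.log((1 - p_v) / (1 - p_all)))

    return nll + eta * mi + C * np.sum(w ** 2)

def fit(X, y, s, eta=1.0):
    """Fit weights by minimizing the regularized loss (numerical gradients)."""
    w0 = np.zeros(X.shape[1])
    res = minimize(pr_loss, w0, args=(X, y, s, eta), method="L-BFGS-B")
    return res.x

# Toy usage: the second feature is correlated with the sensitive attribute s,
# so avoiding s alone would not remove its indirect influence.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=500)
X = np.column_stack([rng.normal(size=500),
                     s + rng.normal(scale=0.5, size=500),
                     np.ones(500)])  # last column is a bias term
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=500) > 0.5).astype(float)
w = fit(X, y, s, eta=5.0)
```

Raising `eta` trades predictive accuracy for independence between the prediction and the sensitive feature; with `eta=0` the sketch reduces to ordinary L2-regularized logistic regression.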

Keywords

fairness · discrimination · logistic regression · classification · social responsibility · information theory