
FairALM: Augmented Lagrangian Method for Training Fair Models with Little Regret

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12357)

Abstract

Algorithmic decision making based on computer vision and machine learning methods continues to permeate our lives. But issues related to the biases of these models, and the extent to which they treat certain segments of the population unfairly, have led to legitimate concerns. There is broad agreement that, because of biases in the datasets we present to the models, fairness-oblivious training will lead to unfair models. An interesting topic is the study of mechanisms by which the de novo design or training of a model can be informed by fairness measures. Here, we study strategies for imposing fairness concurrently while training the model. While many fairness-based approaches in vision rely on training adversarial modules together with the primary classification/regression task, in an effort to remove the influence of the protected attribute or variable, we show how ideas based on well-known optimization concepts can provide a simpler alternative. In our proposal, imposing fairness requires only specifying the protected attribute and invoking our routine. We provide a detailed technical analysis and present experiments demonstrating that various fairness measures can be reliably imposed on a number of training tasks in vision in an interpretable manner.
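
To illustrate the idea, the sketch below (our illustration, not the authors' released code) folds a demographic-parity gap into a standard PyTorch training loop as an augmented Lagrangian term: the constraint violation enters the loss linearly through a dual variable and quadratically through a penalty, and the dual variable is updated by ascent after each optimizer step. The model, data, and step sizes are placeholders.

    # A minimal sketch, assuming PyTorch, a binary protected attribute s, and a
    # demographic-parity constraint; names and values here are illustrative.
    import torch
    import torch.nn as nn

    def fairness_gap(logits, s):
        # Difference in mean predicted scores between the two protected groups.
        p = torch.sigmoid(logits)
        return p[s == 1].mean() - p[s == 0].mean()

    torch.manual_seed(0)
    x = torch.randn(512, 16)                 # synthetic features
    y = torch.randint(0, 2, (512,)).float()  # synthetic labels
    s = torch.randint(0, 2, (512,))          # synthetic protected attribute

    model = nn.Linear(16, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    bce = nn.BCEWithLogitsLoss()

    lam, eta = 0.0, 1.0                      # dual variable, dual step size
    for epoch in range(50):
        logits = model(x).squeeze(-1)
        gap = fairness_gap(logits, s)
        # Augmented Lagrangian: task loss + linear dual term + quadratic penalty.
        loss = bce(logits, y) + lam * gap + 0.5 * eta * gap ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
        lam += eta * gap.item()              # dual ascent on the violation

In this form, "specifying the protected attribute" amounts to supplying s: the routine needs no adversarial module, only a constraint function and the dual update.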


Acknowledgments

The authors are grateful to Akshay Mishra for help and suggestions. Research supported by NIH R01 AG062336, NSF CAREER RI #1252725, NSF 1918211, NIH RF1 AG05931201A1, NIH RF1 AG05986901, UW CPCP (U54 AI117924), and American Family Insurance. Sathya Ravi was also supported by UIC-ICR start-up funds. Correspondence should be directed to Ravi or Singh.

Supplementary material

Supplementary material 1: 504453_1_En_22_MOESM1_ESM.pdf (PDF, 4.6 MB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. University of Wisconsin-Madison, Madison, USA
  2. University of Illinois at Chicago, Chicago, USA
