F3: Fair and Federated Face Attribute Classification with Heterogeneous Data

  • Conference paper
  • First Online:
Advances in Knowledge Discovery and Data Mining (PAKDD 2023)

Abstract

Fairness across demographic groups is an essential criterion for face-related tasks, with Face Attribute Classification (FAC) being a prominent example. Simultaneously, Federated Learning (FL) is gaining traction as a scalable paradigm for distributed training. In FL, client models trained on private local datasets are aggregated by a central server. Existing FL approaches require data homogeneity to ensure fairness, but this assumption is restrictive in real-world settings: for example, geographically distant or closely associated clients may hold heterogeneous data. In this paper, we observe that existing techniques for ensuring fairness are not viable for FL with data heterogeneity. We introduce F3, an FL framework for fair FAC under data heterogeneity. Within F3, we propose two methodologies, (i) heuristic-based and (ii) gradient-based, that improve fairness across demographic groups without requiring the data homogeneity assumption. We demonstrate the efficacy of our approaches through empirically observed fairness measures and accuracy guarantees on popular face datasets. Using the Mahalanobis distance, we show that F3 strikes a practical balance between accuracy and fairness for FAC. The code is available at github.com/magnetar-iiith/F3.
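The abstract mentions federated aggregation of client models and the use of Mahalanobis distance to quantify the accuracy-fairness balance. The sketch below is a minimal, hypothetical illustration of those two ideas, not the authors' F3 implementation: the exponential fairness-gap weighting, the temperature parameter, and the (accuracy, fairness-gap) reference point are assumptions made purely for illustration.

```python
# Minimal sketch (assumptions, not the F3 method): (i) FedAvg-style aggregation where
# clients with smaller demographic-disparity gaps get larger weights, and (ii) a generic
# Mahalanobis-distance score for an (accuracy, fairness-gap) operating point.
import numpy as np

def aggregate(client_weights, client_fairness_gaps, temperature=1.0):
    """Average per-layer client weights, up-weighting clients with smaller
    fairness gaps (a hypothetical heuristic, not the paper's exact scheme)."""
    gaps = np.asarray(client_fairness_gaps, dtype=float)
    scores = np.exp(-gaps / temperature)          # lower gap -> higher weight
    alphas = scores / scores.sum()
    return [
        sum(a * w for a, w in zip(alphas, layer_group))
        for layer_group in zip(*client_weights)   # group each layer across clients
    ]

def mahalanobis(point, reference, cov):
    """Mahalanobis distance between an (accuracy, fairness-gap) point and a
    reference point, given a covariance estimate of the two metrics."""
    diff = np.asarray(point, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Toy usage: three clients, each with two layers of weights.
clients = [[np.ones((2, 2)) * c, np.ones(2) * c] for c in (1.0, 2.0, 3.0)]
global_model = aggregate(clients, client_fairness_gaps=[0.10, 0.05, 0.30])
score = mahalanobis(point=(0.91, 0.04), reference=(1.0, 0.0),
                    cov=np.array([[0.01, 0.0], [0.0, 0.005]]))
print(len(global_model), round(score, 3))
```

A smaller Mahalanobis score here indicates an operating point closer to the ideal of perfect accuracy with zero disparity, relative to the covariance of the two metrics; the reference point and covariance are placeholders for this toy example.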



Author information

Corresponding author

Correspondence to Manisha Padala.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Kanaparthy, S., Padala, M., Damle, S., Sarvadevabhatla, R.K., Gujar, S. (2023). F3: Fair and Federated Face Attribute Classification with Heterogeneous Data. In: Kashima, H., Ide, T., Peng, WC. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2023. Lecture Notes in Computer Science, vol. 13935. Springer, Cham. https://doi.org/10.1007/978-3-031-33374-3_38


  • DOI: https://doi.org/10.1007/978-3-031-33374-3_38

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-33373-6

  • Online ISBN: 978-3-031-33374-3

  • eBook Packages: Computer Science, Computer Science (R0)
