Federated Visual Classification with Real-World Data Distribution

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12355)

Abstract

Federated Learning enables visual models to be trained on-device, bringing advantages for user privacy (data need never leave the device), but challenges in terms of data diversity and quality. Whilst typical models in the datacenter are trained using data that are independent and identically distributed (IID), data at source are typically far from IID. Furthermore, differing quantities of data are typically available at each device (imbalance). In this work, we characterize the effect these real-world data distributions have on distributed learning, using as a benchmark the standard Federated Averaging (FedAvg) algorithm. To do so, we introduce two new large-scale datasets for species and landmark classification, with realistic per-user data splits that simulate real-world edge learning scenarios. We also develop two new algorithms (FedVC, FedIR) that intelligently resample and reweight over the client pool, bringing large improvements in accuracy and stability in training. The datasets are made available online.
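The FedAvg baseline referenced above can be summarized in a few lines: each participating client runs local SGD from the current global weights, and the server averages the resulting models weighted by client dataset size, which is exactly where imbalance enters. The sketch below is a minimal toy illustration on a linear-regression objective, not the paper's implementation; the function name `fedavg_round` and the toy objective are assumptions for illustration only.

```python
import numpy as np

def fedavg_round(global_weights, client_data, local_steps=1, lr=0.1):
    """One round of Federated Averaging on a toy least-squares objective.

    Each client starts from the current global weights, runs a few steps
    of local gradient descent on its own data, and the server averages
    the resulting local models weighted by client dataset size.
    """
    updates, sizes = [], []
    for X, y in client_data:
        w = global_weights.copy()
        for _ in range(local_steps):
            grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
            w -= lr * grad
        updates.append(w)
        sizes.append(len(y))
    # Size-weighted average: clients holding more data contribute more,
    # which is the source of the imbalance effects studied in the paper.
    return np.average(updates, axis=0, weights=np.asarray(sizes, dtype=float))

# Toy usage: two clients with imbalanced data drawn from the same model.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for n in (50, 200):  # deliberately unequal client sizes
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(50):
    w = fedavg_round(w, clients, local_steps=5, lr=0.1)
```

With IID toy data as above, the size-weighted average converges to the shared optimum; the paper's point is that under non-IID, imbalanced real-world splits this plain average degrades, motivating the reweighting (FedIR) and resampling (FedVC) variants.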

Notes

Acknowledgements

We thank Andre Araujo, Grace Chu, Tobias Weyand, Bingyi Cao, Huizhong Chen, Tomer Meron, and Hartwig Adam for their valuable feedback and support.

Supplementary material

Supplementary material 1: 504449_1_En_5_MOESM1_ESM.pdf (PDF, 612 KB)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. MIT CSAIL, Cambridge, USA
  2. Google Research, Seattle, USA