
Gossip Learning as a Decentralized Alternative to Federated Learning

  • István Hegedűs
  • Gábor Danner
  • Márk Jelasity
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11534)

Abstract

Federated learning is a distributed machine learning approach for computing models over data collected by edge devices. Most importantly, the data itself is not collected centrally; instead, a master-worker architecture is applied in which a master node performs the aggregation and the edge devices act as workers, not unlike the parameter server approach. Gossip learning also assumes that the data remains at the edge devices, but it requires no aggregation server or any other central component. In this empirical study, we present a thorough comparison of the two approaches. We examine the aggregated cost of machine learning in both cases, also considering a compression technique applicable in both approaches. We apply a real churn trace collected over mobile phones as well, and we also experiment with different distributions of the training data over the devices. Surprisingly, gossip learning actually outperforms federated learning in all the scenarios where the training data are distributed uniformly over the nodes, and it performs comparably to federated learning overall.
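To make the contrast with federated learning concrete, the following minimal Python sketch illustrates the basic gossip learning node loop the abstract refers to: each node periodically sends its current model to a random peer, and on receiving a model it merges it with its own (here by an age-weighted average) and then takes a local gradient step on its private data. The model class (logistic regression), the learning rate, the age-weighted merge, and all names used here are illustrative assumptions, not the authors' exact protocol; in particular, peer selection is abstracted as a plain list, whereas in practice a peer-sampling service would be used.

```python
import random
import numpy as np

class GossipNode:
    """Sketch of a gossip learning node (assumed logistic regression model)."""

    def __init__(self, node_id, x, y, dim, peers, learning_rate=0.1):
        self.node_id = node_id
        self.x, self.y = x, y      # local training example, never leaves the device
        self.w = np.zeros(dim)     # model parameters
        self.age = 0               # number of updates applied to this model
        self.peers = peers         # assumed stand-in for a peer-sampling service
        self.lr = learning_rate

    def local_update(self):
        # One logistic-regression SGD step on the local data.
        p = 1.0 / (1.0 + np.exp(-self.w @ self.x))
        self.w -= self.lr * (p - self.y) * self.x
        self.age += 1

    def on_receive(self, w_remote, age_remote):
        # Merge the received model with the local one (age-weighted average),
        # then update the merged model on the local data (merge-then-update).
        total = max(self.age + age_remote, 1)
        self.w = (self.age * self.w + age_remote * w_remote) / total
        self.age = max(self.age, age_remote)
        self.local_update()

    def gossip_once(self):
        # Called periodically: send the current model to a randomly chosen peer.
        peer = random.choice(self.peers)
        peer.on_receive(self.w.copy(), self.age)
```

In a federated learning round, by contrast, the selected workers would upload their updated models to a central master that averages them and broadcasts the result, rather than exchanging models directly with random peers.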


Copyright information

© IFIP International Federation for Information Processing 2019

Authors and Affiliations

  1. University of Szeged, Szeged, Hungary
  2. MTA SZTE Research Group on Artificial Intelligence, Szeged, Hungary
