
Making resource adaptive to federated learning with COTS mobile devices

  • Published:
Peer-to-Peer Networking and Applications

Abstract

Mobile devices are pervasive data producers that bridge users and emerging network services such as learning techniques. Today, mobile devices continuously generate user-related data at the edge of the network, most of which is privacy sensitive. Such privacy-sensitive and plentiful data is naturally coupled with modern distributed learning paradigms, e.g., federated learning. However, modern distributed learning paradigms, even those based on cloud computing, can still impose a heavy computation burden on resource-constrained mobile environments. In this paper, we tackle the challenge of modern distributed learning with resource-constrained mobile devices without delivering raw data off the device. To this end, we propose MobiFed, a resource-adaptive distributed learning system for mobile scenarios that addresses the resource heterogeneity of commercial off-the-shelf (COTS) mobile federated communities. Experimental results demonstrate that MobiFed not only reduces the system time overhead needed to reach the target learning accuracy (by up to \(47\%\)), but also improves the quality of the global federated learning system, achieving almost \(10\%\) or higher accuracy beyond the target performance compared with existing systems. In addition, MobiFed provides a user-friendly and self-managed resource mechanism, tolerates and recovers from computation faults, and offers excellent extensibility for plugging in new mobile devices.
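
To make the paradigm concrete, the following is a minimal, self-contained sketch of federated averaging with a simple resource-aware client-selection step: clients train locally on their own data, only model updates leave the device, and slower devices can be skipped in a given round. This is an illustrative sketch only; the function names (train_locally, select_clients, fed_avg), the capability model, and the selection policy are assumptions for exposition and do not represent MobiFed's actual design.

# Minimal federated-averaging sketch with resource-aware client selection.
# Illustrative only; not the MobiFed implementation.
import numpy as np

def train_locally(global_w, data, lr=0.1, epochs=1):
    # One client's local update: SGD on a least-squares objective over its own data.
    X, y = data
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5 * ||Xw - y||^2 / n
        w -= lr * grad
    return w

def select_clients(capabilities, round_budget):
    # Hypothetical resource-adaptive policy: pick clients whose estimated
    # per-round compute time (1 / capability) fits the round's time budget.
    eligible = [i for i, c in enumerate(capabilities) if 1.0 / c <= round_budget]
    return eligible if eligible else [int(np.argmax(capabilities))]

def fed_avg(clients, capabilities, dim, rounds=50, round_budget=0.5):
    global_w = np.zeros(dim)
    for _ in range(rounds):
        chosen = select_clients(capabilities, round_budget)
        updates, sizes = [], []
        for i in chosen:  # raw data never leaves the device; only updates are shared
            updates.append(train_locally(global_w, clients[i]))
            sizes.append(len(clients[i][1]))
        weights = np.array(sizes) / sum(sizes)  # weight updates by local dataset size
        global_w = sum(w * u for w, u in zip(weights, updates))
    return global_w

# Toy usage: five clients with different data volumes and compute capabilities.
rng = np.random.default_rng(0)
true_w = rng.normal(size=3)
clients = []
for n in (50, 80, 120, 60, 200):
    X = rng.normal(size=(n, 3))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))
capabilities = np.array([0.5, 1.0, 2.0, 4.0, 8.0])  # higher means a faster device
print(fed_avg(clients, capabilities, dim=3))

In this toy run, only the devices fast enough to meet the round budget participate, which mirrors the heterogeneity problem the paper targets: a selection policy trades per-round latency against the breadth of data contributing to the global model.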




Author information

Correspondence to Shuang Gu or Feng Lyu.

Ethics declarations

Conflict of interest statement

We do not have any conflict of interest or competing interests related to this submission.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Deng, Y., Gu, S., Jiao, C. et al. Making resource adaptive to federated learning with COTS mobile devices. Peer-to-Peer Netw. Appl. 15, 1214–1231 (2022). https://doi.org/10.1007/s12083-021-01284-2

