
Towards Communication-Efficient Distributed Background Subtraction

  • Conference paper
Recent Challenges in Intelligent Information and Database Systems (ACIIDS 2022)

Abstract

Road traffic monitoring is one of the essential components of data analysis for urban air pollution prevention. In road traffic monitoring, background subtraction is a critical approach in which moving objects are extracted by separating motion information of interest from the static surroundings, known as the background. To cope with the varied contextual dynamics of natural scenes, supervised models of background subtraction train a convolutional neural network by solving a gradient-based optimization problem on multi-modal video sequences. As video datasets scale up, distributing the model learning across multiple processing elements is a pivotal technique for leveraging the computational power of various devices. However, one of the major challenges in distributed machine learning is communication overhead.
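
The supervised setting sketched above can be made concrete with a short example. The following is a minimal sketch in PyTorch, assuming a hypothetical two-layer convolutional network (`TinyBGSNet`) that classifies each pixel as foreground or background; it illustrates gradient-based training on frame/mask pairs and is not the architecture used in the paper.

```python
# Minimal sketch: supervised background subtraction as gradient-based
# training of a CNN (illustrative only; not the paper's model).
import torch
import torch.nn as nn

class TinyBGSNet(nn.Module):
    """Hypothetical per-pixel foreground/background classifier."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),  # foreground logits
        )

    def forward(self, frames):           # frames: (batch, 3, H, W)
        return self.net(frames)          # logits: (batch, 1, H, W)

model = TinyBGSNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()         # per-pixel binary segmentation loss

# One gradient step on a synthetic batch of frames and ground-truth
# foreground masks (stand-ins for a labeled dataset such as CDnet-2014).
frames = torch.randn(4, 3, 64, 64)
masks = torch.randint(0, 2, (4, 1, 64, 64)).float()
optimizer.zero_grad()
loss = loss_fn(model(frames), masks)
loss.backward()
optimizer.step()
```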

This paper introduces a new communication-efficient distributed framework for background subtraction (CEDFrame) that alleviates the communication overhead of distributed training on video data. The framework combines event-triggered communication on a ring topology among workers with the Partitioned Global Address Space (PGAS) paradigm for asynchronous computation. Using this framework, we investigate how training a background-subtraction model tolerates the trade-off between communication avoidance and model accuracy. Experimental results on an NVIDIA DGX-2 using the CDnet-2014 dataset show that the framework reduces communication overhead by at least 94.71% with a negligible decrease in testing accuracy (at most 2.68%).
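
The abstract names the two ingredients, event-triggered communication on a ring and PGAS-based asynchrony, but not their implementation. The sketch below simulates only the event-triggering idea in plain NumPy: each worker on a ring sends its parameters to its neighbour only when they have drifted past a threshold since the last send. The threshold, the random-walk "training" updates, and the averaging rule are all assumptions for illustration; an actual implementation would run the workers asynchronously on a PGAS runtime such as UPC++ rather than in a single loop.

```python
# Event-triggered parameter exchange on a ring of workers (simulation).
# Threshold, update rule, and averaging are illustrative assumptions.
import numpy as np

N_WORKERS, DIM, THRESHOLD, STEPS = 4, 8, 0.5, 100
rng = np.random.default_rng(0)

params = [np.zeros(DIM) for _ in range(N_WORKERS)]
last_sent = [p.copy() for p in params]    # snapshot at the last send
messages_sent = 0

for step in range(STEPS):
    for w in range(N_WORKERS):
        # Local "gradient step" (a random walk stands in for SGD on video).
        params[w] += 0.1 * rng.standard_normal(DIM)

        # Event trigger: communicate only when the parameters have drifted
        # far enough since the last send, instead of on every iteration.
        if np.linalg.norm(params[w] - last_sent[w]) > THRESHOLD:
            neighbour = (w + 1) % N_WORKERS          # ring topology
            # The neighbour averages in the received parameters (a
            # one-sided put in a PGAS setting; plain assignment here).
            params[neighbour] = 0.5 * (params[neighbour] + params[w])
            last_sent[w] = params[w].copy()
            messages_sent += 1

dense = STEPS * N_WORKERS                  # messages if sent every step
print(f"sent {messages_sent}/{dense} messages "
      f"({100 * (1 - messages_sent / dense):.1f}% avoided)")
```

Raising THRESHOLD avoids more messages at the cost of staler neighbour views, which is precisely the communication/accuracy trade-off the paper quantifies.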

Supported by EEA grants (project HAPADS) and the Research Council of Norway (projects eX3 and DAO). The evaluation was partly performed on resources provided by UNINETT Sigma2 (project NN9342K).



Author information

Correspondence to Phuong Hoai Ha.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Phan, H.N., Ha, S.V.U., Ha, P.H. (2022). Towards Communication-Efficient Distributed Background Subtraction. In: Szczerbicki, E., Wojtkiewicz, K., Nguyen, S.V., Pietranik, M., Krótkiewicz, M. (eds) Recent Challenges in Intelligent Information and Database Systems. ACIIDS 2022. Communications in Computer and Information Science, vol 1716. Springer, Singapore. https://doi.org/10.1007/978-981-19-8234-7_38


  • DOI: https://doi.org/10.1007/978-981-19-8234-7_38


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-8233-0

  • Online ISBN: 978-981-19-8234-7

  • eBook Packages: Computer Science, Computer Science (R0)
