Abstract
Road traffic monitoring is an essential component of data analysis for urban air pollution prevention. In road traffic monitoring, background subtraction is a critical approach in which moving objects are extracted by separating motion information of interest from the static surroundings, known as the background. To cope with the varying contextual dynamics of natural scenes, supervised background subtraction models solve a gradient-based optimization problem on multi-modal video sequences by training a convolutional neural network. As video datasets scale up, distributing the model training across multiple processing elements becomes a pivotal technique for leveraging the computational power of many devices. However, one of the major challenges in distributed machine learning is communication overhead.
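The classical intuition behind background subtraction (predating the CNN-based models discussed in this paper) can be sketched with a simple running-average background model. The functions, thresholds, and toy frames below are illustrative assumptions for exposition only, not the paper's method:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    # Running-average background model (a classic baseline, not the
    # paper's CNN approach): blend the new frame into the background.
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, threshold=25.0):
    # Pixels deviating from the background estimate beyond a threshold
    # are labelled as moving foreground.
    return np.abs(frame - bg) > threshold

# Toy grayscale scene: a static 50-valued background with one bright
# 2x2 "moving object" patch in the current frame.
bg = np.full((8, 8), 50.0)
frame = bg.copy()
frame[2:4, 2:4] = 200.0

mask = foreground_mask(bg, frame)
print(mask.sum())  # → 4 foreground pixels (the 2x2 patch)
bg = update_background(bg, frame)  # background slowly absorbs the scene
```

Supervised CNN models replace the hand-tuned threshold and update rule with learned parameters, which is what makes gradient-based distributed training relevant here.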
This paper introduces CEDFrame, a new communication-efficient distributed framework for background subtraction that alleviates the communication overhead of distributed training on video data. The framework combines event-triggered communication on a ring topology among workers with the Partitioned Global Address Space (PGAS) paradigm for asynchronous computation. Using this framework, we investigate how training a background subtraction model tolerates the trade-off between communication avoidance and model accuracy. Experimental results on an NVIDIA DGX-2 with the CDnet-2014 dataset show that the framework reduces communication overhead by at least 94.71% while incurring a negligible decrease in testing accuracy (at most 2.68%).
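The event-triggered idea can be illustrated with a toy simulation: workers on a ring re-transmit their parameters to neighbours only when the parameters have drifted beyond a threshold since the last send. This is only a sketch under assumed trigger, step-size, and averaging choices; it is not the actual CEDFrame/PGAS implementation, whose trigger criterion may differ:

```python
import numpy as np

def event_triggered_send(params, last_sent, threshold):
    # Fire the trigger only when parameters drift beyond `threshold`
    # since the last transmission (hypothetical criterion).
    if np.linalg.norm(params - last_sent) > threshold:
        return True, params.copy()
    return False, last_sent

def simulate_ring(num_workers=4, steps=50, threshold=0.5, seed=0):
    """Toy event-triggered parameter averaging on a ring topology.

    Each worker minimises a simple quadratic with noisy gradient steps
    and averages with the last *received* copies of its two neighbours,
    transmitting only when its trigger fires. Returns (messages sent,
    baseline message count if every step communicated).
    """
    rng = np.random.default_rng(seed)
    params = [rng.normal(size=3) for _ in range(num_workers)]
    last_sent = [p.copy() for p in params]
    sent = 0
    for _ in range(steps):
        for i in range(num_workers):
            grad = params[i] + 0.1 * rng.normal(size=3)  # grad of ||x||^2/2, plus noise
            params[i] = params[i] - 0.1 * grad
            fired, last_sent[i] = event_triggered_send(
                params[i], last_sent[i], threshold)
            sent += fired
        # Neighbours see stale copies between triggers; averaging still
        # pulls the workers toward consensus.
        params = [(params[i]
                   + last_sent[(i - 1) % num_workers]
                   + last_sent[(i + 1) % num_workers]) / 3.0
                  for i in range(num_workers)]
    return sent, num_workers * steps

sent, baseline = simulate_ring()
print(f"messages sent: {sent} / {baseline}")
```

The communication saving comes precisely from the steps where no worker's trigger fires; the paper's reported 94.71% reduction is the analogous ratio measured on real training.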
Supported by EEA grants (project HAPADS) and the Research Council of Norway (projects eX3 and DAO). The evaluation was partly performed on resources provided by UNINETT Sigma2 (project NN9342K).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Phan, H.N., Ha, S.V.U., Ha, P.H. (2022). Towards Communication-Efficient Distributed Background Subtraction. In: Szczerbicki, E., Wojtkiewicz, K., Nguyen, S.V., Pietranik, M., Krótkiewicz, M. (eds) Recent Challenges in Intelligent Information and Database Systems. ACIIDS 2022. Communications in Computer and Information Science, vol 1716. Springer, Singapore. https://doi.org/10.1007/978-981-19-8234-7_38
Print ISBN: 978-981-19-8233-0
Online ISBN: 978-981-19-8234-7