Abstract
Background subtraction, although a well-established field, still requires significant research effort to tackle unsolved challenges and to accelerate progress toward a generalized moving object detection framework for real-time applications. The performance of subsequent steps in higher-level video analysis tasks depends heavily on the quality of background subtraction. Recent years have witnessed remarkable performance by deep neural networks for background subtraction, and deep learning has paved the way for countering the major challenges in this area. The fusion of multiple features has also improved conventional background subtraction methods. In this context, we provide a comprehensive review of conventional as well as recent developments in background subtraction to analyze the successes and open challenges in this field. First, this paper gives an overview of the background subtraction process along with its challenges and the benchmark video datasets released for evaluation purposes. Then, we briefly summarize background subtraction methods and report a comparison of the most promising state-of-the-art algorithms. Moreover, we investigate some of the recent methods in depth to find out how they achieve their reported performances. Finally, we conclude with the shortcomings of current developments and outline promising research directions for background subtraction.
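To make the process the abstract refers to concrete, the following is a minimal illustrative sketch (not any specific method from the surveyed literature): a pixel-wise running-average background model with a fixed foreground threshold, written in NumPy. The function name, parameters, and the synthetic frames are our own assumptions for illustration only.

```python
import numpy as np

def subtract_background(frames, alpha=0.05, threshold=30):
    """Pixel-wise running-average background model (illustrative sketch).

    For each frame, pixels whose absolute difference from the current
    background estimate exceeds `threshold` are marked foreground (255);
    the background is then updated as an exponential moving average.
    """
    frames = [np.asarray(f, dtype=np.float32) for f in frames]
    background = frames[0].copy()  # bootstrap the model with the first frame
    masks = []
    for frame in frames[1:]:
        diff = np.abs(frame - background)
        mask = (diff > threshold).astype(np.uint8) * 255
        # Update the model only at background pixels, so foreground
        # objects are not absorbed into the background estimate.
        bg = mask == 0
        background[bg] = (1 - alpha) * background[bg] + alpha * frame[bg]
        masks.append(mask)
    return masks

# Synthetic demo: a static 8x8 scene, then a bright 2x2 "object" appears.
static = np.full((8, 8), 100, dtype=np.uint8)
moving = static.copy()
moving[2:4, 2:4] = 200
masks = subtract_background([static, static, moving])
print(int(masks[1].sum() // 255))  # number of foreground pixels -> 4
```

Real systems replace this fixed-threshold model with the adaptive statistical, fuzzy, subspace, or deep models surveyed below, precisely because a single global threshold fails under illumination change, dynamic backgrounds, and camera jitter.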
Availability of data and material
The datasets analyzed during the current study are available from the public data repository at http://www.changedetection.net/
Code availability
Software application – BGSLibrary (https://github.com/andrewssobral/bgslibrary).
References
del Postigo, C.G., Torres, J., Menéndez, J.M.: Vacant parking area estimation through background subtraction and transience map analysis. IET Intel. Transp. Syst. 9(9), 835–841 (2015)
Muniruzzaman, S., Haque, N., Rahman, F., Siam, M., Musabbir, R., Hadiuzzaman, M., Hossain, S.: Deterministic algorithm for traffic detection in free-flow and congestion using video sensor. J. Built. Environ. Technol. Eng. 1, 111–130 (2016)
Penciuc, D., El Baf, F., Bouwmans, T.: Comparison of background subtraction methods for an interactive learning space. NETTIES 2006 (2006)
Zhang, X., Tian, Y., Huang, T., Dong, S., Gao, W.: Optimizing the hierarchical prediction and coding in HEVC for surveillance and conference videos with background modeling. IEEE Trans. Image Process. 23(10), 4511–4526 (2014)
Bansod, S.D., Nandedkar, A.V.: Crowd anomaly detection and localization using histogram of magnitude and momentum. Vis. Comput. 36(3), 609–620 (2020)
Mukherjee, S., Gil, S., Ray, N.: Unique people count from monocular videos. Vis. Comput. 31(10), 1405–1417 (2015)
Huang, H., Fang, X., Ye, Y., Zhang, S., Rosin, P.L.: Practical automatic background substitution for live video. Comput. Vis. Media 3(3), 273–284 (2017)
Tamás, B.: Detecting and analyzing rowing motion in videos. In: BME Scientific Student Conference (pp. 1–29) (2016)
Zivkovic, Z., Van Der Heijden, F.: Efficient adaptive density estimation per image pixel for the task of background subtraction. Pattern Recogn. Lett. 27(7), 773–780 (2006)
Huang, W., Zeng, Q., Chen, M.: Motion characteristics estimation of animals in video surveillance. In: Proceedings of the 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC) (pp. 1098–1102). IEEE (2017)
Giraldo-Zuluaga, J. H., Salazar, A., Gomez, A., Diaz-Pulido, A.: Automatic recognition of mammal genera on camera-trap images using multi-layer robust principal component analysis and mixture neural networks (2017)
Yang, Y., Yang, J., Liu, L., Wu, N.: High-speed target tracking system based on a hierarchical parallel vision processor and gray-level LBP algorithm. IEEE Trans. Syst. Man Cybern. Syst. 47(6), 950–964 (2016)
Hadi, R.A., George, L.E., Mohammed, M.J.: A computationally economic novel approach for real-time moving multi-vehicle detection and tracking toward efficient traffic surveillance. Arab. J. Sci. Eng. 42(2), 817–831 (2017)
Choudhury, S.K., Sa, P.K., Bakshi, S., Majhi, B.: An evaluation of background subtraction for object detection vis-a-vis mitigating challenging scenarios. IEEE Access 4, 6133–6150 (2016)
Chapel, M.N., Bouwmans, T.: Moving objects detection with a moving camera: a comprehensive review. Comput. Sci. Rev. 38, 100310 (2020)
Bouwmans, T.: Traditional and recent approaches in background modeling for foreground detection: an overview. Comput. Sci. Rev. 11, 31–66 (2014)
Sobral, A., Vacavant, A.: A comprehensive review of background subtraction algorithms evaluated with synthetic and real videos. Comput. Vis. Image Understand. 122, 4–21 (2014)
Maddalena, L., Petrosino, A.: Background subtraction for moving object detection in RGBD data: A survey. J. Imag. 4(5), 71 (2018)
Kalsotra, R., Arora, S.: A comprehensive survey of video datasets for background subtraction. IEEE Access 7, 59143–59171 (2019)
Bouwmans, T., Silva, C., Marghes, C., Zitouni, M.S., Bhaskar, H., Frelicot, C.: On the role and the importance of features for background modeling and foreground detection. Comput. Sci. Rev. 28, 26–91 (2018)
Bouwmans, T., Javed, S., Sultana, M., Jung, S.K.: Deep neural network concepts for background subtraction: A systematic review and comparative evaluation. Neural Netw. 117, 8–66 (2019)
Bouwmans, T., Garcia-Garcia, B.: Background subtraction in real applications: challenges, current models and future directions (2019)
Kim, H., Sakamoto, R., Kitahara, I., Toriyama, T., Kogure, K.: Robust foreground extraction technique using Gaussian family model and multiple thresholds. In: Asian Conference on Computer Vision (pp. 758–768). Springer, Berlin (2007).
Allili, M. S., Bouguila, N., Ziou, D.: A robust video foreground segmentation by using generalized gaussian mixture modeling. In: Fourth Canadian Conference on Computer and Robot Vision (CRV'07) (pp. 503–509). IEEE (2007)
Lin, H. H., Liu, T. L., Chuang, J. H.: A probabilistic SVM approach for background scene initialization. In: Proceedings of the International Conference on Image Processing (Vol. 3, pp. 893–896). IEEE (2002)
Han, B., Davis, L.S.: Density-based multifeature background subtraction with support vector machine. IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 1017–1023 (2011)
Maddalena, L., Petrosino, A.: A self-organizing approach to background subtraction for visual surveillance applications. IEEE Trans. Image Process. 17(7), 1168–1177 (2008)
Maddalena, L., Petrosino, A.: Self-organizing background subtraction using color and depth data. Multimedia Tools Appl. 78(9), 11927–11948 (2019)
Kim, W., Kim, C.: Background subtraction for dynamic texture scenes using fuzzy color histograms. IEEE Signal Process. Lett. 19(3), 127–130 (2012)
Candès, E.J., Li, X., Ma, Y., Wright, J.: Robust principal component analysis? J. ACM (JACM) 58(3), 1–37 (2011)
Barnich, O., Van Droogenbroeck, M.: ViBe: a universal background subtraction algorithm for video sequences. IEEE Trans. Image Process. 20(6), 1709–1724 (2010)
Hofmann, M., Tiefenbacher, P., Rigoll, G.: Background segmentation with feedback: the pixel-based adaptive segmenter. In: Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (pp. 38–43). IEEE (2012)
Braham, M., Van Droogenbroeck, M.: Deep background subtraction with scene-specific convolutional neural networks. In: Proceedings of the 2016 International Conference on Systems, Signals and Image Processing (IWSSIP) (pp. 1–4). IEEE (2016)
LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
Vaswani, N., Bouwmans, T., Javed, S., Narayanamurthy, P.: Robust subspace learning: robust PCA, robust subspace tracking, and robust subspace recovery. IEEE Signal Process. Mag. 35(4), 32–55 (2018)
Bouwmans, T., Sobral, A., Javed, S., Jung, S.K., Zahzah, E.H.: Decomposition into low-rank plus additive matrices for background/foreground separation: A review for a comparative evaluation with a large-scale dataset. Comput. Sci. Rev. 23, 1–71 (2017)
Komagal, E., Yogameena, B.: Foreground segmentation with PTZ camera: a survey. Multimedia Tools Appl. 77(17), 22489–22542 (2018)
Kim, W., Jung, C.: Illumination-invariant background subtraction: Comparative review, models, and prospects. IEEE Access 5, 8369–8384 (2017)
Bouwmans, T., Maddalena, L., Petrosino, A.: Scene background initialization: A taxonomy. Pattern Recogn. Lett. 96, 3–11 (2017)
Jodoin, P.M., Maddalena, L., Petrosino, A., Wang, Y.: Extensive benchmark and survey of modeling methods for scene background initialization. IEEE Trans. Image Process. 26(11), 5244–5256 (2017)
El Baf, F., Bouwmans, T., Vachon, B.: A fuzzy approach for background subtraction. In: Proceedings of the 2008 15th IEEE International Conference on Image Processing (pp. 2648–2651). IEEE (2008)
Lee, D. S.: Improved adaptive mixture learning for robust video background modeling. In: MVA (pp. 443–446) (2002)
Pnevmatikakis, A., Polymenakos, L.: 2D person tracking using Kalman filtering and adaptive background learning in a feedback loop. In: Proceedings of the International Evaluation Workshop on Classification of Events, Activities and Relationships (pp. 151–160). Springer, Berlin (2006)
Magee, D.R.: Tracking multiple vehicles using foreground, background and motion models. Image Vis. Comput. 22(2), 143–155 (2004)
Toyama, K., Krumm, J., Brumitt, B., Meyers, B.: Wallflower: principles and practice of background maintenance. In: Proceedings of the Seventh IEEE International Conference on Computer Vision (Vol. 1, pp. 255–261). IEEE (1999)
Wang, Y., Jodoin, P. M., Porikli, F., Konrad, J., Benezeth, Y., Ishwar, P.: CDnet 2014: an expanded change detection benchmark dataset. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 387–394) (2014)
Vacavant, A., Chateau, T., Wilhelm, A., Lequièvre, L.: A benchmark dataset for outdoor foreground/background extraction. In: Asian Conference on Computer Vision (pp. 291–300). Springer, Berlin (2012)
Cuevas, C., Yáñez, E.M., García, N.: Labeled dataset for integral evaluation of moving object detection algorithms: LASIESTA. Comput. Vis. Image Understand. 152, 103–117 (2016)
Li, C., Wang, X., Zhang, L., Tang, J., Wu, H., Lin, L.: Weighted low-rank decomposition for robust grayscale-thermal foreground detection. IEEE Trans. Circuits Syst. Video Technol. 27(4), 725–738 (2017)
Maddalena, L., Petrosino, A.: Towards benchmarking scene background initialization. In: International Conference on Image Analysis and Processing (pp. 469–476). Springer, Cham (2015)
Roy, S.D., Bhowmik, M.K.: Annotation and benchmarking of a video dataset under degraded complex atmospheric conditions and its visibility enhancement analysis for moving object detection. IEEE Trans. Circuits Syst. Video Technol. (2020)
Sultana, M., Jung, S. K.: Illumination invariant foreground object segmentation using ForeGANs (2019)
Airport Ground Video Surveillance Benchmark. http://www.agvs-caac.com/. Accessed 18 Aug 2020
Moyà-Alcover, G., Elgammal, A., Jaume-i-Capó, A., Varona, J.: Modeling depth for nonparametric foreground segmentation using RGBD devices. Pattern Recogn. Lett. 96, 76–85 (2017)
Camplani, M., Maddalena, L., Alcover, G. M., Petrosino, A., Salgado, L.: A benchmarking framework for background subtraction in RGBD videos. In: International Conference on Image Analysis and Processing (pp. 219–229). Springer, Cham (2017)
Li, S., Florencio, D., Li, W., Zhao, Y., Cook, C.: A fusion framework for camouflaged moving foreground detection in the wavelet domain. IEEE Trans. Image Process. 27(8), 3918–3930 (2018)
Yao, G., Lei, T., Zhong, J., Jiang, P., Jia, W.: Comparative evaluation of background subtraction algorithms in remote scene videos captured by MWIR sensors. Sensors 17(9), 1945 (2017)
Bloisi, D. D., Iocchi, L., Pennisi, A., Tombolini, L.: ARGOS-Venice boat classification. In: Proceedings of the 2015 12th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (pp. 1–6). IEEE (2015)
Camplani, M., Salgado, L.: Background foreground segmentation with RGB-D Kinect data: an efficient combination of classifiers. J. Vis. Commun. Image Represen. 25(1), 122–136 (2014)
Benezeth, Y., Sidibé, D., Thomas, J. B.: Background subtraction with multispectral video sequences (2014)
Wu, Z., Fuller, N., Theriault, D., Betke, M.: A thermal infrared video benchmark for visual analysis. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 201–208) (2014)
Abdelhedi, S., Wali, A., Alimi, A.M.: Toward a kindergarten video surveillance system (KVSS) using background subtraction based Type-2 FGMM model. In: Proceedings of the 2014 6th International Conference of Soft Computing and Pattern Recognition (SoCPaR) (pp. 440–446). IEEE (2014)
Fernandez-Sanchez, E.J., Diaz, J., Ros, E.: Background subtraction based on color and depth using active sensors. Sensors 13(7), 8895–8915 (2013)
Fernandez-Sanchez, E.J., Rubio, L., Diaz, J., Ros, E.: Background subtraction model based on color and depth cues. Mach. Vis. Appl. 25(5), 1211–1225 (2014)
Akula, A., Ghosh, R., Kumar, S., Sardana, H.K.: Moving target detection in thermal infrared imagery using spatiotemporal information. JOSA A 30(8), 1492–1501 (2013)
Gallego Vila, J.: Parametric region-based foreground segmentation in planar and multi-view sequences (2013)
Goyette, N., Jodoin, P.M., Porikli, F., Konrad, J., Ishwar, P.: Changedetection.net: a new change detection benchmark dataset. In: Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (pp. 1–8). IEEE (2012)
Brutzer, S., Höferlin, B., Heidemann, G.: Evaluation of background subtraction techniques for video surveillance. In: CVPR 2011 (pp. 1937–1944). IEEE (2011)
Singh, S., Velastin, S. A., Ragheb, H.: Muhavi: a multicamera human action video dataset for the evaluation of action recognition methods. In: Proceedings of the 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (pp. 48–55). IEEE (2010)
Tiburzi, F., Escudero, M., Bescós, J., Martínez, J. M.: A ground truth for motion-based video-object segmentation. In: Proceedings of the 2008 15th IEEE International Conference on Image Processing (pp. 17–20). IEEE (2008)
Laboratory for Image and Media Understanding. http://limu.ait.kyushu-u.ac.jp/dataset/en/. Accessed 5 Aug 2020
Mahadevan, V., Vasconcelos, N.: Background subtraction in highly dynamic scenes. In: Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition (pp. 1–6). IEEE (2008)
SZTAKI Surveillance Benchmark Set. http://web.eee.sztaki.hu/~bcsaba/FgShBenchmark.htm. Accessed 5 Aug 2020
Davis, J.W., Sharma, V.: Background-subtraction using contour-based fusion of thermal and visible imagery. Comput. Vis. Image Understand. 106(2–3), 162–182 (2007)
Home Office Scientific Development Branch: Imagery library for intelligent detection systems (i-LIDS). In: Proceedings of the 2006 IET Conference on Crime and Security (pp. 445–448). IET (2006)
Calderara, S., Melli, R., Prati, A., Cucchiara, R.: Reliable background suppression for complex scenes. In: Proceedings of the 4th ACM International Workshop on Video Surveillance and Sensor Networks (pp. 211–214) (2006)
Nghiem, A. T., Bremond, F., Thonnat, M., Valentin, V.: ETISEO, performance evaluation for video surveillance systems. In: Proceedings of the 2007 IEEE Conference on Advanced Video and Signal Based Surveillance (pp. 476–481). IEEE (2007)
Sheikh, Y., Shah, M.: Bayesian modeling of dynamic scenes for object detection. IEEE Trans. Pattern Anal. Mach. Intell. 27(11), 1778–1792 (2005)
Li, L., Huang, W., Gu, I.Y.H., Tian, Q.: Statistical modeling of complex backgrounds for foreground object detection. IEEE Trans. Image Process. 13(11), 1459–1472 (2004)
Davis, J. W., Keck, M. A.: A two-stage template approach to person detection in thermal imagery. In: Proceedings of the 2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV/MOTION'05)-Volume 1 (Vol. 1, pp. 364–369). IEEE (2005)
Prati, A., Mikic, I., Trivedi, M.M., Cucchiara, R.: Detecting moving shadows: algorithms and evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 25(7), 918–923 (2003)
Young, D. P., Ferryman, J. M.: Pets metrics: on-line performance evaluation service. In: Proceedings of the 2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance (pp. 317–324). IEEE (2005)
El Baf, F., Bouwmans, T., Vachon, B.: Comparison of background subtraction methods for a multimedia application. In: Proceedings of the 2007 14th International Workshop on Systems, Signals and Image Processing and 6th EURASIP Conference focused on Speech and Image Processing, Multimedia Communications and Services (pp. 385–388). IEEE (2007)
Caltech Camera Traps. https://beerys.github.io/CaltechCameraTraps/. Accessed 8 Aug 2020
Underwater Change Detection. http://underwaterchangedetection.eu/. Accessed 10 Aug 2020
Kavasidis, I., Palazzo, S., Di Salvo, R., Giordano, D., Spampinato, C.: An innovative web-based collaborative platform for video annotation. Multimedia Tools Appl. 70(1), 413–432 (2014)
Burgos-Artizzu, X. P., Dollár, P., Lin, D., Anderson, D. J., Perona, P.: Social behavior recognition in continuous video. In: Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (pp. 1322–1329). IEEE (2012)
Balcilar, M., Amasyali, M.F., Sonmez, A.C.: Moving object detection using Lab2000HL color space with spatial and temporal smoothing. Appl. Math. Inform. Sci. 8(4), 1755 (2014)
Romero, J.D., Lado, M.J., Mendez, A.J.: A background modeling and foreground detection algorithm using scaling coefficients defined with a color model called lightness-red-green-blue. IEEE Trans. Image Process. 27(3), 1243–1258 (2017)
Suhr, J.K., Jung, H.G., Li, G., Kim, J.: Mixture of Gaussians-based background subtraction for Bayer-pattern image sequences. IEEE Trans. Circuits Syst. Video Technol. 21(3), 365–370 (2010)
Heikkila, M., Pietikainen, M.: A texture-based method for modeling the background and detecting moving objects. IEEE Trans. Pattern Anal. Mach. Intell. 28(4), 657–662 (2006)
Du, X., Qin, G.: Foreground detection in surveillance videos via a hybrid local texture based method. Int. J. Smart Sens. Intell. Syst. 9, 4 (2016)
Vasamsetti, S., Mittal, N., Neelapu, B.C., Sardana, H.K.: 3D local spatio-temporal ternary patterns for moving object detection in complex scenes. Cogn. Comput. 11(1), 18–30 (2019)
Rivera, A.R., Murshed, M., Kim, J., Chae, O.: Background modeling through statistical edge-segment distributions. IEEE Trans. Circuits Syst. Video Technol. 23(8), 1375–1387 (2013)
Roy, K., Kim, J., Iqbal, M. T. B., Makhmudkhujaev, F., Ryu, B., Chae, O.: An adaptive fusion scheme of color and edge features for background subtraction. In: Proceedings of the 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (pp. 1–6). IEEE (2017)
Tang, P., Gao, L., Liu, Z.: Salient moving object detection using stochastic approach filtering. In: Fourth International Conference on Image and Graphics (ICIG 2007) (pp. 530–535). IEEE (2007)
Dou, J., Li, J.: Modeling the background and detecting moving objects based on Sift flow. Optik 125(1), 435–440 (2014)
Huang, J., Zou, W., Zhu, J., Zhu, Z.: Optical flow based real-time moving object detection in unconstrained scenes (2018)
Camplani, M., del Blanco, C.R., Salgado, L., Jaureguizar, F., García, N.: Advanced background modeling with RGB-D sensors through classifiers combination and inter-frame foreground prediction. Mach. Vis. Appl. 25(5), 1197–1210 (2014)
Hati, K.K., Sa, P.K., Majhi, B.: Intensity range based background subtraction for effective object detection. IEEE Signal Process. Lett. 20(8), 759–762 (2013)
Jang, D., Jin, X., Choi, Y., Kim, T.: Background subtraction based on local orientation histogram. In: Asia-Pacific Conference on Computer Human Interaction (pp. 222–231). Springer, Berlin, Heidelberg (2008)
Chiranjeevi, P., Sengupta, S.: Detection of moving objects using multi-channel kernel fuzzy correlogram based background subtraction. IEEE Trans. Cybern. 44(6), 870–881 (2013)
Chiranjeevi, P., Sengupta, S.: Robust detection of moving objects in video sequences through rough set theory framework. Image Vis. Comput. 30(11), 829–842 (2012)
Zhao, P., Zhao, Y., Cai, A.: Hierarchical codebook background model using haar-like features. In: Proceedings of the 2012 3rd IEEE International Conference on Network Infrastructure and Digital Content (pp. 438–442). IEEE (2012)
López-Rubio, F.J., López-Rubio, E.: Features for stochastic approximation based foreground detection. Comput. Vis. Image Understand. 133, 30–50 (2015)
Narayana, M., Hanson, A., Learned-Miller, E.G.: Background subtraction: separating the modeling and the inference. Mach. Vis. Appl. 25(5), 1163–1174 (2014)
Dey, B., Kundu, M.K.: Enhanced macroblock features for dynamic background modeling in H.264/AVC video encoded at low bitrate. IEEE Trans. Circuits Syst. Video Technol. 28(3), 616–625 (2016)
Han, G., Wang, J., Cai, X.: Background subtraction based on three-dimensional discrete wavelet transform. Sensors 16(4), 456 (2016)
Shen, Y., Hu, W., Yang, M., Liu, J., Wei, B., Lucey, S., Chou, C.T.: Real-time and robust compressive background subtraction for embedded camera networks. IEEE Trans. Mobile Comput. 15(2), 406–418 (2015)
Chen, Y., Wang, J., Li, J., Lu, H.: Multiple features based shared models for background subtraction. In: Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP) (pp. 3946–3950). IEEE (2015)
Yan, J., Wang, S., Xie, T., Yang, Y., Wang, J.: Variational Bayesian learning for background subtraction based on local fusion feature. IET Comput. Vis. 10(8), 884–893 (2016)
Chao, G., Ying, W., Xiangyang, W.: Multi-feature robust principal component analysis for video moving object segmentation. J. Image Graph. 18(9), 1124–1132 (2013)
Javed, S., Oh, S.H., Bouwmans, T., Jung, S.K.: Robust background subtraction to global illumination changes via multiple features-based online robust principal components analysis with Markov random field. J. Elect. Imag. 24(4), 043011 (2015)
Giraldo-Zuluaga, J.H., Salazar, A., Gomez, A., Diaz-Pulido, A.: Camera-trap images segmentation using multi-layer robust principal component analysis. Vis. Comput. 35(3), 335–347 (2019)
Singh, R.P., Sharma, P.: Instance-vote-based motion detection using spatially extended hybrid feature space. Vis. Comput. 1, 17 (2020)
Minematsu, T., Shimada, A., Uchiyama, H., Taniguchi, R.I.: Analytics of deep neural network-based background subtraction. J. Imag. 4(6), 78 (2018)
Zhang, Y., Li, X., Zhang, Z., Wu, F., Zhao, L.: Deep learning driven blockwise moving object detection with binary scene modeling. Neurocomputing 168, 454–463 (2015)
García-González, J., Ortiz-de-Lazcano-Lobato, J. M., Luque-Baena, R. M., Molina-Cabello, M. A., López-Rubio, E.: Background modeling for video sequences by stacked denoising autoencoders. In: Proceedings of the Conference of the Spanish Association for Artificial Intelligence (pp. 341–350). Springer, Cham (2018)
García-González, J., Ortiz-de-Lazcano-Lobato, J.M., Luque-Baena, R.M., Molina-Cabello, M.A., López-Rubio, E.: Foreground detection by probabilistic modeling of the features discovered by stacked denoising autoencoders in noisy video sequences. Pattern Recogn. Lett. 125, 481–487 (2019)
Nguyen, T.P., Pham, C.C., Ha, S.V.U., Jeon, J.W.: Change detection by training a triplet network for motion feature extraction. IEEE Trans. Circuits Syst. Video Technol. 29(2), 433–446 (2018)
Shafiee, M. J., Siva, P., Fieguth, P., Wong, A.: Embedded motion detection via neural response mixture background modeling. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (pp. 837–844). IEEE (2016)
Shafiee, M.J., Siva, P., Fieguth, P., Wong, A.: Real-time embedded motion detection via neural response mixture modeling. J. Signal Process. Syst. 90(6), 931–946 (2018)
Shafiee, M.J., Siva, P., Wong, A.: StochasticNet: forming deep neural networks via stochastic connectivity. IEEE Access 4, 1915–1924 (2016)
Lee, B., Hedley, M.: Background estimation for video surveillance. In: Image and Vision Computing New Zealand 2002, (IVCNZ) (pp. 315–320) (2002)
Shi, P., Jones, E. G., Zhu, Q.: Median model for background subtraction in intelligent transportation system. In: Image Processing: Algorithms and Systems III (Vol. 5298, pp. 168–176). International Society for Optics and Photonics (2004)
Wang, L., Tan, T., Ning, H., Hu, W.: Silhouette analysis-based gait recognition for human identification. IEEE Trans. Pattern Anal. Mach. Intell. 25(12), 1505–1518 (2003)
Zhang, S., Yao, H., Liu, S.: Dynamic background subtraction based on local dependency histogram. Int. J. Pattern Recogn. Artif. Intell. 23(07), 1397–1419 (2009)
Kuo, C. M., Chang, W. H., Wang, S. B., Liu, C. S.: An efficient histogram-based method for background modeling. In: Proceedings of the 2009 Fourth International Conference on Innovative Computing, Information and Control (ICICIC) (pp. 480–483). IEEE (2009)
Wren, C.R., Azarbayejani, A., Darrell, T., Pentland, A.P.: Pfinder: real-time tracking of the human body. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 780–785 (1997)
Stauffer, C., Grimson, W. E. L.: Adaptive background mixture models for real-time tracking. In: Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149) (Vol. 2, pp. 246–252). IEEE (1999)
Lin, H.H., Chuang, J.H., Liu, T.L.: Regularized background adaptation: a novel learning rate control scheme for Gaussian mixture modeling. IEEE Trans. Image Process. 20(3), 822–836 (2010)
Zhao, X., Liu, P., Liu, J., Tang, X.: Background subtraction using semantic-based hierarchical GMM. Elect. Lett. 48(14), 825–827 (2012)
Alvar, M., Rodriguez-Calvo, A., Sanchez-Miralles, A., Arranz, A.: Mixture of merged gaussian algorithm using RTDENN. Mach. Vis. Appl. 25(5), 1133–1144 (2014)
Lee, J., Park, M.: An adaptive background subtraction method based on kernel density estimation. Sensors 12(9), 12279–12300 (2012)
Butler, D.E., Bove, V.M., Sridharan, S.: Real-time adaptive foreground/background segmentation. EURASIP J. Adv. Signal Process. 2005(14), 841926 (2005)
Tao, F., Lin-sheng, L., Qi-chuan, T.: A novel adaptive motion detection based on k-means clustering. In: Proceedings of the 2010 3rd International Conference on Computer Science and Information Technology (Vol. 3, pp. 136–140). IEEE (2010)
Xiao, M., Han, C., Kang, X.: A background reconstruction for dynamic scenes. In: Proceedings of the 2006 9th International Conference on Information Fusion (pp. 1–7). IEEE (2006)
Xiao, M., Zhang, L.: A background reconstruction algorithm based on modified basic sequential clustering. In: proceedings of the 2008 ISECS International Colloquium on Computing, Communication, Control, and Management (Vol. 1, pp. 47–51). IEEE (2008)
Kim, K., Chalidabhongse, T.H., Harwood, D., Davis, L.: Real-time foreground–background segmentation using codebook model. Real-Time Imag. 11(3), 172–185 (2005)
Wu, M., Peng, X.: Spatio-temporal context for codebook-based dynamic background subtraction. AEU-Int. J. Elect. Commun. 64(8), 739–747 (2010)
Messelodi, S., Modena, C.M., Segata, N., Zanin, M.: A Kalman filter based background updating algorithm robust to sharp illumination changes. In: International Conference on Image Analysis and Processing (pp. 163–170). Springer, Berlin, Heidelberg (2005)
Chang, R., Gandhi, T., Trivedi, M. M.: Vision modules for a multi-sensory bridge monitoring approach. In: Proceedings. The 7th International IEEE Conference on Intelligent Transportation Systems (IEEE Cat. No. 04TH8749) (pp. 971–976). IEEE (2004)
Yan, L.F., Tu, X.Y.: Background modeling based on Chebyshev approximation. J. Syst. Simul. 20(4), 944–946 (2008)
Karmann, K.P.: Moving object recognition using an adaptive background memory. Proc. Time Vary. Image Process. (1990)
Zhong, J.: Segmenting foreground objects from a dynamic textured background via a robust Kalman filter. In: Proceedings of the Ninth IEEE International Conference on Computer Vision (pp. 44–50). IEEE (2003)
Gao, D., Zhou, J.: Adaptive background estimation for real-time traffic monitoring. In: ITSC 2001. 2001 IEEE Intelligent Transportation Systems. Proceedings (Cat. No. 01TH8585) (pp. 330–333). IEEE (2001)
Scott, J., Pusateri, M. A., Cornish, D.: Kalman filter based video background estimation. In: Proceedings of the 2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009) (pp. 1–7). IEEE (2009)
Mukherjee, D., Wu, Q.M.J.: Real-time video segmentation using Student's t mixture model. Proc. Comput. Sci. 10, 153–160 (2012)
Haines, T.S., Xiang, T.: Background subtraction with Dirichlet process mixture models. IEEE Trans. Pattern Anal. Mach. Intell. 36(4), 670–683 (2013)
Faro, A., Giordano, D., Spampinato, C.: Adaptive background modeling integrated with luminosity sensors and occlusion processing for reliable vehicle detection. IEEE Trans. Intell. Transp. Syst. 12(4), 1398–1412 (2011)
Elguebaly, T., Bouguila, N.: Finite asymmetric generalized Gaussian mixture models learning for infrared object detection. Comput. Vis. Image Understand. 117(12), 1659–1671 (2013)
Lanza, A., Tombari, F., Di Stefano, L.: Accurate and efficient background subtraction by monotonic second-degree polynomial fitting. In: Proceedings of the 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (pp. 376–383). IEEE (2010)
Ding, J., Li, M., Huang, K., Tan, T.: Modeling complex scenes for accurate moving objects segmentation. In: Asian Conference on Computer Vision (pp. 82–94). Springer, Berlin, Heidelberg (2010)
Liu, Z., Huang, K., Tan, T.: Foreground object detection using top-down information based on EM framework. IEEE Trans Image Process. 21(9), 4204–4217 (2012)
St-Charles, P.L., Bilodeau, G.A., Bergevin, R.: SuBSENSE: a universal change detection method with local adaptive sensitivity. IEEE Trans. Image Process. 24(1), 359–373 (2014)
St-Charles, P. L., Bilodeau, G. A., Bergevin, R.: A self-adjusting approach to change detection based on background word consensus. In: Proceedings of the 2015 IEEE winter conference on applications of computer vision (pp. 990–997). IEEE (2015)
El Baf, F., Bouwmans, T., Vachon, B.: Type-2 fuzzy mixture of Gaussians model: application to background modeling. In: Proceedings of the International Symposium on Visual Computing (pp. 772–781). Springer, Berlin, Heidelberg (2008)
El Baf, F., Bouwmans, T., Vachon, B.: Fuzzy statistical modeling of dynamic backgrounds for moving object detection in infrared videos. In: Proceedings of the 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (pp. 60–65). IEEE (2009)
Maddalena, L., Petrosino, A.: Multivalued background/foreground separation for moving object detection. In: Proceedings of the International Workshop on Fuzzy Logic and Applications (pp. 263–270). Springer, Berlin, Heidelberg (2009)
Zhang, H., Xu, D.: Fusing color and texture features for background model. In: Fuzzy Systems and Knowledge Discovery: Third International Conference (FSKD 2006), Xi’an, China (pp. 887–893). Springer, Berlin, Heidelberg (2006)
Azab, M. M., Shedeed, H. A., Hussein, A. S.: A new technique for background modeling and subtraction for motion detection in real-time videos. In: Proceedings of the 2010 IEEE International Conference on Image Processing (pp. 3453–3456). IEEE (2010)
Porikli, F., Wren, C.: Change detection by frequency decomposition: wave-back. In: Proceedings of the Workshop on Image Analysis for Multimedia Interactive Services (2005)
Wren, C.R., Porikli, F.: Waviz: spectral similarity for object detection. In: Proceedings of the IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (pp. 55–61) (2005)
Tezuka, H., Nishitani, T.: A precise and stable foreground segmentation using fine-to-coarse approach in transform domain. In: Proceedings of the 2008 15th IEEE International Conference on Image Processing (pp. 2732–2735). IEEE (2008)
Tezuka, H., Nishitani, T.: Multiresolutional Gaussian mixture model for precise and stable foreground segmentation in transform domain. IEICE Trans. Fundam. Elect. Commun. Comput. Sci. 92(3), 772–778 (2009)
Ji, Z., Wang, W., Lu, K.: Extract foreground objects based on sparse model of spatiotemporal spectrum. In: Proceedings of the 2013 IEEE International Conference on Image Processing (pp. 3441–3445). IEEE (2013)
Jalal, A.S., Singh, V.: A framework for background modelling and shadow suppression for moving object detection in complex wavelet domain. Multimedia Tools Appl. 73(2), 779–801 (2014)
Baltieri, D., Vezzani, R., Cucchiara, R.: Fast background initialization with recursive Hadamard transform. In: Proceedings of the 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (pp. 165–171). IEEE (2010)
Cevher, V., Sankaranarayanan, A., Duarte, M. F., Reddy, D., Baraniuk, R. G., Chellappa, R.: Compressive sensing for background subtraction. In: Proceedings of the European Conference on Computer Vision (pp. 155–168). Springer, Berlin, Heidelberg (2008)
Dikmen, M., Huang, T. S.: Robust estimation of foreground in surveillance videos by sparse error estimation. In: Proceedings of the 2008 19th International Conference on Pattern Recognition (pp. 1–4). IEEE (2008)
Huang, J., Zhang, T., Metaxas, D.: Learning with Structured Sparsity. J. Mach. Learn. Res. 12, 11 (2011)
Zhao, C., Wang, X., Cham, W.K.: Background subtraction via robust dictionary learning. EURASIP J. Image Video Process. 2011, 1–12 (2011)
Huang, X., Wu, F., Huang, P.: Moving-object detection based on sparse representation and dictionary learning. Aasri Proced. 1, 492–497 (2012)
Huang, J., Huang, X., Metaxas, D.: Learning with dynamic group sparsity. In: Proceedings of the 2009 IEEE 12th International Conference on Computer Vision (pp. 64–71). IEEE (2009)
Oliver, N.M., Rosario, B., Pentland, A.P.: A Bayesian computer vision system for modeling human interactions. IEEE Trans. Pattern Anal. Mach. Intell. 22(8), 831–843 (2000)
Jiménez-Hernández, H.: Background subtraction approach based on independent component analysis. Sensors 10(6), 6092–6114 (2010)
Chu, Y., Wu, X., Liu, T., Liu, J.: A basis-background subtraction method using non-negative matrix factorization. In: Proceedings of the Second International Conference on Digital Image Processing (Vol. 7546, p. 75461A). International Society for Optics and Photonics (2010)
Hu, W., Li, X., Zhang, X., Shi, X., Maybank, S., Zhang, Z.: Incremental tensor subspace learning and its applications to foreground segmentation and tracking. Int. J. Comput. Vis. 91(3), 303–327 (2011)
Farcas, D., Bouwmans, T.: Background modeling via a supervised subspace learning. In: Proceedings of the International Conference on Image, Video Processing and Computer Vision, IVPCV (pp. 1–7) (2010)
Farcas, D., Marghes, C., Bouwmans, T.: Background subtraction via incremental maximum margin criterion: a discriminative subspace approach. Mach. Vis. Appl. 23(6), 1083–1101 (2012)
Marghes, C., Bouwmans, T., Vasiu, R.: Background modeling and foreground detection via a reconstructive and discriminative subspace learning approach. In: Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition, IPCV 2012 (2012)
Javed, S., Mahmood, A., Bouwmans, T., Jung, S.K.: Spatiotemporal low-rank modeling for complex scene background initialization. IEEE Trans. Circuits Syst. Video Technol. 28(6), 1315–1329 (2016)
He, J., Balzano, L., Szlam, A.: Incremental gradient on the grassmannian for online foreground and background separation in subsampled video. In: Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (pp. 1568–1575). IEEE (2012)
Chouvardas, S., Kopsinis, Y., Theodoridis, S.: Robust subspace tracking with missing entries: the set-theoretic approach. IEEE Trans. Signal Process. 63(19), 5060–5070 (2015)
Xie, Y., Huang, J., Willett, R.: Change-point detection for high-dimensional time series with missing data. IEEE J. Select. Top. Signal Process. 7(1), 12–27 (2012)
Schofield, A.J., Mehta, P.A., Stonham, T.J.: A system for counting people in video images using neural networks to identify the background scene. Pattern Recogn. 29(8), 1421–1428 (1996)
Tavakkoli, A.: Foreground-background segmentation in video sequences using neural networks. Intell. Syst. Neural Netw. Appl. (2005)
Culibrk, D., Marques, O., Socek, D., Kalva, H., Furht, B.: Neural network approach to background modeling for video object segmentation. IEEE Trans. Neural Netw. 18(6), 1614–1627 (2007)
Luque, R. M., Domínguez, E., Palomo, E. J., Muñoz, J.: A neural network approach for video object segmentation in traffic surveillance. In: Proceedings of the International Conference Image Analysis and Recognition (pp. 151–158). Springer, Berlin, Heidelberg (2008)
Maddalena, L., Petrosino, A.: The 3dSOBS+ algorithm for moving object detection. Comput. Vis. Image Understand. 122, 65–73 (2014)
Ramirez-Quintana, J.A., Chacon-Murguia, M.I.: Self-adaptive SOM-CNN neural system for dynamic object detection in normal and complex scenarios. Pattern Recogn. 48(4), 1137–1149 (2015)
Gemignani, G., Rozza, A.: A robust approach for the background subtraction based on multi-layered self-organizing maps. IEEE Trans. Image Process. 25(11), 5239–5251 (2016)
Chacon-Murguia, M.I., Gonzalez-Duarte, S.: An adaptive neural-fuzzy approach for object detection in dynamic backgrounds for surveillance systems. IEEE Trans. Ind. Elect. 59(8), 3286–3298 (2011)
Palomo, E.J., Domínguez, E., Luque-Baena, R.M., Muñoz, J.: Image compression and video segmentation using hierarchical self-organization. Neural Process. Lett. 37(1), 69–87 (2013)
Bianco, S., Ciocca, G., Schettini, R.: Combination of video change detection algorithms by genetic programming. IEEE Trans. Evolut. Comput. 21(6), 914–928 (2017)
Yan, Y., Zhao, H., Kao, F. J., Vargas, V. M., Zhao, S., Ren, J.: Deep background subtraction of thermal and visible imagery for pedestrian detection in videos. In: Proceedings of the International Conference on Brain Inspired Cognitive Systems (pp. 75–84). Springer, Cham (2018)
Christiansen, P., Nielsen, L.N., Steen, K.A., Jørgensen, R.N., Karstoft, H.: DeepAnomaly: Combining background subtraction and deep learning for detecting obstacles and anomalies in an agricultural field. Sensors 16(11), 1904 (2016)
Sheri, A.M., Rafique, M.A., Jeon, M., Pedrycz, W.: Background subtraction using Gaussian-Bernoulli restricted Boltzmann machine. IET Image Process. 12(9), 1646–1654 (2018)
Rafique, A., Sheri, A. M., Jeon, M.: Background scene modeling for PTZ cameras using RBM. In: Proceedings of The 2014 International Conference on Control, Automation and Information Sciences (ICCAIS 2014) (pp. 165–169). IEEE (2014)
Xu, P., Ye, M., Li, X., Liu, Q., Yang, Y., Ding, J.: Dynamic background learning through deep auto-encoder networks. In: Proceedings of the 22nd ACM International Conference on Multimedia (pp. 107–116) (2014)
Qu, Z., Yu, S., Fu, M.: Motion background modeling based on context-encoder. In: Proceedings of the 2016 Third International Conference on Artificial Intelligence and Pattern Recognition (AIPR) (pp. 1–5). IEEE (2016)
Wang, Y., Luo, Z., Jodoin, P.M.: Interactive deep learning method for segmenting moving objects. Pattern Recogn. Lett. 96, 66–75 (2017)
Lim, L.A., Keles, H.Y.: Foreground segmentation using a triplet convolutional neural network for multiscale feature encoding. arXiv preprint (2018)
Lim, L.A., Keles, H.Y.: Foreground segmentation using convolutional neural networks for multiscale feature encoding. Pattern Recogn. Lett. 112, 256–262 (2018)
Lim, L. A., Keles, H.Y.: Learning multi-scale features for foreground segmentation. Pattern Anal. Appl. 1–12 (2019)
Yang, L., Li, J., Luo, Y., Zhao, Y., Cheng, H., Li, J.: Deep background modeling using fully convolutional network. IEEE Trans. Intell. Transp. Syst. 19(1), 254–262 (2017)
Zeng, D., Zhu, M.: Multiscale fully convolutional network for foreground object detection in infrared videos. IEEE Geosci. Remote Sens. Lett. 15(4), 617–621 (2018)
Babaee, M., Dinh, D.T., Rigoll, G.: A deep convolutional neural network for video sequence background subtraction. Pattern Recogn. 76, 635–649 (2018)
Wang, R., Bunyak, F., Seetharaman, G., Palaniappan, K.: Static and moving object detection using flux tensor with split Gaussian models. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 414–418) (2014)
Ferryman, J., Shahrokni, A.: An overview of the pets 2009 challenge. In: Proceedings of Eleventh IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (pp. 25–30) (2009)
Li, X., Ye, M., Liu, Y., Zhu, C.: Adaptive deep convolutional neural networks for scene-specific object detection. IEEE Trans. Circuits Syst. Video Technol. 29(9), 2538–2550 (2017)
Chen, Y., Wang, J., Zhu, B., Tang, M., Lu, H.: Pixel-wise deep sequence learning for moving object detection. IEEE Trans. Circuits Syst. Video Technol. 29(9), 2567–2579 (2017)
Sakkos, D., Liu, H., Han, J., Shao, L.: End-to-end video background subtraction with 3d convolutional neural networks. Multimedia Tools Appl. 77(17), 23023–23041 (2018)
Vosters, L., Shan, C., Gritti, T.: Real-time robust background subtraction under rapidly changing illumination conditions. Image Vis. Comput. 30(12), 1004–1015 (2012)
Hu, Z., Turki, T., Phan, N., Wang, J.T.: A 3D atrous convolutional long short-term memory network for background subtraction. IEEE Access 6, 43450–43459 (2018)
Gao, Y., Cai, H., Zhang, X., Lan, L., Luo, Z.: Background subtraction via 3D convolutional neural networks. In: Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR) (pp. 1271–1276). IEEE (2018)
Lim, K., Jang, W. D., Kim, C. S.: Background subtraction using encoder-decoder structured convolutional neural network. In: Proceedings of the 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (pp. 1–6). IEEE (2017)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
Sultana, M., Mahmood, A., Javed, S., Jung, S.K.: Unsupervised deep context prediction for background estimation and foreground segmentation. Mach. Vis. Appl. 30(3), 375–395 (2019)
Zheng, W., Wang, K., Wang, F.: Background subtraction algorithm based on Bayesian generative adversarial networks. Acta Automatica Sinica 44(5), 878–890 (2018)
Zheng, W., Wang, K., Wang, F.Y.: A novel background subtraction algorithm based on parallel vision and Bayesian GANs. Neurocomputing 394, 178–200 (2020)
Bakkay, M.C., Rashwan, H.A., Salmane, H., Khoudour, L., Puig, D., Ruichek, Y.: BSCGAN: Deep background subtraction with conditional generative adversarial networks. In: Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP) (pp. 4018–4022). IEEE (2018)
Gracewell, J., John, M.: Dynamic background modeling using deep learning autoencoder network. Multimedia Tools Appl. 79(7), 4639–4659 (2020)
Farnoosh, A., Rezaei, B., Ostadabbas, S.: DeepPBM: deep probabilistic background model estimation from video sequences. arXiv preprint (2019)
Liao, J., Guo, G., Yan, Y., Wang, H.: Multiscale cascaded scene-specific convolutional neural networks for background subtraction. In: Proceedings of the Pacific Rim Conference on Multimedia (pp. 524–533). Springer, Cham (2018)
Mandal, M., Dhar, V., Mishra, A., Vipparthi, S.K.: 3DFR: A swift 3D feature reductionist framework for scene independent change detection. IEEE Signal Process. Lett. 26(12), 1882–1886 (2019)
Mandal, M., Vipparthi, S. K.: Scene independency matters: An empirical study of scene dependent and scene independent evaluation for CNN-based change detection. IEEE Trans. Intell. Transp. Syst. (2020)
Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 234–241). Springer, Cham (2015)
Kim, J.Y., Ha, J.E.: Foreground objects detection using a fully convolutional network with a background model image and multiple original images. IEEE Access 8, 159864–159878 (2020)
Tezcan, M.O., Ishwar, P., Konrad, J.: BSUV-Net: a fully-convolutional neural network for background subtraction of unseen videos. In: Proceedings of the IEEE Winter Conference on Applications of Computer Vision (pp. 2774–2783) (2020)
Tezcan, M.O., Ishwar, P., Konrad, J.: BSUV-Net 2.0: spatio-temporal data augmentations for video-agnostic supervised background subtraction. IEEE Access 9, 53849–53860 (2021)
Elgammal, A., Harwood, D., Davis, L.: Non-parametric model for background subtraction. In: Proceedings of the European Conference on Computer Vision (pp. 751–767). Springer, Berlin, Heidelberg (2000)
Maddalena, L., Petrosino, A.: The SOBS algorithm: What are the limits? In: Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (pp. 21–26). IEEE (2012)
Chen, A.T.Y., Biglari-Abhari, M., Kevin, I., Wang, K.: SuperBE: computationally light background estimation with superpixels. J. Real-Time Image Process. 16(6), 2319–2335 (2019)
Chen, Y.Q., Sun, Z.L., Lam, K.M.: An effective subsuperpixel-based approach for background subtraction. IEEE Trans. Ind. Elect. 67(1), 601–609 (2019)
Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., Süsstrunk, S.: SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 34(11), 2274–2282 (2012)
Xu, Z., Min, B., Cheung, R.C.: A robust background initialization algorithm with superpixel motion detection. Signal Process. Image Commun. 71, 1–12 (2019)
Zeng, D., Chen, X., Zhu, M., Goesele, M., Kuijper, A.: Background subtraction with real-time semantic segmentation. IEEE Access 7, 153869–153884 (2019)
Zhao, H., Qi, X., Shen, X., Shi, J., Jia, J.: ICNet for real-time semantic segmentation on high-resolution images. In: Proceedings of the European Conference on Computer Vision (ECCV) (pp. 405–420) (2018)
Cioppa, A., Braham, M., Van Droogenbroeck, M.: Asynchronous semantic background subtraction. J. Imaging 6(6), 50 (2020)
Giraldo, J.H., Bouwmans, T.: GraphBGS: background subtraction via recovery of graph signals. arXiv preprint (2020)
He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision (pp. 2961–2969) (2017)
Funding
No funds or other support was received.
Author information
Contributions
Rudrika Kalsotra had the idea for the article, performed the literature survey, carried out experiments and data analysis, and drafted the article. Dr. Sakshi Arora critically revised the work.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Kalsotra, R., Arora, S. Background subtraction for moving object detection: explorations of recent developments and challenges. Vis Comput 38, 4151–4178 (2022). https://doi.org/10.1007/s00371-021-02286-0