
Superpixels-Guided Background Modeling Approach for Foreground Detection

  • Conference paper
  • In: Recent Innovations in Computing

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 832)


Abstract

Foreground detection is one of the active research topics in computer vision and a precondition for intelligent video analytics. Background subtraction is often considered a reliable method for extracting the foreground from video sequences. Real-world challenges such as intermittent object motion, dynamic backgrounds, bad weather, shadows, and illumination variations affect the performance of foreground detection. Most conventional background subtraction methods produce faulty detections and fragmented foreground masks in complex scenes. In recent years, deep neural networks have made revolutionary advances in foreground detection. Background subtraction methods based on convolutional neural networks are notable for their accuracy, but their performance drops on unseen videos and they incur a high computational cost. In this paper, we investigate the fusion of superpixels with adaptive background modeling for foreground detection in several challenging videos. A non-iterative clustering method is adopted for superpixel generation because of its computational efficiency. Integrating an effective superpixel algorithm with background subtraction paves the way for improvements in unsupervised background subtraction methods. The proposed method is evaluated on videos from the changedetection.net 2014 dataset to validate its effectiveness on critical challenges of background subtraction.
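To make the abstract's idea concrete, the following is a minimal sketch of how superpixel segmentation can be fused with an adaptive background model. It is not the authors' implementation: SLIC from scikit-image stands in for the non-iterative clustering adopted in the paper, a simple running-average model stands in for the adaptive background model, and the function name `superpixel_guided_foreground` along with all parameter values are illustrative assumptions.

```python
# A minimal sketch, not the authors' implementation: SLIC (scikit-image) stands in
# for the paper's non-iterative clustering, and a running-average model stands in
# for its adaptive background model. All parameter values are assumptions.
import cv2
import numpy as np
from skimage.segmentation import slic

def superpixel_guided_foreground(frames, n_segments=400, alpha=0.05,
                                 diff_thresh=30, vote_ratio=0.5):
    """Yield a binary foreground mask (uint8, 0/255) for each BGR frame."""
    background = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if background is None:
            background = gray.copy()  # initialize the background model
            yield np.zeros(gray.shape, dtype=np.uint8)
            continue

        # Pixel-level evidence: deviation from the adaptive background model.
        pixel_fg = np.abs(gray - background) > diff_thresh

        # Superpixel-level decision: segment the frame and mark a whole
        # superpixel as foreground when enough of its pixels deviate.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        labels = slic(rgb, n_segments=n_segments, start_label=0, channel_axis=-1)
        mask = np.zeros_like(pixel_fg)
        for sp in np.unique(labels):
            region = labels == sp
            if pixel_fg[region].mean() > vote_ratio:
                mask[region] = True

        # Conservative adaptive update: blend the current frame into the
        # background only where no foreground was detected.
        background = np.where(mask, background,
                              (1 - alpha) * background + alpha * gray)
        yield mask.astype(np.uint8) * 255
```

In use, the generator can be driven by frames read with cv2.VideoCapture. Voting at the superpixel level rather than per pixel suppresses isolated noisy responses and yields less fragmented foreground masks, which is the motivation for guiding the background model with superpixels.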




Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Kalsotra, R., Arora, S. (2022). Superpixels-Guided Background Modeling Approach for Foreground Detection. In: Singh, P.K., Singh, Y., Kolekar, M.H., Kar, A.K., Gonçalves, P.J.S. (eds) Recent Innovations in Computing. Lecture Notes in Electrical Engineering, vol 832. Springer, Singapore. https://doi.org/10.1007/978-981-16-8248-3_25
