Saliency detection in video sequences using perceivable change encoded local pattern

  • Original Paper
  • Published in: Signal, Image and Video Processing (2018)

Abstract

The detection of salient objects in video sequences is an active computer vision research topic. One approach is to perform joint segmentation of objects and background: the background scene is learned and modeled, and a pixel is classified as salient if its features do not match the background model. The segmentation process faces many difficulties when the video sequence is captured under dynamic conditions. To tackle these challenges, we propose a novel local ternary pattern for background modeling. The features derived from the local pattern are robust to random noise, intensity scaling and rotation. We also propose a novel scheme for matching a pixel with the background model within a spatiotemporal domain. Furthermore, we devise two feedback mechanisms for maintaining the quality of the result over a long video. First, the background model is updated immediately based on the background subtraction result. Second, the detected object is enhanced by adjusting the segmentation conditions in its proximity via a propagation scheme. We compare our method with state-of-the-art background subtraction algorithms on various video datasets.
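
The abstract does not spell out the exact form of the perceivable change encoded local pattern. As an illustration of the general idea only, the sketch below shows a related scale-invariant local ternary pattern operator in Python together with a toy per-pixel background test; the function names `ternary_pattern_codes` and `salient`, the tolerance `tau`, the 8-neighbour layout and the set-membership test are assumptions for this sketch, not the paper's actual method.

```python
import numpy as np

def ternary_pattern_codes(gray, tau=0.05):
    """Illustrative scale-invariant local ternary pattern for one frame.

    For each interior pixel, every 8-neighbour is encoded as
      0 : within (1 - tau)*centre .. (1 + tau)*centre  (similar)
      1 : brighter than (1 + tau)*centre
      2 : darker  than (1 - tau)*centre
    and the eight ternary digits are packed into one base-3 code.
    The multiplicative tolerance makes the code insensitive to small
    noise and to intensity scaling; it is not the paper's exact
    perceivable-change operator.
    """
    g = gray.astype(np.float64)
    h, w = g.shape
    centre = g[1:h - 1, 1:w - 1]
    upper, lower = (1.0 + tau) * centre, (1.0 - tau) * centre

    # 8-neighbourhood, clockwise from the top-left neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(centre.shape, dtype=np.int64)
    for dy, dx in offsets:
        neigh = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        digit = np.where(neigh > upper, 1, np.where(neigh < lower, 2, 0))
        codes = codes * 3 + digit  # append one ternary digit
    return codes

# Toy background test: a pixel is salient if its current code matches
# none of the codes stored for it in the background model.
def salient(current_code, stored_codes):
    return current_code not in stored_codes
```

In the paper itself, the matching is performed within a spatiotemporal neighbourhood rather than strictly per pixel, and the background model and segmentation conditions are maintained by the two feedback mechanisms described in the abstract.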

Author information

Corresponding author

Correspondence to K. L. Chan.

About this article

Cite this article

Chan, K.L. Saliency detection in video sequences using perceivable change encoded local pattern. SIViP 12, 975–982 (2018). https://doi.org/10.1007/s11760-018-1242-8
