Abstract
The detection of salient objects in video sequences is an active computer vision research topic. One approach is to perform joint segmentation of objects and background: the background scene is learned and modeled, and a pixel is classified as salient if its features do not match the background model. The segmentation process faces many difficulties when the video sequence is captured under dynamic circumstances. To tackle these challenges, we propose a novel local ternary pattern for background modeling. The features derived from the local pattern are robust to random noise, intensity scaling and rotation. We also propose a novel scheme for matching a pixel against the background model within a spatiotemporal domain. Furthermore, we devise two feedback mechanisms for maintaining the quality of the result over a long video. First, the background model is updated immediately based on the background subtraction result. Second, the detected object is enhanced by adjusting the segmentation conditions in its proximity via a propagation scheme. We compare our method with state-of-the-art background subtraction algorithms on various video datasets.
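The abstract does not give the exact encoding, so as illustration only, here is a minimal sketch of a generic scale-invariant local ternary pattern of the kind the abstract alludes to: each of the 8 neighbors of a pixel is encoded as brighter, darker, or unchanged relative to the center, using a multiplicative threshold `tau` (a hypothetical parameter, not taken from the paper) so the code is invariant to intensity scaling. A pixel would then be compared with the background model by counting disagreeing code positions.

```python
import numpy as np

def ternary_codes(patch, tau=0.05):
    """Ternary codes for the 8 neighbors of a 3x3 patch center.

    Each neighbor is encoded relative to the center intensity c:
      +1 if neighbor > c * (1 + tau)   (perceivably brighter)
      -1 if neighbor < c * (1 - tau)   (perceivably darker)
       0 otherwise                     (no perceivable change)
    The multiplicative threshold makes the code robust to
    global intensity scaling, e.g. illumination changes.
    """
    patch = np.asarray(patch, dtype=float)
    c = patch[1, 1]
    neighbors = np.delete(patch.ravel(), 4)  # the 8 pixels around the center
    codes = np.zeros(8, dtype=int)
    codes[neighbors > c * (1 + tau)] = 1
    codes[neighbors < c * (1 - tau)] = -1
    return codes

def pattern_distance(a, b):
    """Number of positions where two ternary patterns disagree."""
    return int(np.count_nonzero(a != b))
```

In a background-subtraction setting, a pixel whose `pattern_distance` to the stored background pattern exceeds some threshold would be marked salient; doubling every intensity in a patch leaves the codes unchanged, which is the scale-invariance property the abstract claims.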
Chan, K.L. Saliency detection in video sequences using perceivable change encoded local pattern. SIViP 12, 975–982 (2018). https://doi.org/10.1007/s11760-018-1242-8