
Robust object tracking via multi-feature adaptive fusion based on stability: contrast analysis

  • Original Article
  • Published in: The Visual Computer

Abstract

Object tracking under complex circumstances is challenging because of background interference, obstacle occlusion, object deformation, and similar factors. Under such conditions, robustly detecting, locating, and analyzing a target through a single-feature representation is difficult. Global features, such as color, are widely used in tracking but may cause the tracker to drift in complex scenes. Local features, such as HOG and SIFT, can precisely represent rigid targets but lack robustness when the object deforms or moves non-rigidly. An effective remedy is the adaptive fusion of multiple features for target representation, and the process of adaptively fusing different features is the key to robust tracking. This study uses a multi-feature joint descriptor (MFJD) and the distance between joint histograms to measure the similarity between a target and its candidate patches. Color and HOG features are fused into a joint representation of the tracked object. This study also proposes a self-adaptive multi-feature fusion strategy that adjusts the joint weight of the fused features according to their stability and contrast measure scores. The mean shift procedure is adopted as the tracking framework over the multi-feature representation. Experimental results demonstrate that the proposed MFJD tracking method effectively handles background clutter, partial occlusion by obstacles, scale changes, and deformations, and performs better than several state-of-the-art methods in real surveillance scenarios.
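The fusion described in the abstract can be sketched in a few lines. The sketch below is an illustration, not the paper's exact formulation: the Bhattacharyya coefficient is a standard histogram-similarity measure in mean shift tracking, while the `adaptive_weight` rule (weights proportional to the product of each feature's stability and contrast scores) is a hypothetical stand-in for the paper's stability–contrast analysis.

```python
import numpy as np

def normalize(h):
    # Normalize a histogram so its bins sum to 1 (empty histograms pass through).
    s = h.sum()
    return h / s if s > 0 else h

def bhattacharyya(p, q):
    # Bhattacharyya coefficient between two normalized histograms;
    # 1.0 means identical distributions, 0.0 means disjoint support.
    return float(np.sum(np.sqrt(p * q)))

def adaptive_weight(stability_scores, contrast_scores):
    # Hypothetical fusion rule: each feature's weight is proportional to
    # the product of its stability and contrast scores, renormalized to sum to 1.
    raw = np.asarray(stability_scores, dtype=float) * np.asarray(contrast_scores, dtype=float)
    return raw / raw.sum()

def fused_similarity(color_t, color_c, hog_t, hog_c, w):
    # Weighted sum of per-feature similarities between target (t) and candidate (c);
    # w = [w_color, w_hog] from adaptive_weight.
    s_color = bhattacharyya(normalize(color_t), normalize(color_c))
    s_hog = bhattacharyya(normalize(hog_t), normalize(hog_c))
    return w[0] * s_color + w[1] * s_hog
```

In a mean shift loop, `fused_similarity` would score each candidate patch around the previous location, with the weights re-estimated each frame so that whichever feature is currently more stable and discriminative dominates the match.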



Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (Grant No. 91320103), the National High Technology Research and Development Program of China (863 Program, Grant No. 2012AA01A301-01), the Research Foundation of Industry–Education–Research Cooperation among Guangdong Province, the Ministry of Education, and the Ministry of Science and Technology, China (Grant No. 2011A091000027), and the Research Foundation of Industry–Education–Research Cooperation of Huizhou, Guangdong (Grant No. 2012C050012012).

Author information

Correspondence to Zhiyong Li.

About this article

Cite this article

Li, Z., He, S. & Hashem, M. Robust object tracking via multi-feature adaptive fusion based on stability: contrast analysis. Vis Comput 31, 1319–1337 (2015). https://doi.org/10.1007/s00371-014-1014-6
