The Visual Computer, Volume 33, Issue 2, pp 235–247

Object tracking by color distribution fields with adaptive hierarchical structure

  • Yawen Wang
  • Hongchang Chen
  • Shaomei Li
  • Jianpeng Zhang
  • Chao Gao
Original Article

Abstract

The essence of visual tracking is to distinguish the target from the background, so how to describe the difference between target and background is a key problem. In this paper, a tracking algorithm based on color distribution fields with an adaptive hierarchical structure is presented to solve this problem. First, multichannel color distribution fields are introduced for appearance modeling; they represent the color distinction between the target and the background. Second, to adapt to the individuality of each target, the hierarchical structure of its color distribution fields is generated via k-means clustering. Third, a weighted multichannel \(L_1\) distance is used to measure the similarity between the candidate region and the template; the weight of each channel is adjusted online according to its discriminative power. Finally, a search strategy based on simulated annealing is proposed to improve search efficiency and reduce the probability of falling into a local optimum. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art tracking algorithms.
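The abstract only outlines these components. As a rough illustration under stated assumptions, the Python sketch below shows how a weighted multichannel \(L_1\) distance between distribution fields and a simulated-annealing position search could be written. The function names, the assumed field layout (channels × height × width × bins), and all defaults (cooling rate, step size, temperature bounds) are illustrative assumptions, not the paper's implementation; in the actual algorithm the channel weights would be adapted online from each channel's discriminative power.

```python
import numpy as np

def weighted_l1_distance(template, candidate, channel_weights):
    """Weighted multichannel L1 distance between two distribution fields.

    Both fields are assumed to be arrays of shape
    (n_channels, height, width, n_bins); channel_weights has shape
    (n_channels,). These shapes are assumptions for illustration only.
    """
    # Per-channel L1 distance between the two fields
    per_channel = np.abs(template - candidate).sum(axis=(1, 2, 3))
    # Weighted combination of the per-channel distances
    return float(np.dot(channel_weights, per_channel))


def anneal_search(score, start, t_init=1.0, t_min=1e-3, alpha=0.9, step=4, rng=None):
    """Minimal simulated-annealing search over integer (x, y) positions.

    `score(pos)` returns the distance of the candidate at `pos` to the
    template (lower is better). The temperature schedule and step size are
    illustrative defaults, not the paper's settings.
    """
    rng = np.random.default_rng() if rng is None else rng
    pos = np.asarray(start, dtype=int)
    best_pos, best_cost = pos.copy(), score(pos)
    cost, t = best_cost, t_init
    while t > t_min:
        cand = pos + rng.integers(-step, step + 1, size=2)  # random neighbour
        cand_cost = score(cand)
        # Always accept improvements; accept worse moves with a probability
        # that decays with temperature, to escape local optima.
        if cand_cost < cost or rng.random() < np.exp((cost - cand_cost) / t):
            pos, cost = cand, cand_cost
            if cost < best_cost:
                best_pos, best_cost = pos.copy(), cost
        t *= alpha  # cool down
    return best_pos, best_cost
```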

Keywords

Object tracking · Distribution field · Appearance modeling · Simulated annealing

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 61379151, 61521003), the Henan Province Science Fund for Distinguished Young Scholars of China (No. 14410051-0001), and the National Key Technology Research and Development Program of the Ministry of Science and Technology of China (No. 2014B-AH30B01).


Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Yawen Wang (1)
  • Hongchang Chen (1)
  • Shaomei Li (1)
  • Jianpeng Zhang (1)
  • Chao Gao (1)

  1. National Digital Switching System Engineering and Technological R&D Center, Zhengzhou, China
