A novel part-based approach to mean-shift algorithm for visual tracking

  • Jae Pil Hwang
  • Jeonghyun Baek
  • Baehoon Choi
  • Euntai Kim (corresponding author)
Regular Papers Robotics and Automation

Abstract

Visual tracking is one of the most important problems in computer vision, and a part-based approach is a promising way to improve tracking performance. In this paper, a novel visual tracking algorithm named part-based mean-shift (PBMS) is presented. Unlike the standard mean-shift (MS), the proposed PBMS divides the target object into multiple parts and tracks the target by tracking each individual part and combining the results. For part-based tracking, the objective function of MS is modified so that the target object is represented as a combination of its parts, and an iterative optimization solution is presented. Further, the proposed PBMS provides a systematic and analytic way to determine the scale of the target bounding box from the perspective of objective-function optimization. Simulations are conducted on several benchmark problems, and the results show that the proposed PBMS outperforms the standard MS.
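The part-based scheme summarized above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' PBMS implementation: it uses a simple grayscale-histogram target model per part, the classical mean-shift candidate weight sqrt(q_u / p_u), and fuses the parts by averaging their displacements. All function names and the averaging fusion rule are assumptions for illustration; the paper's actual objective function and scale-adaptation step are not reproduced here.

```python
import numpy as np

def color_histogram(patch, bins=8):
    # Normalized intensity histogram used as the target model q.
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    hist = hist.astype(float)
    return hist / (hist.sum() + 1e-12)

def mean_shift_step(image, center, half, model, bins=8):
    # One mean-shift iteration: weight each pixel in the candidate
    # window by sqrt(q_u / p_u) for its histogram bin u, then move
    # the window to the weighted centroid of those pixels.
    y, x = int(center[0]), int(center[1])
    window = image[y - half:y + half, x - half:x + half]
    cand = color_histogram(window, bins)                      # candidate p
    # Cast before scaling so uint8 arithmetic cannot overflow.
    idx = (window.astype(np.int64) * bins // 256).clip(0, bins - 1)
    weights = np.sqrt(model[idx] / (cand[idx] + 1e-12))
    w_sum = weights.sum()
    if w_sum < 1e-9:
        return np.asarray(center, dtype=float)  # no model support here
    ys, xs = np.mgrid[y - half:y + half, x - half:x + half]
    return np.array([(weights * ys).sum() / w_sum,
                     (weights * xs).sum() / w_sum])

def track_parts(image, part_centers, half, part_models, iters=10):
    # Part-based sketch: run mean-shift for each part independently,
    # then combine the per-part displacements (here: plain average).
    new_centers = []
    for c0, q in zip(part_centers, part_models):
        c = np.asarray(c0, dtype=float)
        for _ in range(iters):
            nxt = mean_shift_step(image, c, half, q)
            if np.linalg.norm(nxt - c) < 0.5:   # converged
                break
            c = nxt
        new_centers.append(c)
    shifts = [n - np.asarray(c0, float)
              for n, c0 in zip(new_centers, part_centers)]
    return np.mean(shifts, axis=0)  # combined object displacement
```

On a synthetic frame containing a bright square, initializing the part slightly off-center makes the iteration climb toward the square's centroid within a few steps, which is the standard MS behavior each part inherits here.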

Keywords

Mean-shift tracking, part-based approach, scale-adaptive tracking, visual tracking



Copyright information

© Institute of Control, Robotics and Systems and The Korean Institute of Electrical Engineers and Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Jae Pil Hwang (1)
  • Jeonghyun Baek (2)
  • Baehoon Choi (2)
  • Euntai Kim (2, corresponding author)
  1. R&D Division, Hyundai Motor Group, Seoul, Korea
  2. School of Electrical and Electronic Engineering, Yonsei University, Sinchon-dong, Seodaemun-gu, Seoul, Korea
