
Machine Vision and Applications

Volume 20, Issue 3, pp 163–173

Online updating appearance generative mixture model for meanshift tracking

  • Jilin Tu
  • Hai Tao
  • Thomas Huang
Original Paper

Abstract

This paper proposes an appearance generative mixture model based on key frames for meanshift tracking. The meanshift tracking algorithm tracks an object by maximizing the similarity between the histogram in the tracking window and a static histogram acquired at the beginning of tracking; tracking can therefore fail if the appearance of the object varies substantially. In this paper, we assume that the key appearances of the object can be acquired before tracking and that the manifold of the object's appearance can be approximated by a piecewise linear combination of these key appearances in histogram space. The generative process is described by a Bayesian graphical model. An online EM algorithm is proposed to estimate the model parameters from the observed histogram in the tracking window and to update the appearance histogram. We applied this approach to tracking human head motion and inferring head pose simultaneously in videos. Experiments verify that our online histogram generative model, constrained by key appearance histograms, alleviates the drifting problem often encountered in tracking with online updating; that the enhanced meanshift algorithm is capable of tracking objects of varying appearance more robustly and accurately; and that our tracking algorithm can infer additional information such as object pose.
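
The following is a minimal sketch, under stated assumptions, of the idea summarized above: the appearance histogram fed to meanshift is modeled as a convex combination of key-frame histograms whose mixing weights are re-estimated online from the histogram observed in the current tracking window. The function and variable names (normalize, update_appearance, key_histograms, lr), the Bhattacharyya-coefficient responsibilities, and the simple blended weight update are illustrative choices, not the paper's exact online EM derivation or meanshift weight computation.

```python
import numpy as np

def normalize(h):
    """Normalize a histogram so that its bins sum to one."""
    s = h.sum()
    return h / s if s > 0 else h

def update_appearance(observed_hist, key_histograms, weights, lr=0.3):
    """Re-estimate mixture weights from the observed histogram (an EM-like step)
    and return the updated appearance histogram used as the meanshift target.

    observed_hist  : (B,) histogram measured in the current tracking window
    key_histograms : (K, B) key-frame appearance histograms acquired before tracking
    weights        : (K,) current mixing weights (sum to one)
    lr             : learning rate blending old and new weight estimates (assumed)
    """
    observed_hist = normalize(observed_hist)
    # E-step-like responsibilities: how well each key histogram explains the
    # observation, scored here by the Bhattacharyya coefficient (an assumption).
    resp = np.array([np.sum(np.sqrt(observed_hist * normalize(k)))
                     for k in key_histograms])
    resp = resp * weights
    resp = resp / resp.sum()
    # M-step-like update: blend the old weights with the new responsibilities.
    new_weights = (1 - lr) * weights + lr * resp
    new_weights = new_weights / new_weights.sum()
    # The appearance model is the weighted combination of key histograms.
    appearance = normalize(new_weights @ key_histograms)
    return appearance, new_weights

# Usage: three hypothetical key appearances (e.g. frontal, profile, rear view)
# over an 8-bin histogram, and a noisy observation near the second appearance.
rng = np.random.default_rng(0)
keys = np.array([normalize(rng.random(8)) for _ in range(3)])
w = np.ones(3) / 3
obs = keys[1] + 0.05 * rng.random(8)
model_hist, w = update_appearance(obs, keys, w)
print("mixing weights:", np.round(w, 3))
```

Constraining the updated histogram to the span of the key appearances is what limits drift: the model can adapt toward whichever key appearance best explains the current window, but it cannot wander to an arbitrary histogram.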

Keywords

Visual tracking, Object appearance, Meanshift algorithm, Rear view, Static histogram


Supplementary material

ESM 1 (AVI 1,490 kb)

ESM 2 (AVI 3,872 kb)


Copyright information

© Springer-Verlag 2008

Authors and Affiliations

  1. Electrical and Computer Engineering Department, University of Illinois at Urbana-Champaign, Urbana, USA
  2. Department of Computer Engineering, University of California, Santa Cruz, USA
