Automatic Eigentemplate Learning for Sparse Template Tracker

  • Keiji Sakabe
  • Tomoyuki Taguchi
  • Takeshi Shakunaga
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5414)


Automatic eigentemplate learning is discussed for a sparse template tracker. It is known that a sparse template tracker can effectively track a moving target using an eigentemplate when the eigentemplate is appropriately prepared for a motion class or an illumination class. However, it has not been easy to prepare an eigentemplate automatically for arbitrary image sequences. This paper provides a feasible solution to this problem in the framework of sparse template tracking. In the learning phase, the sparse template tracker adaptively tracks a target object in a given image sequence, starting from a template provided in the first image. By selecting a small number of representative and effective images, we can construct an eigentemplate by principal component analysis (PCA). Once the eigentemplate learning is accomplished, the sparse template tracker can work with the eigentemplate instead of an adaptive template. Since the sparse eigentemplate tracker does not require any adaptive tracking, it works more efficiently and effectively on image sequences within the class of learned appearance changes. Experimental results are provided for real-time face tracking with eigentemplates learned for pose changes and for illumination changes, respectively.
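The core learning step described above — building an eigentemplate from a small set of selected template images via PCA — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names, the number of components, and the image-selection step (assumed to have already produced the `templates` array) are all assumptions.

```python
import numpy as np

def learn_eigentemplate(templates, n_components=5):
    """Build an eigentemplate from selected template images via PCA.

    templates: (N, D) array, each row a vectorized template image
               chosen as representative during adaptive tracking.
    Returns the mean template and the top principal components (k, D).
    """
    X = np.asarray(templates, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data: rows of Vt are the principal directions,
    # ordered by decreasing singular value.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = min(n_components, Vt.shape[0])
    return mean, Vt[:k]

def reconstruct(template, mean, basis):
    """Project a candidate template onto the eigentemplate subspace
    and reconstruct it; the residual can serve as a matching score."""
    coeffs = basis @ (template - mean)
    return mean + basis.T @ coeffs
```

In the tracking phase, a candidate region is vectorized, reconstructed in the learned subspace, and the reconstruction residual indicates how well the candidate matches the learned appearance class.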



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Keiji Sakabe¹
  • Tomoyuki Taguchi¹
  • Takeshi Shakunaga¹
  1. Okayama University, Okayama, Japan
