A Tuned Eigenspace Technique for Articulated Motion Recognition

  • M. Masudur Rahman
  • Antonio Robles-Kelly
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3951)


Abstract

In this paper, we introduce a tuned eigenspace technique for classifying human motion. The method presented here overcomes problems related to articulated motion and clothing-texture effects by learning various human motions in terms of their sequential postures in an eigenspace. To cope with the variability inherent to articulated motion, we propose a method for tuning the set of sequential eigenspaces. Once the tuned eigenspaces are at hand, the recognition task becomes a nearest-neighbor search over them. We show how our tuned eigenspace method can be used for real-world and synthetic pose recognition. We also discuss and overcome the clothing-texture problem that arises in real-world data, and propose a background subtraction method so that the technique can be employed in outdoor environments. We provide results on synthetic imagery for a number of human poses and illustrate the utility of the method for human motion recognition.


Keywords: Recognition Rate · Human Motion · Motion Trajectory · Motion Line · Posture Matrix
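To make the pipeline described in the abstract concrete, the following Python sketch learns one eigenspace per motion class from flattened posture silhouettes and classifies a query posture by a nearest-neighbor search over the learnt projections. It is a minimal illustration only: it omits the paper's eigenspace tuning and background subtraction steps, and the function names (learn_eigenspace, classify), the component count and the synthetic data are assumptions made for this example, not part of the original method.

    # Minimal eigenspace + nearest-neighbor sketch (illustrative only; it
    # omits the tuning step and is not the authors' implementation).
    import numpy as np

    def learn_eigenspace(postures, n_components=10):
        """postures: (n_frames, n_pixels) array of flattened posture silhouettes."""
        mean = postures.mean(axis=0)
        centred = postures - mean
        # Principal components of the posture matrix via SVD.
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        basis = vt[:n_components]          # (k, n_pixels) eigen-postures
        coeffs = centred @ basis.T         # training projections (motion trajectory in eigenspace)
        return mean, basis, coeffs

    def classify(query, models):
        """Nearest-neighbor search over the learnt eigenspaces.
        models: dict mapping motion label -> (mean, basis, coeffs)."""
        best_label, best_dist = None, np.inf
        for label, (mean, basis, coeffs) in models.items():
            q = (query - mean) @ basis.T   # project query into this class's eigenspace
            dist = np.linalg.norm(coeffs - q, axis=1).min()
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label

    # Toy usage: two synthetic "motions", each 60 frames of 32x32 silhouettes.
    rng = np.random.default_rng(0)
    walk = rng.random((60, 32 * 32))
    wave = rng.random((60, 32 * 32)) + 0.5
    models = {"walk": learn_eigenspace(walk), "wave": learn_eigenspace(wave)}
    print(classify(walk[10], models))      # expected: "walk"

In the paper, these per-class eigenspaces are additionally tuned to cope with articulation and clothing-texture variability; the sketch shows only the untuned appearance-eigenspace baseline.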





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • M. Masudur Rahman (1)
  • Antonio Robles-Kelly (1)
  1. National ICT Australia, ANU, Australia
