Gait Analysis and Human Motion Tracking

  • Huiyu Zhou
Part of the Studies in Computational Intelligence book series (SCI, volume 332)

Abstract

We present a strategy based on human gait to achieve efficient tracking, recovery of ego-motion and 3-D reconstruction from an image sequence acquired by a single camera attached to a pedestrian. In the first phase, the parameters of the human gait are established by a classical frame-by-frame analysis, using a generalised least squares (GLS) technique. The gait model is non-linear, represented by a truncated Fourier series. In the second phase, this gait model is employed within a “predict-correct” framework using a maximum a posteriori, expectation-maximization (MAP-EM) strategy to obtain robust estimates of the ego-motion and scene structure, while continuously refining the gait model. Experiments on synthetic and real image sequences show that the use of the gait model results in more efficient tracking. This is demonstrated by improved matching and retention of features, and by a reduction in execution time when processing video sequences.
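As a rough illustration of the first phase, the sketch below fits a truncated Fourier series to a one-dimensional gait signal (for example, the vertical oscillation of the head during walking) by weighted least squares. The function names, harmonic count, gait frequency and weighting scheme are illustrative assumptions, not taken from the chapter; the per-sample weights stand in loosely for the full GLS treatment of observation covariance.

```python
import numpy as np

def fourier_design_matrix(t, freq, n_harmonics):
    """Columns: [1, cos(2*pi*k*f*t), sin(2*pi*k*f*t)] for k = 1..n_harmonics."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * freq * t))
        cols.append(np.sin(2 * np.pi * k * freq * t))
    return np.column_stack(cols)

def fit_gait_model(t, y, freq, n_harmonics=3, weights=None):
    """Weighted least-squares fit of a truncated Fourier series gait model.

    `weights` (optional, per sample) is a simplified stand-in for the inverse
    observation covariance used in a full generalised least squares fit.
    """
    X = fourier_design_matrix(t, freq, n_harmonics)
    if weights is None:
        weights = np.ones_like(y)
    w_half = np.sqrt(weights)                      # solve sqrt(W) X b = sqrt(W) y
    coeffs, *_ = np.linalg.lstsq(X * w_half[:, None], y * w_half, rcond=None)
    return coeffs

# Hypothetical example: noisy vertical head displacement, 25 fps, ~2 steps/s.
t = np.arange(0.0, 4.0, 1.0 / 25.0)
y = 0.02 * np.sin(2 * np.pi * 2.0 * t) + 0.005 * np.random.randn(t.size)
coeffs = fit_gait_model(t, y, freq=2.0)
y_pred = fourier_design_matrix(t, 2.0, 3) @ coeffs  # model prediction for later frames
```

In the second phase described above, such a fitted model would supply the "predict" step of the predict-correct loop, with the MAP-EM stage correcting the feature matches and refining the gait parameters.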

Keywords

Feature Point · Gait Analysis · Epipolar Line · Human Gait · Structure From Motion

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Huiyu Zhou, Queen’s University Belfast, Belfast, United Kingdom
