Human body tracking by monocular vision
This article describes a method for tracking complex articulated 3D objects (for example, the human body) in a monocular sequence of perspective images. The objects and their articulations must first be modelled. The method interprets image features as the perspective projections of points of the 3D object model, and uses an iterative Levenberg-Marquardt process to compute the model pose consistent with the analysed image.
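To make the pose-computation step concrete, here is a minimal sketch (not the authors' implementation) of iterative Levenberg-Marquardt pose estimation: the rotation and translation of a known 3D point model are refined so that its perspective projection matches observed 2D image features. The focal length, the cube model, and the synthetic features are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

FOCAL = 800.0  # assumed focal length, in pixels

def rotation(rx, ry, rz):
    """Rotation matrix from three Euler angles (about x, then y, then z)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(pose, model):
    """Perspective projection of the model under pose (rx, ry, rz, tx, ty, tz)."""
    pts = model @ rotation(*pose[:3]).T + pose[3:]
    return FOCAL * pts[:, :2] / pts[:, 2:3]

def residuals(pose, model, features):
    """2D reprojection errors, stacked into one vector for the optimizer."""
    return (project(pose, model) - features).ravel()

# Synthetic example: a unit-cube "model" observed under a known pose.
model = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                 dtype=float)
true_pose = np.array([0.1, -0.2, 0.05, 0.3, -0.1, 5.0])
features = project(true_pose, model)

# Levenberg-Marquardt refinement from a rough initial pose estimate.
fit = least_squares(residuals, x0=[0, 0, 0, 0, 0, 4.0],
                    args=(model, features), method='lm')
```

In the paper the image features come from the analysed image rather than a synthetic model, and the parameter vector also includes the articulation angles of the model, but the least-squares structure is the same.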
The estimated pose is then filtered (Kalman filter) to predict the model pose in the following image of the sequence, and image features are extracted locally around this prediction.
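The prediction step can be sketched as a standard Kalman filter on each pose parameter; a constant-velocity state model, and the noise covariances below, are assumptions for illustration, not the paper's filter design.

```python
import numpy as np

dt = 1.0                                  # one frame step
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition: [value, velocity]
H = np.array([[1.0, 0.0]])                # only the value is measured
Q = 1e-4 * np.eye(2)                      # process noise covariance (assumed)
R = np.array([[1e-2]])                    # measurement noise covariance (assumed)

def predict(x, P):
    """Propagate state and covariance one frame ahead."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with the pose measured in the current image."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track one pose parameter (e.g. a joint angle) growing by 0.1 rad per frame.
x, P = np.zeros(2), np.eye(2)
for k in range(1, 20):
    x, P = predict(x, P)   # predicted pose guides the local feature extraction
    x, P = update(x, P, np.array([0.1 * k]))
```

The predicted state at each frame is what restricts the feature search to a local image region, as described in the abstract.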
Tracking experiments, illustrated in this article by a cycling sequence, demonstrate the validity of the approach.
Key-words: monocular vision, articulated polyhedral model, matching, localization, tracking