Articulated body motion tracking using illumination invariant optical flow

Abstract

We propose a model-based method for tracking articulated objects in monocular video sequences under varying illumination conditions. The method uses optical flow estimates obtained by projecting model textures into the camera images and comparing the projected textures with the recorded image data. An articulated body is modelled as a set of 3D primitives, each carrying a specified texture on its surface. A key step in model-based tracking of 3D objects is estimating the object's pose at each frame. We estimate the optimal pose by minimizing the error between the computed optical flow and the projected 2D velocities of the model textures, using a least-squares method that incorporates the kinematic constraints of the articulated object and a perspective camera model. We test the framework on an articulated robot and present tracking results.
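The abstract only outlines the estimation step, so the following is a minimal sketch of the kind of least-squares pose update it describes, not the authors' implementation. It assumes the illumination-invariant optical flow has already been computed; a common way to obtain such robustness (not necessarily the one used here) is to relax brightness constancy to I(x + u dt, y + v dt, t + dt) ≈ M I(x, y, t) + C, with a slowly varying multiplier M and offset C absorbing illumination changes. The names projection_jacobian, kinematic_jacobians, and estimate_joint_velocities are hypothetical, and the per-point kinematic Jacobians dX_i/dq are assumed to come from the articulated-body model.

    import numpy as np

    def projection_jacobian(point, f):
        # 2x3 Jacobian of the perspective projection (u, v) = f * (X/Z, Y/Z)
        # with respect to the 3D point (X, Y, Z) in camera coordinates.
        X, Y, Z = point
        return (f / Z) * np.array([[1.0, 0.0, -X / Z],
                                   [0.0, 1.0, -Y / Z]])

    def estimate_joint_velocities(points_3d, kinematic_jacobians, observed_flow, f):
        # Solve  min_qdot  sum_i || flow_i - Jp(X_i) @ Jk_i @ qdot ||^2
        #   points_3d           : (N, 3) model texture points in camera coordinates
        #   kinematic_jacobians : N Jacobians dX_i/dq of shape (3, d), taken from
        #                         the articulated-body model (hypothetical interface)
        #   observed_flow       : (N, 2) illumination-invariant flow sampled at the
        #                         projections of the model points
        #   f                   : focal length of the perspective camera
        rows, rhs = [], []
        for X, Jk, flow in zip(points_3d, kinematic_jacobians, observed_flow):
            Jp = projection_jacobian(X, f)   # image velocity per unit 3D velocity
            rows.append(Jp @ Jk)             # (2, d): image flow per unit joint rate
            rhs.append(flow)
        A = np.vstack(rows)                  # (2N, d) stacked flow constraints
        b = np.concatenate(rhs)              # (2N,) stacked flow measurements
        qdot, *_ = np.linalg.lstsq(A, b, rcond=None)
        return qdot                          # joint-rate update for the pose

Because every model texture point contributes two rows to the same linear system, the kinematic constraints couple all points through the shared joint rates qdot, which is what makes the fit a single least-squares estimate over the whole articulated body rather than an independent estimate per point.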

Keywords

Articulated body tracking, illumination invariance, motion tracking

Copyright information

© Institute of Control, Robotics and Systems and The Korean Institute of Electrical Engineers and Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  1. Robot Navigation Group (SAIT), Samsung Electronics Co., Ltd., Nongseo-dong, Giheung-gu, Yongin-si, Gyeonggi-do, Korea
  2. Dept. of Electrical Engineering, Seoul National University of Technology, Seoul, Korea
