Pose and Gaze Estimation in Multi-camera Networks for Non-restrictive HCI

  • Chung-Ching Chang
  • Chen Wu
  • Hamid Aghajan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4796)

Abstract

Multi-camera networks offer the potential for a variety of novel human-centric applications by providing rich visual information. In this paper, face orientation analysis and posture analysis are combined as components of a human-centered interface system that estimates the user’s intentions and region of interest without requiring carried or wearable sensors. In pose estimation, image observations at the cameras are first reduced locally to parametric descriptions, and Particle Swarm Optimization (PSO) is then used to optimize the kinematic chain of the 3D human model. In face analysis, a discrete-time linear dynamical system (LDS), based on the kinematics of the head, combines the local estimates of the user’s gaze angle produced by the cameras and employs spatiotemporal filters to correct inconsistencies. Knowing the user’s intention and region of interest facilitates further interpretation of human behavior, which is key to non-restrictive and intuitive human-centered interfaces. Applications in assisted living, speaker tracking, and gaming can benefit from such unobtrusive interfaces.
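The pose-estimation step described above fits the 3D body model's kinematic chain to parametric observations using PSO. As a hedged illustration only (not the paper's implementation), the sketch below runs a standard PSO over the joint angles of a toy planar two-link chain, minimizing the discrepancy between the chain's forward kinematics and an observed end-effector position. The link lengths, target point, and all PSO parameters are assumptions made for the example.

```python
import numpy as np

# Hypothetical toy setup: a planar 2-joint arm; PSO searches for joint
# angles that best explain an observed end-effector position.
L1, L2 = 1.0, 0.8                      # assumed link lengths
target = np.array([1.2, 0.9])          # assumed observed end-effector position

def forward_kinematics(angles):
    """End-effector position of the 2-link chain for joint angles (a1, a2)."""
    a1, a2 = angles[..., 0], angles[..., 1]
    x = L1 * np.cos(a1) + L2 * np.cos(a1 + a2)
    y = L1 * np.sin(a1) + L2 * np.sin(a1 + a2)
    return np.stack([x, y], axis=-1)

def fitness(angles):
    """Squared error between the model's prediction and the observation."""
    return np.sum((forward_kinematics(angles) - target) ** 2, axis=-1)

def pso(n_particles=40, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard (global-best) PSO over the two joint angles."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-np.pi, np.pi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), fitness(pos)
    g = pbest[np.argmin(pbest_val)].copy()
    g_val = pbest_val.min()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 2))
        # inertia + attraction toward personal and global bests
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        val = fitness(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        if pbest_val.min() < g_val:
            g_val = pbest_val.min()
            g = pbest[np.argmin(pbest_val)].copy()
    return g, g_val

best_angles, best_err = pso()
```

The paper's actual objective compares a full articulated body model against multi-camera parametric descriptions; this example only demonstrates the optimizer's mechanics on a reachable target.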

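For the face-analysis step, the abstract describes a discrete-time LDS on head kinematics that fuses per-camera gaze-angle estimates. The following sketch shows one plausible realization, not the paper's actual model: a constant-velocity state on the gaze angle, with each camera's local estimate folded in through a standard Kalman update. The frame rate, noise covariances, and simulated measurements are all assumptions.

```python
import numpy as np

# Assumed constant-velocity LDS on the gaze angle; each camera observes
# the angle directly, and observations are fused sequentially per frame.
dt = 1.0 / 30.0                          # assumed frame interval
A = np.array([[1.0, dt], [0.0, 1.0]])    # state transition: [angle, ang. vel.]
Q = np.diag([1e-4, 1e-3])                # process noise (assumed)
H = np.array([[1.0, 0.0]])               # cameras observe the angle only
R = np.array([[0.05]])                   # per-camera measurement noise (assumed)

def kalman_fuse(cam_measurements):
    """Fuse per-frame gaze-angle estimates from several cameras.

    cam_measurements: one list of angle estimates (radians) per frame;
    cameras may drop out, leaving a shorter (or empty) list."""
    x = np.zeros(2)                      # initial state [angle, velocity]
    P = np.eye(2)
    track = []
    for zs in cam_measurements:
        # predict with the head-kinematics model
        x = A @ x
        P = A @ P @ A.T + Q
        # sequentially update with each camera's local estimate
        for z in zs:
            y = z - (H @ x)[0]           # innovation
            S = H @ P @ H.T + R
            K = P @ H.T / S[0, 0]        # Kalman gain (scalar measurement)
            x = x + K[:, 0] * y
            P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)

# Example: gaze angle ramps linearly; three noisy cameras per frame.
rng = np.random.default_rng(1)
true = np.linspace(0.0, 0.5, 60)
meas = [[a + rng.normal(0, 0.1) for _ in range(3)] for a in true]
est = kalman_fuse(meas)
```

The paper additionally applies spatiotemporal filters to reject inconsistent local estimates; this sketch only shows the core fusion step.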


Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Chung-Ching Chang (1)
  • Chen Wu (1)
  • Hamid Aghajan (1)

  1. Wireless Sensor Networks Lab, Stanford University, Stanford, CA 94305, USA
