Incremental Perspective Motion Model for Rigid and Non-rigid Motion Separation

  • Tzung-Heng Lai
  • Te-Hsun Wang
  • Jenn-Jier James Lien
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4872)


Motion extraction is an essential task in facial expression analysis because facial expressions typically involve rigid head rotation and non-rigid facial deformation simultaneously. We developed a system that separates non-rigid motion from large rigid motion over an image sequence based on an incremental perspective motion model. Because the parameters of this model not only represent the global rigid motion but also localize the non-rigid motion, it overcomes the limitations of existing methods, namely the affine model and the 8-parameter perspective projection model, under large head rotation angles. In addition, since the gradient descent approach is susceptible to local minima during motion parameter estimation, a multi-resolution approach is applied to optimize the initial parameter values at the coarse level. Finally, experimental results show that our model achieves promising performance in separating non-rigid motion from rigid motion.
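The paper itself provides no code, but two of the ideas named in the abstract can be sketched concretely: the 8-parameter perspective (homography) warp, and the propagation of its parameters between levels of a multi-resolution pyramid in a coarse-to-fine estimation scheme. The sketch below is illustrative only; the function names `warp_perspective` and `propagate_params` are hypothetical and not taken from the paper.

```python
def warp_perspective(params, x, y):
    """Apply the 8-parameter perspective (homography) motion model.

    params = [a0..a7] fills the matrix [[a0, a1, a2],
                                        [a3, a4, a5],
                                        [a6, a7, 1 ]],
    which acts on the homogeneous point (x, y, 1).
    """
    a0, a1, a2, a3, a4, a5, a6, a7 = params
    d = a6 * x + a7 * y + 1.0  # projective denominator
    return (a0 * x + a1 * y + a2) / d, (a3 * x + a4 * y + a5) / d


def propagate_params(params, scale=2.0):
    """Carry homography parameters from a coarse pyramid level to the
    next finer level (fine coords = scale * coarse coords).

    Conjugating by S = diag(scale, scale, 1) rescales the translation
    terms (a2, a5) by `scale` and the projective terms (a6, a7) by
    1/scale; the linear 2x2 block is unchanged.
    """
    a0, a1, a2, a3, a4, a5, a6, a7 = params
    return [a0, a1, a2 * scale,
            a3, a4, a5 * scale,
            a6 / scale, a7 / scale]
```

In a coarse-to-fine scheme, gradient descent is run at the coarsest level first, and `propagate_params` supplies the initial values for the next finer level, which is what keeps the descent away from poor local minima under large motions.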


Keywords: Separating rigid and non-rigid motion · Incremental perspective motion model · Multi-resolution approach



Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Tzung-Heng Lai¹
  • Te-Hsun Wang¹
  • Jenn-Jier James Lien¹
  1. Robotics Laboratory, CSIE, NCKU, Tainan, Taiwan, R.O.C.
