Social Interactive Human Video Synthesis

  • Dumebi Okwechime
  • Eng-Jon Ong
  • Andrew Gilbert
  • Richard Bowden
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6492)


In this paper, we propose a computational model of social interaction between three people in a conversation, and demonstrate results using human video motion synthesis. We utilised semi-supervised computer vision techniques to label social signals between the participants, such as laughing, head nods, and gaze direction. Data mining is used to deduce frequently occurring patterns of social signals between a speaker and a listener in both interested and not-interested social scenarios, and the mined confidence values are used as conditional probabilities to animate social responses. Human video motion synthesis is performed using an appearance model to learn a multivariate probability distribution, combined with a transition matrix that gives the likelihood of motion given a pose configuration. Our system uses social labels to define motion transitions more accurately and to build a texture motion graph. Traditional motion synthesis algorithms are best suited to large human movements such as walking and running, where motion variations are large and prominent; our method focuses on generating more subtle movements such as head nods. The user can then control who speaks and the interest level of the individual listeners, resulting in socially interactive conversational agents.
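As a rough illustration of one step of the pipeline above, mined association-rule confidences can be treated as a conditional distribution P(response | context) from which a listener's reaction is sampled. The rule table, signal labels, and `sample_response` function below are hypothetical placeholders, not the paper's actual mined data:

```python
import random

# Hypothetical mined association rules: for each (interest level, speaker
# signal) context, the confidence of each listener response. The numbers
# and labels are illustrative only.
rules = {
    ("interested", "speaking"): {"head_nod": 0.55, "gaze_at_speaker": 0.30, "laugh": 0.15},
    ("not_interested", "speaking"): {"gaze_away": 0.60, "idle": 0.40},
}

def sample_response(interest, speaker_signal, rng=random.random):
    """Treat mined confidences as P(response | context) and sample one."""
    dist = rules[(interest, speaker_signal)]
    r, acc = rng(), 0.0
    for response, confidence in dist.items():
        acc += confidence
        if r < acc:
            return response
    return response  # fall back to the last response on a numerical edge
```

A sampled response would then select the corresponding transition in the texture motion graph, so that an interested listener tends to nod while a disinterested one looks away.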


Keywords: Association Rule Mining · Association Rule · Texture Synthesis · Video Texture · Interested Scenario





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Dumebi Okwechime¹
  • Eng-Jon Ong¹
  • Andrew Gilbert¹
  • Richard Bowden¹
  1. CVSSP, University of Surrey, Surrey, UK
