Abstract
Our goal is to capture style from real human motion so that it can be rendered by a virtual agent representing the human user. We describe motion style with expressivity parameters. As a first contribution, we propose an approach for estimating a subset of the expressivity parameters defined in the literature, namely spatial extent and temporal extent, from captured motion trajectories. Second, we capture the expressivity of real users and feed it to the Greta engine, which animates a virtual agent representing the user. We experimentally demonstrate that expressivity can serve as an additional cue for making virtual clones of real humans identifiable.
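The estimation procedure itself is detailed in the body of the chapter and is not reproduced here. Purely as an illustration of the kind of computation involved, the sketch below derives a spatial-extent-like and a temporal-extent-like value from a captured 3D wrist trajectory; the function name, the bounding-box and mean-speed statistics, and the arm-length normalisation are assumptions made for this example, not the authors' actual formulas.

```python
import numpy as np

def estimate_expressivity(wrist_traj, timestamps, arm_length):
    """Illustrative estimate of spatial and temporal extent from a wrist trajectory.

    wrist_traj : (N, 3) array of 3D wrist positions from motion capture
    timestamps : (N,) array of capture times in seconds
    arm_length : scalar used to normalise distances (assumed normalisation,
                 not the definition used in the chapter)
    """
    wrist_traj = np.asarray(wrist_traj, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)

    # Spatial extent: how much space the gesture occupies, taken here as the
    # diagonal of the trajectory's bounding box, normalised by arm length.
    bbox_diag = np.linalg.norm(wrist_traj.max(axis=0) - wrist_traj.min(axis=0))
    spatial_extent = bbox_diag / arm_length

    # Temporal extent: how quickly the gesture is executed, taken here as the
    # mean wrist speed over the gesture, in arm lengths per second.
    displacements = np.linalg.norm(np.diff(wrist_traj, axis=0), axis=1)
    durations = np.diff(timestamps)
    temporal_extent = (displacements.sum() / durations.sum()) / arm_length

    return spatial_extent, temporal_extent
```

Values of this kind, once mapped onto the expressivity parameters expected by Greta, could then drive the animation of the virtual agent; the mapping used in the chapter is described in the full text.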
References
Bavelas, J.B., Chovil, N.: Visible acts of meaning: an integrated message model of language in face-to-face dialogue. J. Lang. Soc. Psychol. 19, 163–194 (2000)
Boone, R.T., Cunningham, J.G.: Children’s decoding of emotion in expressive body movement: the development of cue attunement. Develop. Psychol. 34, 1007–1016 (1998)
Camurri, A., Castellano, G., Ricchetti, M., Volpe, G.: Subject interfaces: measuring bodily activation during an emotional experience of music. Gesture Hum.-Comput. Interact. Stimulat. 3881, 268–279 (2006)
Camurri, A., Lagerlöf, I., Volpe, G.: Recognizing emotion from dance movement: comparison of spectator recognition and automated techniques. Int. J. Hum.-Comput. Stud. 59, 213–225 (2003)
Chellappa, R., Roy-Chowdhury, A.K., Kale, A.: Human identification using gait and face. In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Minneapolis, pp. 1–2 (2007)
Craig, J.J.: Introduction to Robotics: Mechanics and Control, 3rd edn. Prentice Hall, London (1986)
Davis, J.W., Gao, H.: An expressive three-mode principal components model of human action style. Image Vis. Comput. 21, 1001–1016 (2003)
Drosopoulos, A., Mpalomenos, T., Ioannou, S., Karpouzis, K., Kollias, S.: Emotionally-rich man-machine interaction based on gesture analysis. In: Proc. of Human Computer Interaction International, vol. 4, pp. 1372–1376 (2003)
Ekinci, M.: A new approach for human identification using gait recognition. In: Proc. of ICCSA, vol. 3, pp. 1216–1225, Glasgow (2006)
Elgammal, A., Lee, C.S.: Separating style and content on a nonlinear manifold. In: Computer Vision and Pattern Recognition (CVPR), pp. 478–485 (2004)
Fitts, P.M.: The information capacity of the human motor system in controlling the amplitude of movement. J. Exp. Psychol. 47, 381–391 (1954)
Gallaher, P.E.: Individual differences in nonverbal behavior: dimensions of style. J. Pers. Soc. Psychol. 63, 133–145 (1992)
Gómez Jáuregui, D.A., Horain, P., Rajagopal, M.K., Karri, S.S.K.: Real-time particle filtering with heuristics for 3D motion capture by monocular vision. In: Proc. of IEEE Conference on Multimedia Signal Processing, Saint-Malo, pp. 139–144 (2010)
Hartmann, B., Mancini, M., Pelachaud, C.: Implementing expressive gesture synthesis for embodied conversational agents. In: Gesture Workshop, LNAI, pp. 188–199. Springer, Heidelberg (2005)
Hassin, R.R., Uleman, J.S., Bargh, J.A. (eds.): The New Unconscious. Oxford University Press, Oxford (2005)
Hsu, E., Pulli, K., Popović, J.: Style translation for human motion. ACM Trans. Graph. 24, 1082–1089 (2005)
Kinect: Microsoft corporation (2010), http://www.xbox.com/en-GB/kinect
Lee, L., Grimson, E.: Gait analysis for recognition and classification. In: Fifth IEEE International Conference on Automatic Face and Gesture Recognition, Washington, pp. 734–742 (2002)
McNeill, D.: Hand and Mind: What Gestures Reveal about Thought. The University of Chicago Press, Chicago (1992)
Mancini, M., Bresin, R., Pelachaud, C.: A virtual-agent head driven by musical performance. IEEE Trans. Audio Speech Lang. Process. 15, 1833–1841 (2007)
Mehrabian, A., Wiener, M.: Decoding of inconsistent communications. J. Pers. Soc. Psychol. 6, 109–114 (1967)
de Meijer, M.: The contribution of general features of body movement to the attribution of emotions. J. Nonverbal Behav. 13, 247–268 (1989)
Noot, H., Ruttkay, Z.: The gestyle language. Int. J. Hum. Comput. Stud. 62, 211–229 (2005). (Special issue: Subtle expressivity for characters and robots)
Pelachaud, C., Poggi, I.: Subtleties of facial expressions in embodied agents. J. Vis. Comput. Anim. 13, 301–312 (2002)
Pelachaud, C.: Greta, http://perso.telecom-paristech.fr/~pelachau/Greta/
Quek, F., McNeill, D., Bryll, R., Kirbas, C., Arslan, H.: Gesture, speech, and gaze cues for discourse segmentation. In: Computer Vision and Pattern Recognition (CVPR), vol. 2, pp. 247–254 (2000)
Tenenbaum, J.B., Freeman, W.T.: Separating style and content with bilinear models. Neural Comput. 12, 1247–1283 (2000)
Vasilescu, M.O., Terzopoulos, D.: Multilinear analysis of image ensembles: TensorFaces. In: Proceedings of the European Conference on Computer Vision (ECCV 2002), Copenhagen, pp. 447–460 (2002)
Wallbott, H.G.: Bodily expression of emotion. Eur. J. Soc. Psychol. 28, 879–896 (1998)
Wallbott, H.G., Scherer, K.R.: Cues and channels in emotion recognition. J. Pers. Soc. Psychol. 51, 690–699 (1986). (American Psychological Association)
Wang, J.M., Fleet, D.J., Hertzmann, A.: Multifactor Gaussian process models for style-content separation. In: ICML, Corvallis, pp. 975–982 (2007)
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Rajagopal, M.K., Horain, P., Pelachaud, C. (2013). Virtually Cloning Real Human with Motion Style. In: Kudělka, M., Pokorný, J., Snášel, V., Abraham, A. (eds) Proceedings of the Third International Conference on Intelligent Human Computer Interaction (IHCI 2011), Prague, Czech Republic, August 2011. Advances in Intelligent Systems and Computing, vol 179. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31603-6_11
DOI: https://doi.org/10.1007/978-3-642-31603-6_11
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-31602-9
Online ISBN: 978-3-642-31603-6
eBook Packages: Engineering (R0)