Abstract
The purpose of this paper is to describe human motions and emotions that appear in real video images with compact and informative representations. We aim to recognize expressive motions and to analyze the relationship between human body features and emotions. We propose a new descriptor vector for expressive human motions inspired by the Laban movement analysis (LMA) method, a descriptive language with an underlying semantics that makes it possible to qualify human motion in its different aspects. The proposed descriptor is fed into a machine learning framework comprising a random decision forest, a multi-layer perceptron, and two multiclass support vector machine methods. We evaluated our descriptor first for motion recognition and second for emotion recognition from the analysis of expressive body movements. Preliminary experiments with three public datasets, MSRC-12, MSR Action 3D, and UTKinect, showed that our model performs better than many existing motion recognition methods. We also built a dataset composed of 10 control motions (move, turn left, turn right, stop, sit down, wave, dance, introduce yourself, increase velocity, decrease velocity), on which our descriptor vector achieved high recognition performance. In the second experimental part, we evaluated our descriptor with a dataset of expressive gestures performed with four basic emotions selected from Russell’s circumplex model of affect (happy, angry, sad, and calm). The same machine learning methods were used to recognize human emotions from expressive motions. A 3D virtual avatar was introduced to reproduce human body motions, and three aspects were analyzed: (1) how the expressed emotions are classified by humans, (2) how the motion descriptor is evaluated by humans, and (3) what the relationship is between human emotions and motion features.
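The classification stage described above can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation: the descriptor matrix below is filled with synthetic placeholder values, and the descriptor length and dataset size are hypothetical. It only shows how one per-sequence descriptor vector would be fed to the four classifier types named in the abstract.

```python
# Sketch (assumptions): feeding per-sequence motion descriptor vectors into
# the four classifiers named in the paper. X holds one LMA-style descriptor
# per motion sequence; values here are random placeholders, not real features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sequences, n_features = 120, 30            # hypothetical descriptor length
X = rng.normal(size=(n_sequences, n_features))
y = rng.integers(0, 4, size=n_sequences)     # 4 labels: happy, angry, sad, calm

classifiers = {
    "random decision forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "multi-layer perceptron": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
    "SVM (one-vs-one)": SVC(kernel="rbf", decision_function_shape="ovo"),
    "SVM (one-vs-rest)": SVC(kernel="rbf", decision_function_shape="ovr"),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation accuracy
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```

With random features and labels, accuracy stays near chance (0.25 for four classes); with a real descriptor, the same loop compares the classifiers directly.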
References
De Gelder, B.: Why bodies? Twelve reasons for including bodily expressions in affective neuroscience. Philos. Trans. R. Soc. B Biol. Sci. 364(1535), 3475–3484 (2009)
Russell, J.A.: Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies. Psychol. Bull. 115, 102–141 (1994)
Ekman, P.: Facial Expressions. Handbook of Cognition and Emotion, vol. 16, pp. 301–320. Wiley-Blackwell, New Jersey (2005)
Aviezer, H., Hassin, R., Ryan, J., Grady, C., Susskind, J., Anderson, A., Moscovitch, M., Bentin, S.: Angry, disgusted, or afraid? Studies on the malleability of emotion perception. Psychol. Sci. 19(7), 724–732 (2008)
Aviezer, H., Bentin, S., Dudarev, V., Hassin, R.: The automaticity of emotional face-context integration. Emotion 11(6), 1406–1414 (2011)
Ajili, I., Mallem, M., Didier, J.Y.: Robust human action recognition system using Laban movement analysis. Procedia Comput. Sci. 112, 554–563 (2017)
Ajili, I., Mallem, M., Didier, J.Y.: Gesture recognition for humanoid robot teleoperation. In: 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 1115–1120 (2017)
Russell, J.A.: A circumplex model of affect. J. Personal. Soc. Psychol. 39(6), 1161–1178 (1980)
Gong, D., Medioni, G., Zhao, X.: Structured time series analysis for human action segmentation and recognition. IEEE Trans. Pattern Anal. Mach. Intell. 36(7), 1414–1427 (2014)
Junejo, I.N., Junejo, K.N., Al Aghbari, Z.: Silhouette-based human action recognition using SAX-Shapes. Vis. Comput. 30(3), 259–269 (2014)
Jiang, X., Zhong, F., Peng, Q., Qin, X.: Online robust action recognition based on a hierarchical model. Vis. Comput. 30(9), 1021–1033 (2014)
Wang, H., Kläser, A., Schmid, C., Liu, C.L.: Dense trajectories and motion boundary descriptors for action recognition. Int. J. Comput. Vis. 103(1), 60–79 (2013)
Xia, L., Aggarwal, J.K.: Spatio-temporal depth cuboid similarity feature for activity recognition using depth camera. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 2834–2841 (2013)
Oreifej, O., Liu, Z.: HON4D: Histogram of oriented 4D normals for activity recognition from depth sequences. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 716–723 (2013)
Chi, D., Costa, M., Zhao, L., Badler, N.: The EMOTE model for effort and shape. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’00, pp. 173–182 (2000)
Kapadia, M., Chiang, I.K., Thomas, T., Badler, N.I., Kider Jr., J.T.: Efficient motion retrieval in large motion databases. In: Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, pp. 19–28 (2013)
Müller, M., Röder, T., Clausen, M.: Efficient content-based retrieval of motion capture data. ACM Trans. Graph. 24(3), 677–685 (2005)
Durupinar, F., Kapadia, M., Deutsch, S., Neff, M., Badler, N.: PERFORM: perceptual approach for adding OCEAN personality to human motion using laban movement analysis. ACM Trans. Graph. 36(1), 6 (2016)
Hsu, E., Pulli, K., Popović, J.: Style translation for human motion. ACM Trans. Graph. 24(3), 1082–1089 (2005)
Xia, S., Wang, C., Chai, J., Hodgins, J.: Realtime style transfer for unlabeled heterogeneous human motion. ACM Trans. Graph. 34(4), 119:1–119:10 (2015)
Yumer, M.E., Mitra, N.J.: Spectral style transfer for human motion between independent actions. ACM Trans. Graph. 35(4), 137:1–137:8 (2016)
Aristidou, A., Zeng, Q., Stavrakis, E., Yin, K., Cohen-Or, D., Chrysanthou, Y., Chen, B.: Emotion control of unstructured dance movements. In: Symposium on Computer Animation (2017)
Aristidou, A., Stavrakis, E., Papaefthimiou, M., Papagiannakis, G., Chrysanthou, Y.: Style-based motion analysis for dance composition. Vis. Comput. (2017)
Rudolf, V.L., Lisa, U.: The Mastery of Movement. Mac Donald and Evans, Boston (1971)
Glowinski, D., Dael, N., Camurri, A., Volpe, G., Mortillaro, M., Scherer, K.: Toward a minimal representation of affective gestures. IEEE Trans. Affect. Comput. 2(2), 106–118 (2011)
Bouchard, D., Badler, N.: Semantic segmentation of motion capture using Laban movement analysis. In: Intelligent Virtual Agents, pp. 37–44. Springer, Berlin, Heidelberg (2007)
Samadani, A., Burton, S., Gorbet, R., Kulic, D.: Laban effort and shape analysis of affective hand and arm movements. In: Humaine Association Conference on Affective Computing and Intelligent Interaction, pp. 343–348 (2013)
Truong, A., Boujut, H., Zaharia, T.: Laban descriptors for gesture recognition and emotional analysis. Vis. Comput. 32(1), 83–98 (2016)
Aristidou, A., Charalambous, P., Chrysanthou, Y.: Emotion analysis and classification: understanding the performers’ emotions using the LMA entities. Comput. Graph. Forum. 34(6), 262–276 (2015)
Senecal, S., Cuel, L., Aristidou, A., Magnenat-Thalman, N.: Continuous body emotion recognition system during theater performances. Comput. Anim. Virtual Worlds 27(3–4), 311–320 (2016)
Cimen, G., Ilhan, H., Capin, T., Gurcay, H.: Classification of human motion based on affective state descriptors. Comput. Anim. Virtual Worlds. 24(3–4), 355–363 (2013)
Fothergill, S., Mentis, H., Kohli, P., Nowozin, S.: Instructing people for training gestural interactive systems. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’12, pp. 1737–1746, ACM (2012)
Barber, C.B., Dobkin, D.P., Huhdanpaa, H.: The Quickhull algorithm for convex hulls. ACM Trans. Math. Softw. 22(4), 469–483 (1996)
Xia, L., Chen, C.-C., Aggarwal, J.K.: View invariant human action recognition using histograms of 3D joints. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 20–27 (2012)
Li, W., Zhang, Z., Liu, Z.: Action recognition based on a bag of 3D points. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Workshops, pp. 9–14 (2010)
Quinlan, J.R.: Learning With Continuous Classes, pp. 343–348. World Scientific, Singapore (1992)
Breiman, L.: Classification and Regression Trees, vol. 358. Wadsworth International Group, Belmont, CA (1984)
Díaz-Uriarte, R., Alvarez de Andrés, S.: Gene selection and classification of microarray data using random forest. BMC Bioinform. 7(1), 3 (2006)
Hripcsak, G., Rothschild, A.S.: Technical brief: agreement, the F-measure, and reliability in information retrieval. JAMIA 12(3), 296–298 (2005)
Lehrmann, A.M., Gehler, P.V., Nowozin, S.: Efficient nonlinear Markov models for human motion. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1314–1321 (2014)
Song, Y., Morency, L.P., Davis, R.: Distribution-sensitive learning for imbalanced datasets. In: 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 1–6 (2013)
Truong, A., Zaharia, T.: Dynamic gesture recognition with Laban movement analysis and hidden Markov models. In: Proceedings of the 33rd Computer Graphics International, CGI ’16, ACM, Greece, pp. 21–24 (2016)
Slama, R., Wannous, H., Daoudi, M.: Grassmannian representation of motion depth for 3D human gesture and action recognition. In: 22nd International Conference on Pattern Recognition, pp. 3499–3504 (2014)
Arlot, S., Celisse, A.: A survey of cross-validation procedures for model selection. Stat. Surv. 4, 40–79 (2010)
Bland, J.M., Altman, D.G.: Statistics notes: Cronbach’s alpha. BMJ 314(7080), 572 (1997)
Knight, H., Thielstrom, R., Simmons, R.: Expressive path shape (swagger): simple features that illustrate a robot’s attitude toward its goal in real time. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1475–1482 (2016)
Nishimura, K., Kubota, N., Woo, J.: Design support system for emotional expression of robot partners using interactive evolutionary computation. In: IEEE International Conference on Fuzzy Systems, pp. 1–7 (2012)
Acknowledgements
We would like to thank the staff of the University of Evry Val d’Essonne for participating in our datasets. This work was partially supported by the Strategic Research Initiatives project iCODE accredited by University Paris Saclay.
Cite this article
Cite this article
Ajili, I., Mallem, M. & Didier, JY. Human motions and emotions recognition inspired by LMA qualities. Vis Comput 35, 1411–1426 (2019). https://doi.org/10.1007/s00371-018-01619-w