Abstract
In this paper, we propose a virtual agent application. We develop a virtual agent that reacts to gestures, together with a virtual environment in which it can interact with the user. We capture motion with a Kinect V2 camera, predict the end of the motion, and then classify it. The application also features a facial expression recognition module and an OpenAI-based conversation module, and it can be used with a virtual reality headset.
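The gesture pipeline summarized above (capture frames, predict how the motion ends, classify it, trigger an avatar reaction) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all class and function names are hypothetical, the extrapolation stands in for the manifold-aware motion predictor, and the classifier is a toy placeholder.

```python
# Hypothetical sketch of the described pipeline: capture a gesture stream,
# predict its endpoint, classify it, and choose an avatar reaction.
from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    joints: List[float]  # flattened 3D joint coordinates from a depth camera


def predict_motion_end(frames: List[Frame], horizon: int = 5) -> List[Frame]:
    """Naively repeat the last observed frame to complete the motion
    (a stand-in for a learned motion predictor)."""
    return frames + [frames[-1]] * horizon


def classify_gesture(frames: List[Frame]) -> str:
    """Toy classifier: label the gesture by the joint spread in the last frame."""
    spread = max(frames[-1].joints) - min(frames[-1].joints)
    return "wave" if spread > 1.0 else "idle"


def avatar_reaction(gesture: str) -> str:
    """Map a recognized gesture to an avatar response animation name."""
    return {"wave": "wave_back", "idle": "look_at_user"}.get(gesture, "neutral")


# Usage on a short synthetic motion sequence:
seq = [Frame(joints=[0.0, 0.2, 0.4]), Frame(joints=[0.0, 0.8, 1.6])]
completed = predict_motion_end(seq)
print(avatar_reaction(classify_gesture(completed)))  # -> wave_back
```

The point of the sketch is the control flow only; in the actual system the prediction and classification stages would be replaced by the trained models described in the paper.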
This work was supported by French government funding managed by the National Research Agency under grants ANR-21-ESRE-0030 (CONTINUUM), by CNRS through the 80-Prime program, and ANR-16-IDEX-0004 ULNE.
Acknowledgments
The authors would also like to thank Hugo Pina Borges and Julian Hutin for performing part of the experimentation.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Chopin, B., Daoudi, M., Bartolo, A. (2024). Avatar Reaction to Multimodal Human Behavior. In: Foresti, G.L., Fusiello, A., Hancock, E. (eds) Image Analysis and Processing - ICIAP 2023 Workshops. ICIAP 2023. Lecture Notes in Computer Science, vol 14365. Springer, Cham. https://doi.org/10.1007/978-3-031-51023-6_41
DOI: https://doi.org/10.1007/978-3-031-51023-6_41
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-51022-9
Online ISBN: 978-3-031-51023-6