Conversational Interaction Recognition Based on Bodily and Facial Movement

Conference paper

DOI: 10.1007/978-3-319-11758-4_26

Part of the Lecture Notes in Computer Science book series (LNCS, volume 8814)
Cite this paper as:
Deng J., Xie X., Zhou S. (2014) Conversational Interaction Recognition Based on Bodily and Facial Movement. In: Campilho A., Kamel M. (eds) Image Analysis and Recognition. ICIAR 2014. Lecture Notes in Computer Science, vol 8814. Springer, Cham

Abstract

We examine whether 3D pose and face features can be used both to learn and to recognize different conversational interactions. We believe this to be among the first works devoted to this subject, and we show that the task is indeed possible with a promising degree of accuracy using features derived from both pose and face. To extract 3D pose we use the Kinect sensor, and we use a combined local and global model to extract face features from standard RGB cameras. We show that although both of these features are contaminated with noise, they can still be used to train effective classifiers, even though the differences in interaction among the scenarios in our data set are extremely subtle. Both generative and discriminative methods are investigated, and a subject-specific supervised learning approach is employed to classify the test sequences into seven different conversational scenarios.
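The paper itself does not publish code, but the final classification step it describes can be sketched as follows: concatenate the (noisy) pose and face descriptors for each sequence and train a discriminative classifier such as an SVM, one of the methods named in the abstract. The feature dimensions, data, and library choice (scikit-learn) below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-sequence descriptors: 3D pose features
# (e.g. Kinect joint positions) and face features; dimensions are assumptions.
n_sequences, pose_dim, face_dim, n_classes = 700, 60, 40, 7
pose = rng.normal(size=(n_sequences, pose_dim))
face = rng.normal(size=(n_sequences, face_dim))
labels = rng.integers(0, n_classes, size=n_sequences)  # 7 scenarios

# Combine the two noisy feature streams into one descriptor per sequence.
X = np.hstack([pose, face])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, random_state=0
)

# Discriminative classifier over the combined features.
clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
pred = clf.predict(X_test)
```

With real (non-random) data, the same pipeline would report per-scenario accuracy on held-out sequences from the same subject, matching the subject-specific protocol described above.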

Keywords

Human interaction modeling · Conversational interaction analysis · 3D human pose · Face analysis · Randomized decision trees · HMM · SVM


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Department of Computer Science, Swansea University, Swansea, UK
