Conversational Interaction Recognition Based on Bodily and Facial Movement
- Cite this paper as:
- Deng J., Xie X., Zhou S. (2014) Conversational Interaction Recognition Based on Bodily and Facial Movement. In: Campilho A., Kamel M. (eds) Image Analysis and Recognition. ICIAR 2014. Lecture Notes in Computer Science, vol 8814. Springer, Cham
We examine whether 3D pose and face features can be used to learn and recognize different conversational interactions. We believe this to be among the first work devoted to this subject, and we show that the task is achievable with a promising degree of accuracy using features derived from both pose and face. We extract 3D pose with the Kinect sensor, and we use a combined local and global model to extract face features from ordinary RGB cameras. We show that although both sets of features are contaminated with noise, they can still be used to train effective classifiers. The differences in interaction among the scenarios in our data set are extremely subtle. Both generative and discriminative methods are investigated, and a subject-specific supervised learning approach is employed to classify the test sequences into seven different conversational scenarios.
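As a rough illustration of the discriminative side of the pipeline described above, the sketch below trains an SVM on per-sequence descriptors formed by concatenating pose and face features. All dimensions, data, and the class-dependent shift are synthetic placeholders, not values from the paper; scikit-learn is assumed as the classifier library.

```python
# Hypothetical sketch (not the authors' implementation): classify
# conversational scenarios from concatenated 3D-pose and face features
# with an RBF-kernel SVM. Feature dimensions and data are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_seq, pose_dim, face_dim, n_scenarios = 140, 60, 20, 7

# Synthetic per-sequence descriptors: pose features (e.g. joint
# coordinates summarized over time) concatenated with face features;
# labels are the seven conversational scenarios.
X = rng.normal(size=(n_seq, pose_dim + face_dim))
y = rng.integers(0, n_scenarios, size=n_seq)
# Inject a class-dependent shift into one dimension so the toy
# problem is learnable despite the noise.
X[:, 0] += y

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```

Standardizing before the SVM matters here because pose and face features would live on very different numeric scales; the pipeline keeps that preprocessing coupled to the classifier.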
Keywords: Human interaction modeling · Conversational interaction analysis · 3D human pose · Face analysis · Randomized decision trees · HMM · SVM