Analysis of Expressiveness of Portuguese Sign Language Speakers
Nowadays, communication gaps isolate deaf people from many social activities. This work studies the expressiveness of gestures in Portuguese Sign Language (PSL) speakers and the differences between deaf and hearing people. It is a first effort towards the ultimate goal of understanding emotional and behavioural patterns in these populations. In particular, we design solutions for the following problems: (i) differentiating between deaf and hearing people; (ii) identifying different conversational topics based on body expressiveness; (iii) identifying different levels of mastery of PSL speakers through feature analysis. To these ends, we build a complete and novel dataset capturing duo interactions between deaf and hearing people across several conversational topics. Results show high recognition and classification rates.
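Task (i), differentiating deaf from hearing people, is a binary classification problem over body-expressiveness features. A minimal sketch of such a pipeline using a Support Vector Machine (the classifier named in the keywords) is shown below; the feature names and synthetic data are illustrative placeholders, not the paper's actual features or results.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical expressiveness descriptors per recording (e.g. kinetic
# energy, contraction index, gesture amplitude); values are synthetic
# placeholders standing in for features extracted from video.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))          # 40 recordings, 5 features each
y = np.array([0] * 20 + [1] * 20)     # 0 = hearing, 1 = deaf (illustrative)

# RBF-kernel SVM evaluated with 5-fold cross-validation.
clf = SVC(kernel="rbf")
scores = cross_val_score(clf, X, y, cv=5)
```

On real features, `scores` would report the per-fold recognition rate; with the random placeholders above it hovers around chance, as expected.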
Keywords: Support Vector Machine · Emotion Recognition · Deaf People · Body Expression · Motion History Image
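The Motion History Image (MHI) named in the keywords summarizes where and how recently motion occurred in a video: moving pixels are stamped with a maximum value and all others decay over time. A minimal sketch of one MHI update step, assuming simple frame differencing (thresholds and decay are illustrative parameters):

```python
import numpy as np

def update_mhi(mhi, frame, prev_frame, tau=15, threshold=30):
    """One Motion History Image update step (Bobick & Davis style):
    pixels whose intensity changed more than `threshold` between
    consecutive grayscale frames are set to `tau`; all other pixels
    decay by 1, clipped at 0."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    motion = diff > threshold
    return np.where(motion, tau, np.maximum(mhi - 1, 0))
```

Applied over a whole clip, the resulting image encodes a fading trace of the signer's recent gestures, from which expressiveness features can be computed.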
The authors would like to thank the PSL experts, Ana and Paula, who helped to find the volunteer population; the socio-psychologist team from the Faculdade de Psicologia e Ciências da Educação da Universidade do Porto, who helped to define the sociological constraints of the database; the Agrupamento de Escolas Eugenio de Andrade, Escola EB2/3 de Paranhos, for providing the venue for the acquisition of the database videos; and finally Stephano Piana for his help with the EyesWeb platform.