Methodological Foundation for Sign Language 3D Motion Trajectory Analysis

  • Mehrez Boulares
  • Mohamed Jemni
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7619)


Current research in sign language recognition aims to recognize signs from video content. The majority of existing video-based studies use classical learning approaches because of their acceptable results: HMMs, neural networks, matching techniques, and fuzzy classifiers are widely used for video recognition with large training datasets. Meanwhile, the field of animation generation has made considerable progress. These tools help improve access to information and services for deaf individuals with a low literacy level, and they rely mainly on the 3D content standard X3D for their sign language animations. Sign animations are therefore becoming common. In this new field, however, few works have tried to apply classical learning techniques to sign language recognition from 3D content. Most studies rely on the positions or rotations of the virtual agent's articulations as training data for classifiers or matching techniques. Unfortunately, existing animation generation software uses different 3D virtual agent content, so articulation positions and rotations differ from one system to another. Consequently, this recognition method is not efficient.

In this paper, we propose a methodological foundation for future research on recognizing signs from any sign language 3D content. Our new approach provides a method, based on 3D motion trajectory analysis, that is invariant to changes in sign position. Our recognition experiments covered 900 ASL signs, using a Microsoft Kinect sensor to manipulate our X3D virtual agent. We successfully recognized 887 isolated signs, a 98.5% recognition rate, with a recognition response time of 0.3 seconds.
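The key idea above, position invariance through trajectory analysis, can be illustrated with a minimal sketch. The normalization below (centering on the centroid and scaling to unit size) is an illustrative assumption, not the paper's exact pipeline; the function name `normalize_trajectory` is hypothetical:

```python
import numpy as np

def normalize_trajectory(points):
    """Translate a 3D motion trajectory to its centroid and scale it
    to unit size, so comparison no longer depends on where in space
    the sign was performed (illustrative sketch only)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)   # remove absolute position
    scale = np.linalg.norm(centered)    # overall spatial extent
    return centered / scale if scale > 0 else centered

# Two copies of the same hand path, performed at different positions:
path = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]], dtype=float)
shifted = path + np.array([5.0, -2.0, 3.0])  # same motion, translated

a, b = normalize_trajectory(path), normalize_trajectory(shifted)
print(np.allclose(a, b))  # True: translation no longer affects the features
```

With trajectories normalized this way, two renderings of the same sign produced by different virtual agents (with different absolute articulation coordinates) map to comparable feature sequences, which is the property the abstract argues position- or rotation-based training data lacks.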


Keywords: X3D, 3D content, Recognition, 3D Motion Trajectory Analysis





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Mehrez Boulares¹
  • Mohamed Jemni¹
  1. Research Laboratory of Technologies of Information and Communication & Electrical Engineering (LaTICE), Ecole Supérieure des Sciences et Techniques de Tunis, Tunis, Tunisia
