Comparison of Finite-Repertoire and Data-Driven Facial Expressions for Sign Language Avatars

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9176)

Abstract

To support our research on ASL animation synthesis, we have adopted and enhanced a new virtual human animation platform that gives us finer-grained control of facial movements than our previous platform. To determine whether this new platform is sufficiently expressive to generate understandable ASL animations, we analyzed responses collected from deaf participants who evaluated four types of animations: generated by our old or new animation platform, and with or without facial expressions performed by the character. For animations without facial expressions, our old and new platforms had equivalent comprehension scores; for those with facial expressions, our new platform had higher scores. In addition, this paper demonstrates a methodology by which sign language animation researchers can document transitions in their animation platforms or avatar appearance. Such an evaluation enables future readers to compare published results over time, both before and after a transition in animation technology.
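The claim that two platforms had "equivalent" comprehension scores presupposes a statistical equivalence test rather than a mere failure to find a difference. As an illustrative sketch only, the snippet below applies Schuirmann's two one-sided tests (TOST), a standard equivalence procedure; the score scale, equivalence margin, and data are hypothetical assumptions, not the study's actual analysis.

```python
import numpy as np
from scipy import stats

def tost_equivalence(a, b, margin):
    """Schuirmann's two one-sided tests (TOST) for equivalence of means.

    The mean difference (a - b) is declared equivalent to zero within
    +/- margin if both one-sided t-tests reject; the overall TOST
    p-value is the larger of the two one-sided p-values.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a.mean() - b.mean()
    na, nb = len(a), len(b)
    # Pooled standard error for a two-sample t-test
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    se = np.sqrt(sp2 * (1 / na + 1 / nb))
    df = na + nb - 2
    # Test 1: reject H0 that diff <= -margin
    p_lower = stats.t.sf((diff + margin) / se, df)
    # Test 2: reject H0 that diff >= +margin
    p_upper = stats.t.cdf((diff - margin) / se, df)
    return diff, max(p_lower, p_upper)

# Hypothetical comprehension scores (0-10 scale) for the two platforms
old_platform = [6.1, 5.8, 6.4, 6.0, 5.9, 6.2, 6.3, 5.7]
new_platform = [6.0, 6.2, 5.9, 6.1, 6.3, 5.8, 6.0, 6.1]
diff, p = tost_equivalence(old_platform, new_platform, margin=1.0)
print(f"mean difference = {diff:.2f}, TOST p = {p:.3f}")  # p < 0.05 -> equivalent
```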

Keywords

American Sign Language · Accessibility technology for people who are deaf · Facial expression · Animation · Evaluation · User study


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Doctoral Program in Computer Science, The Graduate Center, City University of New York (CUNY), New York, USA
  2. Golisano College of Computing & Information Sciences, Rochester Institute of Technology (RIT), New York, USA
