
Design and Evaluation of a User-Interface for Authoring Sentences of American Sign Language Animation

  • Abhishek Kannekanti
  • Sedeeq Al-khazraji
  • Matt Huenerfauth
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11572)

Abstract

Many individuals who are Deaf or Hard of Hearing (DHH) in the U.S. have lower English literacy levels than their hearing peers, which creates a barrier to accessing web content. In this study, we designed and evaluated a usable interface for authoring sentences (or multi-sentence messages) in American Sign Language (ASL) using the EMBR (Embodied Agents Behavior Realizer) animation platform. Three rounds of iterative designs were produced and refined through participatory design techniques and usability testing, based on feedback from eight participants familiar with creating ASL animations. A usability testing session was then conducted with four participants on the final iteration of the designs. We found that participants preferred a "timeline" layout for arranging words to create a sentence, with a dual view of the word level and the sub-word "pose" level. This paper presents the design stages of the new GUI, the results of the evaluation, and directions for future work.
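The dual-view timeline described above can be pictured as a simple data model: a sentence is an ordered sequence of words, and each word expands into a sequence of sub-word poses with their own durations, so the word-level and pose-level rows of the GUI stay aligned on a shared time axis. The sketch below is a minimal illustration of that idea only; all names (Pose, Word, SentenceTimeline, duration_ms) are hypothetical and are not drawn from the paper or from EMBR's actual API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical data model for the dual-view timeline: a word-level track for
# arranging signs into a sentence, and a pose-level track for sub-word timing.
# Names are illustrative only, not the paper's or EMBR's real interfaces.

@dataclass
class Pose:
    name: str          # a key posture within a sign
    duration_ms: int   # how long the avatar holds/transitions this pose

@dataclass
class Word:
    gloss: str                                   # ASL gloss shown at word level
    poses: List[Pose] = field(default_factory=list)

    @property
    def duration_ms(self) -> int:
        # A word's duration is the sum of its sub-word poses.
        return sum(p.duration_ms for p in self.poses)

@dataclass
class SentenceTimeline:
    words: List[Word] = field(default_factory=list)

    def word_track(self) -> List[Tuple[str, int, int]]:
        """Word-level view: (gloss, start_ms, end_ms) for each word."""
        track, t = [], 0
        for w in self.words:
            track.append((w.gloss, t, t + w.duration_ms))
            t += w.duration_ms
        return track

    def pose_track(self) -> List[Tuple[str, str, int, int]]:
        """Pose-level view: (gloss, pose, start_ms, end_ms) for each pose."""
        track, t = [], 0
        for w in self.words:
            for p in w.poses:
                track.append((w.gloss, p.name, t, t + p.duration_ms))
                t += p.duration_ms
        return track
```

Under this (assumed) model, rendering the two rows of the timeline from the same underlying sequence guarantees that edits at the word level and at the pose level remain consistent, which matches the participants' preference for viewing both levels at once.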

Keywords

Human computer interaction · American Sign Language · Authoring tools · Timeline

Acknowledgements

This material is based upon work supported by the National Science Foundation under award numbers 1400802 and 1462280.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Abhishek Kannekanti (1)
  • Sedeeq Al-khazraji (1, 2)
  • Matt Huenerfauth (1)
  1. Rochester Institute of Technology, Rochester, USA
  2. University of Mosul, Mosul, Iraq
