Collecting an American Sign Language Corpus through the Participation of Native Signers

  • Pengfei Lu
  • Matt Huenerfauth
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6768)


Animations of American Sign Language (ASL) can make more information, websites, and services accessible to the significant number of deaf people in the United States with lower levels of written-language literacy, ultimately leading to fuller social inclusion for these users. We are collecting and analyzing an ASL motion-capture corpus of multi-sentential discourse in order to build computational models of various aspects of ASL linguistics, which will enable us to produce more accurate and understandable ASL animations. In this paper, we describe our motion-capture studio configuration, our data-collection procedure, and the linguistic annotations being added by our research team of native ASL signers. We identify the most effective prompts we have developed for eliciting non-scripted ASL passages in which signers use particular linguistic constructions that we wish to study. We also describe the educational outreach and social inclusion aspects of our project: the participation of many deaf participants, researchers, and students.


Keywords: American Sign Language, animation, accessibility technology for people who are deaf, data collection, social inclusion, motion capture





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Pengfei Lu (1)
  • Matt Huenerfauth (2)
  1. Computer Science Doctoral Program, Graduate Center, The City University of New York (CUNY), New York, USA
  2. Computer Science Department, Queens College, The City University of New York (CUNY), Flushing, USA
