
Movements and Holds in Fluent Sentence Production of American Sign Language: The Action-Based Approach

Cognitive Computation

Abstract

The importance of bodily movements in the production and perception of communicative actions has been shown for the spoken language modality and accounted for by a theory of communicative actions (Cogn Process. 2010;11:187–205). In this study, the theory of communicative actions was adapted to the sign language modality; we tested the hypothesis that in the fluent production of short sign language sentences, strong-hand manual sign actions are continuously ongoing without holds, while co-manual oral expression actions (i.e. sign-related actions of the lips, jaw, and tip of the tongue) and co-manual facial expression actions (i.e. actions of the eyebrows, eyelids, etc.), as well as weak-hand actions, show considerable holds. An American Sign Language (ASL) corpus of 100 sentences was analyzed by visually inspecting each frame-to-frame difference (30 frames/s) to separate movement and hold phases for each manual, oral, and facial action. Excluding fingerspelling and signs in sentence-final position, no manual holds were found for the strong hand (0%; the weak hand was not considered), while oral holds occurred in 22% of all oral expression actions and facial holds occurred in all facial expression actions analyzed (100%). These results support the idea that in each language modality, the dominant articulatory system (vocal tract or manual system) determines the timing of actions. In signed languages, in which manual actions are dominant, holds occur mainly in co-manual oral and co-manual facial actions. Conversely, in spoken language, vocal tract actions (i.e. actions of the lips, tongue, jaw, velum, and vocal folds) are dominant, and holds occur primarily in co-verbal manual and co-verbal facial actions.
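The movement/hold labeling in the study was done by a human labeler visually inspecting frame-to-frame differences, not by software. For readers who want to prototype the same idea computationally, the following is a minimal sketch, assuming OpenCV (opencv-python) and NumPy are available; the video path, region of interest, and hold threshold are illustrative assumptions, not values or tools from the study.

```python
# Minimal sketch of frame-differencing for movement/hold segmentation in one
# articulator region. NOT the authors' procedure (which used human visual
# inspection); region, threshold, and path below are hypothetical.
import cv2
import numpy as np

VIDEO_PATH = "asl_sentence.mp4"   # hypothetical 30 frames/s clip
ROI = (80, 40, 160, 160)          # hypothetical articulator region: x, y, w, h
HOLD_THRESHOLD = 2.0              # mean absolute gray-level difference per pixel

def segment_holds(video_path, roi, threshold):
    """Label each frame transition inside one ROI as 'movement' or 'hold'."""
    x, y, w, h = roi
    cap = cv2.VideoCapture(video_path)
    labels = []
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        patch = gray[y:y + h, x:x + w].astype(np.float32)
        if prev is not None:
            # Small mean difference between consecutive frames => a hold phase.
            diff = np.abs(patch - prev).mean()
            labels.append("hold" if diff < threshold else "movement")
        prev = patch
    cap.release()
    return labels

if __name__ == "__main__":
    labels = segment_holds(VIDEO_PATH, ROI, HOLD_THRESHOLD)
    print(f"{labels.count('hold')}/{len(labels)} transitions labeled as holds")
```

Thresholding a mean gray-level difference in a fixed rectangle is crude: a human labeler can attribute motion to a specific articulator (strong hand, lips, eyebrows), which a static region of interest only approximates.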


References

1. Ambadar Z, Schooler J, Cohn JF. Deciphering the enigmatic face: the importance of facial dynamics to interpreting subtle facial expressions. Psychol Sci. 2005;16:403–10.

2. Bauer D, Kannampuzha J, Kröger BJ. Articulatory speech re-synthesis: profiting from natural acoustic speech data. In: Esposito A, Vich R, editors. Cross-modal analysis of speech, gestures, gaze and facial expressions, LNAI 5641. Berlin: Springer; 2009. p. 344–55.

3. Boston-200-Sentences-ASL-Corpus of the National Center for Sign Languages and Gesture Resources at Boston University. 2000. http://www.bu.edu/asllrp/cslgr/.

4. Browman C, Goldstein L. Articulatory gestures as phonological units. Phonology. 1989;6:201–51.

5. Browman C, Goldstein L. Articulatory phonology: an overview. Phonetica. 1992;49:155–80.

6. Cohn JF. Foundations of human computing: facial expression and emotion. In: Huang TS, Nijholt A, Pantic M, Pentland A, editors. Artificial intelligence for human computing, LNAI 4451. Berlin: Springer; 2007. p. 1–16.

7. Cohn JF, Ambadar Z, Ekman P. Observer-based measurement of facial expression with the facial action coding system. In: Coan JA, Allen JJB, editors. Handbook of emotion elicitation and assessment. New York: Oxford University Press; 2007. p. 203–21.

8. Dreuw P, Rybach D, Deselaers T, Zahedi M, Ney H. Speech recognition techniques for a sign language recognition system. In: Proceedings of Interspeech 2007, Antwerp, Belgium; 2007. p. 2513–6.

9. Ekman P, Friesen WV. Measuring facial movement. Environ Psychol Nonverbal Behav. 1976;1:56–75.

10. Ekman P, Friesen WV. Facial action coding system. Palo Alto, CA: Consulting Psychologists Press; 1978.

11. Emmorey K. Language, cognition, and the brain: insights from sign language research. Mahwah, NJ: Lawrence Erlbaum Associates; 2002.

12. Fontana S. Mouth actions as gesture in sign language. Gesture. 2008;8:104–23.

13. Goldin-Meadow S. Hearing gesture. Cambridge, MA: Belknap Press of Harvard University Press; 2003.

14. Goldstein L, Byrd D, Saltzman E. The role of vocal tract action units in understanding the evolution of phonology. In: Arbib MA, editor. Action to language via the mirror neuron system. Cambridge: Cambridge University Press; 2006. p. 215–49.

15. Goldstein L, Pouplier M, Chen L, Saltzman E, Byrd D. Dynamic action units slip in speech production errors. Cognition. 2007;103:386–412.

16. Kendon A. Language and gesture: unity or duality? In: McNeill D, editor. Language and gesture. Cambridge: Cambridge University Press; 2000. p. 47–63.

17. Kendon A. Gesture: visible action as utterance. New York: Cambridge University Press; 2004.

18. Klima E, Bellugi U. The signs of language. Cambridge, MA: Harvard University Press; 1979.

19. Kopp S, Wachsmuth I. Synthesizing multimodal utterances for conversational agents. J Comput Animat Virtual Worlds. 2004;15:39–51.

20. Kröger BJ, Birkholz P. A gesture-based concept for speech movement control in articulatory speech synthesis. In: Esposito A, Faundez-Zanuy M, Keller E, Marinaro M, editors. Verbal and nonverbal communication behaviours, LNAI 4775. Berlin: Springer; 2007. p. 174–89.

21. Kröger BJ, Birkholz P. Articulatory synthesis of speech and singing: state of the art and suggestions for future research. In: Esposito A, Hussain A, Marinaro M, editors. Multimodal signals: cognitive and algorithmic issues, LNAI 5398. Berlin: Springer; 2009. p. 306–19.

22. Kröger BJ, Kannampuzha J, Neuschaefer-Rube C. Towards a neurocomputational model of speech production and perception. Speech Commun. 2009;51:793–809.

23. Kröger BJ, Kopp S, Lowit A. A model for production, perception, and acquisition of actions in face-to-face communication. Cogn Process. 2010;11:187–205.

24. Lausberg H, Sloetjes H. Coding gestural behavior with the NEUROGES-ELAN system. Behav Res Methods. 2009;41(3):841–9.

25. Liberman AM, Mattingly IG. The motor theory of speech perception revised. Cognition. 1985;21:1–36.

26. Liddell SK, Johnson RE. American Sign Language: the phonological base. Sign Lang Stud. 1989;64:195–277.

27. Liddell SK, Metzger M. Gesture in sign language discourse. J Pragmat. 1998;30:657–97.

28. Liddell SK. Grammar, gesture and meaning in American Sign Language. New York: Cambridge University Press; 2003.

29. McNeill D. Hand and mind: what gestures reveal about thought. Chicago: University of Chicago Press; 1992.

30. McNeill D. Gesture and thought. Chicago: University of Chicago Press; 2005.

31. McNeill D, Quek F, McCullough K-E, Duncan SD, Furuyama N, Bryll R, Ansari R. Catchments, prosody and discourse. Gesture. 2001;1(1):9–33.

32. Perlmutter DM. Sonority and syllable structure in American Sign Language. Linguist Inq. 1992;23:407–42.

33. Saltzman E, Byrd D. Task-dynamics of gestural timing: phase windows and multifrequency rhythms. Hum Mov Sci. 2000;19:499–526.

34. Sandler W. Symbiotic symbolization by hand and mouth in sign language. Semiotica. 2009;174:241–75.

35. Schmidt KL, Ambadar Z, Cohn JF, Reed LI. Movement differences between deliberate and spontaneous facial expressions: zygomaticus major action in smiling. J Nonverbal Behav. 2006;30:37–52.

36. Schmidt KL, Bhattacharya S, Denlinger R. Comparison of deliberate and spontaneous facial movement in smiles and eyebrow raises. J Nonverbal Behav. 2009;33:35–45.

37. Schmidt KL, Cohn JF, Tian Y. Signal characteristics of spontaneous facial expressions: automatic movement in solitary and social smiles. Biol Psychol. 2003;65:49–66.

38. Stokoe WC. Sign language structure: an outline of the visual communication systems of the American Deaf. Studies in Linguistics, Occasional Paper 8. Buffalo: University of Buffalo; 1960.

39. Tian YL, Kanade T, Cohn JF. Facial expression analysis. In: Li SZ, Jain AK, editors. Handbook of face recognition. New York: Springer; 2005. p. 247–75.

40. Valli C, Lucas C. Linguistics of American Sign Language: an introduction. Washington, DC: Gallaudet University Press; 2000.

41. Vanger P, Hoenlinger R, Haken H. Computer aided generation of prototypical facial expressions of emotion. Methods Psychol Res Online. 1998;3(1). http://www.dgps.de/fachgruppen/methoden/mpr-online.

42. Wilcox S, Morford JP. Empirical methods in signed language research. In: Gonzalez-Marquez M, Mittelberg I, Coulson S, Spivey MJ, editors. Methods in cognitive linguistics. Amsterdam/Philadelphia: John Benjamins; 2007. p. 171–200.


Acknowledgments

This work was supported in part by the German Research Council DFG grant Kr 1439/13-1 and grant Kr 1439/15-1. We thank the two anonymous reviewers for their helpful comments on earlier versions of this paper, and we thank Cigdem Capaat for labeling the ASL corpus.

Author information

Correspondence to Bernd J. Kröger.


Cite this article

Kröger, B.J., Birkholz, P., Kannampuzha, J. et al. Movements and Holds in Fluent Sentence Production of American Sign Language: The Action-Based Approach. Cogn Comput 3, 449–465 (2011). https://doi.org/10.1007/s12559-010-9071-2