Annotated corpora have played a critical role in speech and natural language research, and there is increasing interest in corpus-based research on sign language and gesture as well. We present FORM, a non-semantic, geometrically based annotation scheme that allows an annotator to capture the kinematic information in a gesture purely from video of speakers. In addition, FORM stores this gestural information in Annotation Graph format, allowing gesture information to be integrated easily with other types of communication information, e.g., discourse structure, parts of speech, and intonation.
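The Annotation Graph model mentioned above represents annotations as labeled arcs between time-stamped nodes, so independent tiers (gesture, words, intonation) can be aligned on a shared timeline. The following is a minimal sketch of that idea in Python; the tier names, labels, and time stamps are illustrative assumptions, not FORM's actual attribute inventory.

```python
# Sketch of an annotation-graph-style store: time-stamped nodes,
# labeled arcs, and multiple tiers sharing one timeline.
# Tier names and labels here are hypothetical examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    id: int
    time: float          # offset into the video, in seconds

@dataclass(frozen=True)
class Arc:
    start: Node
    end: Node
    tier: str            # e.g. "gesture/right-arm", "words"
    label: str           # the annotation payload

# Gesture and speech tiers share nodes, so they can be aligned
# by node time stamps alone.
n0, n1, n2 = Node(0, 0.00), Node(1, 0.40), Node(2, 1.10)
arcs = [
    Arc(n0, n2, "gesture/right-arm", "upper-arm lift"),
    Arc(n0, n1, "words", "annotated"),
    Arc(n1, n2, "words", "corpora"),
]

def tier(arcs, name):
    """All arcs on one annotation tier, in start-time order."""
    return sorted((a for a in arcs if a.tier == name),
                  key=lambda a: a.start.time)

for a in tier(arcs, "words"):
    print(f"{a.start.time:.2f}-{a.end.time:.2f}s  {a.label}")
```

Because every tier hangs off the same node set, a query such as "which words overlap this gesture stroke" reduces to comparing node time stamps across tiers.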
Keywords: Gesture annotation, corpora, corpus-based methods, multimodal communication