Incompleteness and Fragmentation: Possible Formal Cues to Cognitive Processes Behind Spoken Utterances
What may eventually connect engineers and linguists most is their common interest in language, more specifically in language technology: engineers build increasingly intelligent robots that are expected to communicate with humans through language, while linguists wish to verify their theoretical understanding of language and speech through practical implementations. Robotics is thus a place for the two to meet. However, speech, especially in spontaneous communication, often resists the usual generalizations: the sounds you hear are not the sounds you describe in a laboratory, the words you read in a written text may be hard to identify through speech segmentation, and the sequences of words that make up a sentence are often too fragmented to be considered a “real” sentence from a grammar book. Yet humans communicate, and most often successfully. This is achieved through cognition: people do not simply use words, they use them in a semantic context, combining voice and gesture in a dynamically changing, multimodal situational context. Each individual does not merely pick out words from the flow of a verbal interaction, but also observes and reacts to others, using multimodal cues as points of reference and inference for navigating the communication. It is reasonable to believe that participants in a multimodal communication event follow a set of general, partly innate rules based on a general model of communication. The model presented below interprets numerous forms of dialogue by uncovering their syntax, prosody and overall multimodality within the HuComTech corpus of Hungarian. The research aims at improving the robustness of language technology for spontaneous speech.
Keywords: Syntax · Prosody · Gestures · Multimodality · HuComTech
The research presented in this chapter was partly supported by project TÁMOP 4.2.2-C/11/1/KONV-2012-0002. Further support was received from NeDiMAH (Network for Digital Methods in the Arts and Humanities), a cross-European project of the European Science Foundation.