
Incompleteness and Fragmentation: Possible Formal Cues to Cognitive Processes Behind Spoken Utterances

Chapter
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 42)

Abstract

What may eventually connect engineers and linguists most is their common interest in language, more specifically in language technology: engineers build increasingly intelligent robots that are expected to communicate with humans through language, while linguists wish to verify their theoretical understanding of language and speech through practical implementations. Robotics is thus a place for the two to meet. Speech, however, especially in spontaneous communication, often resists the usual generalizations: the sounds you hear are not the sounds you describe in a laboratory, the words you read in a written text may be hard to identify through speech segmentation, and the sequences of words that make up a sentence are often too fragmented to count as a “real” sentence from a grammar book. Yet humans communicate, and most often successfully. This is achieved through cognition: people do not merely use words, they use them in context, both a semantic context and, by combining voice and gesture, a dynamically changing, multimodal situational context. Each individual does not simply pick words out of the flow of verbal interaction, but also observes and reacts to others, using multimodal cues as points of reference and inference for navigating the communication. It is reasonable to believe that participants in a multimodal communicative event follow a set of general, partly innate rules based on a general model of communication. The model presented below interprets numerous forms of dialogue by uncovering their syntax, prosody and overall multimodality within the HuComTech corpus of Hungarian. The research aims at improving the robustness of spoken natural language technology.
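Although the chapter presents no code, the workflow the abstract describes, searching time-aligned syntactic, prosodic and gestural annotation tiers for co-occurring cues, can be illustrated with a minimal sketch. The tier names, labels and timings below are hypothetical and are not drawn from the HuComTech corpus; this is one possible reading of the approach, not the authors' implementation.

```python
# Minimal sketch: finding co-occurring cues across time-aligned annotation
# tiers. All tier contents below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Interval:
    start: float   # seconds from utterance onset
    end: float
    label: str

def overlaps(a: Interval, b: Interval) -> bool:
    """True when two annotation intervals share any stretch of time."""
    return a.start < b.end and b.start < a.end

def co_occurrences(tier_a, tier_b):
    """Pair each interval in one tier with every overlapping interval in another."""
    return [(a, b) for a in tier_a for b in tier_b if overlaps(a, b)]

# Hypothetical fragments: a syntactically incomplete clause, plus the
# prosodic and gestural events around the point of fragmentation.
syntax  = [Interval(0.0, 1.8, "clause-incomplete")]
prosody = [Interval(1.2, 1.9, "rising-contour")]
gesture = [Interval(1.4, 2.3, "hand-raise")]

for s, p in co_occurrences(syntax, prosody):
    print(f"{s.label} coincides with {p.label} from {max(s.start, p.start):.1f}s")
for s, g in co_occurrences(syntax, gesture):
    print(f"{s.label} coincides with {g.label} from {max(s.start, g.start):.1f}s")
```

Under these assumptions, an incomplete clause that coincides with a rising contour and a hand gesture would surface as a candidate formal cue to the cognitive process behind the fragmented utterance.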

Keywords

Syntax · Prosody · Gestures · Multimodality · HuComTech

Notes

Acknowledgments

The research presented in this chapter was partly supported by project TÁMOP 4.2.2-C/11/1/KONV-2012-0002. Further support was received from NeDiMAH (Network for Digital Methods in the Arts and Humanities), a cross-European project of the European Science Foundation.


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. University of Debrecen, Debrecen, Hungary
