Abstract
This study investigates non-verbal behaviour in a video-recorded, manually annotated corpus of first encounters in Danish. It presents an analysis of head movements and facial expressions in the data, in particular their use to express feedback, and discusses the results in the light of aspects of Danish culture that seem to favour rather unconventional and non-emotional behaviour. The data can form the basis of cross-cultural studies in which parallels are drawn to similar interactions in other languages.
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Paggio, P., Navarretta, C. (2011). Head Movements, Facial Expressions and Feedback in Danish First Encounters Interactions: A Culture-Specific Analysis. In: Stephanidis, C. (eds) Universal Access in Human-Computer Interaction. Users Diversity. UAHCI 2011. Lecture Notes in Computer Science, vol 6766. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21663-3_63
Print ISBN: 978-3-642-21662-6
Online ISBN: 978-3-642-21663-3
eBook Packages: Computer Science, Computer Science (R0)