Abstract
Grounding is an important process that underlies all human interaction and is therefore also crucial for social companions that are to interact naturally. Maintaining the common ground requires domain knowledge, but it also involves numerous social aspects, such as attention, engagement, and empathy. Integrating these aspects and their interplay with dialog management into a computational interaction model is a complex task. We present a modeling approach that overcomes this challenge and illustrate it with several social companion applications.
Acknowledgments
This work has been partially funded by the European Commission within the Seventh Framework Programme in the research project TARDIS, the European Union’s Horizon 2020 Research and Innovation Programme in the research project KRISTINA and the German Federal Ministry of Education and Research in the project EmpaT.
Cite this article
Mehlmann, G., Janowski, K. & André, E. Modeling Grounding for Interactive Social Companions. Künstl Intell 30, 45–52 (2016). https://doi.org/10.1007/s13218-015-0397-5