
Modeling Grounding for Interactive Social Companions

  • Technical Contribution
  • Published:
KI - Künstliche Intelligenz

Abstract

Grounding is a process fundamental to all human interaction, and it is therefore crucial for social companions that are meant to interact naturally. Maintaining common ground requires domain knowledge, but it also involves numerous social aspects, such as attention, engagement and empathy. Integrating these aspects, and their interplay with dialog management, into a computational interaction model is a complex task. We present a modeling approach that overcomes this challenge and illustrate it with several social companion applications.
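The interplay the abstract describes, in which facts only enter the common ground when social cues such as attention support it, can be caricatured in a few lines. This is a hypothetical toy sketch, not the authors' model; all names and the gaze-based attention rule are invented for illustration:

```python
# Toy illustration (not the paper's model): a common-ground tracker
# that only grounds presented facts while the user appears attentive.

from dataclasses import dataclass, field


@dataclass
class CommonGround:
    # Facts both interaction partners are assumed to share.
    shared_facts: set = field(default_factory=set)
    # Simple social-state flag, updated from observed cues.
    user_attending: bool = True

    def observe_gaze(self, gaze_on_companion: bool) -> None:
        """Update the attention estimate from a gaze cue."""
        self.user_attending = gaze_on_companion

    def present(self, fact: str) -> None:
        """The companion presents a fact; it is only added to the
        common ground if the user appears to be attending."""
        if self.user_attending:
            self.shared_facts.add(fact)

    def is_grounded(self, fact: str) -> bool:
        return fact in self.shared_facts


cg = CommonGround()
cg.observe_gaze(False)          # user looks away
cg.present("meeting at 3pm")    # not grounded: no attention
cg.observe_gaze(True)           # mutual gaze re-established
cg.present("meeting at 3pm")    # grounded now
print(cg.is_grounded("meeting at 3pm"))  # → True
```

The point of the sketch is only that dialog moves and social-cue tracking must feed one shared state, which is what makes an integrated computational model non-trivial.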


Figs. 1–6 (not shown in this preview)





Acknowledgments

This work has been partially funded by the European Commission within the Seventh Framework Programme in the research project TARDIS, the European Union’s Horizon 2020 Research and Innovation Programme in the research project KRISTINA and the German Federal Ministry of Education and Research in the project EmpaT.

Author information

Corresponding author

Correspondence to Gregor Mehlmann.


About this article


Cite this article

Mehlmann, G., Janowski, K. & André, E. Modeling Grounding for Interactive Social Companions. Künstl Intell 30, 45–52 (2016). https://doi.org/10.1007/s13218-015-0397-5

