Simulating Listener Gaze and Evaluating Its Effect on Human Speakers

  • Laura Frädrich (email author)
  • Fabrizio Nunnari
  • Maria Staudte
  • Alexis Heloir
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10498)

Abstract

This paper presents an agent architecture designed as part of a multidisciplinary collaboration between embodied agent development and psycholinguistic experimentation. The collaboration will lead to an empirical study involving an interactive human-like avatar that follows participants’ gaze. Instead of adapting existing off-the-shelf embodied agent solutions, experimenters and developers collaboratively designed and implemented the experiment’s logic and the avatar’s real-time behavior from scratch in the Blender environment, following an agile methodology. Frequent iterations and short implementation sprints allowed the experimenters to focus on the experiment itself and to test many interaction scenarios in a short time.
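The paper gives no implementation details beyond naming Blender as the platform, so the following is only an illustrative sketch of how such gaze-following behavior can be wired up with Blender’s Python API (bpy): a Track To constraint keeps each eye oriented toward a target object, and a per-frame handler moves that target to the participant’s current fixation point. The object names (Eye.L, Eye.R, GazeTarget) and the read_eye_tracker() stub are hypothetical placeholders, not the authors’ code.

    import bpy

    def attach_gaze(eye_name, target_name):
        """Add a Track To constraint so the eye keeps pointing at the target."""
        eye = bpy.data.objects[eye_name]
        constraint = eye.constraints.new(type='TRACK_TO')
        constraint.target = bpy.data.objects[target_name]
        constraint.track_axis = 'TRACK_NEGATIVE_Z'  # assumes the eye mesh looks down its -Z axis
        constraint.up_axis = 'UP_Y'

    def read_eye_tracker():
        """Stub: a real setup would poll the eye-tracking hardware here."""
        return (0.0, 1.5, 1.6)

    def update_gaze_target(scene):
        """Per-frame handler: move the target to the participant's fixation point."""
        bpy.data.objects['GazeTarget'].location = read_eye_tracker()

    # Attach both eyes to the shared gaze target and register the handler.
    for eye_name in ('Eye.L', 'Eye.R'):
        attach_gaze(eye_name, 'GazeTarget')
    bpy.app.handlers.frame_change_pre.append(update_gaze_target)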

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Laura Frädrich (1) (email author)
  • Fabrizio Nunnari (2)
  • Maria Staudte (1)
  • Alexis Heloir (2, 3)
  1. Embodied Spoken Interaction Group, Saarland University, Saarbrücken, Germany
  2. SLSI Group, German Research Center for Artificial Intelligence, Saarbrücken, Germany
  3. LAMIH, UMR CNRS 8201, Université de Valenciennes, Valenciennes, France
