Animated Faces, Abstractions and Autism

  • Diana Arellano
  • Volker Helzle
  • Ulrich Max Schaller
  • Reinhold Rauh
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8637)

Abstract

The Agent Framework is a real-time development platform designed for the rapid prototyping of graphical and agent-centric applications. Previous use cases have shown the potential of the Agent Framework, which is currently used in a project combining facial animation and non-photorealistic rendering with their application in autism research.

Keywords

affective characters, facial animation, real-time, healthcare



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Diana Arellano (1)
  • Volker Helzle (1)
  • Ulrich Max Schaller (2)
  • Reinhold Rauh (2)
  1. Filmakademie Baden-Wuerttemberg, Germany
  2. University Medical Center Freiburg, Germany
