Coordinating the Generation of Signs in Multiple Modalities in an Affective Agent

  • Jean-Claude Martin
  • Laurence Devillers
  • Amaryllis Raouzaiou
  • George Caridakis
  • Zsófia Ruttkay
  • Catherine Pelachaud
  • Maurizio Mancini
  • Radek Niewiadomski
  • Hannes Pirker
  • Brigitte Krenn
  • Isabella Poggi
  • Emanuela Magno Caldognetto
  • Federica Cavicchio
  • Giorgio Merola
  • Alejandra García Rojas
  • Frédéric Vexo
  • Daniel Thalmann
  • Arjan Egges
  • Nadia Magnenat-Thalmann
Part of the Cognitive Technologies book series (COGTECH)


To be believable, embodied conversational agents (ECAs) must express emotions consistently and naturally across modalities: the ECA has to display coordinated signs of emotion during realistic emotional behaviour. Such a capability requires studying and representing emotions and the coordination of modalities during non-basic, realistic human behaviour; defining languages in which such behaviours can be specified for display by the ECA; and having access to mono-modal representations such as gesture repositories. This chapter is concerned with coordinating the generation of signs in multiple modalities in such an affective agent. Designers of an affective agent need to know how it should coordinate its facial expression, speech, gestures and other modalities in order to show emotion, since this synchronisation of modalities is a central feature of emotional expression.
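The coordination problem described above can be made concrete with a small sketch. The chapter's actual representation languages are not reproduced here; the `Sign` structure, the field names, and the `consistent` check below are purely illustrative assumptions, standing in for the idea that a multimodal plan must convey one emotion with temporally aligned signs:

```python
from dataclasses import dataclass

# Hypothetical sketch: one "sign" per modality, timed in seconds relative
# to the utterance. Names and structure are illustrative only, not the
# chapter's representation languages.
@dataclass
class Sign:
    modality: str   # e.g. "face", "gesture", "gaze"
    emotion: str    # emotion label the sign expresses
    start: float    # onset time within the utterance
    end: float      # offset time within the utterance

def consistent(signs, window=0.25):
    """Return True if all signs convey the same emotion and their time
    intervals overlap within a tolerance `window` (a crude stand-in for
    cross-modal synchrony)."""
    if not signs:
        return True
    same_emotion = len({s.emotion for s in signs}) == 1
    latest_start = max(s.start for s in signs)
    earliest_end = min(s.end for s in signs)
    overlap = earliest_end - latest_start >= -window
    return same_emotion and overlap

plan = [
    Sign("face", "joy", 0.0, 1.2),
    Sign("gesture", "joy", 0.3, 1.0),
    Sign("gaze", "joy", 0.1, 0.9),
]
print(consistent(plan))  # True: same emotion, overlapping intervals
```

A real ECA pipeline would of course go far beyond this check, e.g. resolving conflicts between modalities and scheduling signs against speech phonemes, but even this toy version shows why a shared timeline and a shared emotion label across modalities are needed.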


Keywords: Facial Expression · Hand Gesture · Nonverbal Behaviour · Facial Action Coding System · Postural Behaviour



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Jean-Claude Martin (1)
  • Laurence Devillers (2)
  • Amaryllis Raouzaiou (3)
  • George Caridakis (4)
  • Zsófia Ruttkay (5)
  • Catherine Pelachaud (6)
  • Maurizio Mancini (7)
  • Radek Niewiadomski (8)
  • Hannes Pirker (9)
  • Brigitte Krenn (9)
  • Isabella Poggi (10)
  • Emanuela Magno Caldognetto (11)
  • Federica Cavicchio (12)
  • Giorgio Merola (13)
  • Alejandra García Rojas (14)
  • Frédéric Vexo (14)
  • Daniel Thalmann (14)
  • Arjan Egges (15)
  • Nadia Magnenat-Thalmann (16)
  1. Computer Sciences Laboratory for Mechanics and Engineering Sciences (LIMSI), Paris, France
  2. LIMSI-CNRS, Orsay, France
  3. National Technical University of Athens, Athens, Greece
  4. Image, Video and Multimedia Systems Lab, National Technical University of Athens, Athens, Greece
  5. University of Twente, Enschede, The Netherlands
  6. CNRS-LTCI, TELECOM ParisTech, Paris, France
  7. InfoMus Lab, Università di Genova, Genoa, Italy
  8. Telecom ParisTech, Paris, France
  9. Austrian Research Institute for Artificial Intelligence, Vienna, Austria
  10. University of Rome 3, Rome, Italy
  11. Institute of Cognitive Sciences and Technologies, Rome, Italy
  12. CIMeC, Università di Trento, Trento, Italy
  13. Università Roma Tre, Rome, Italy
  14. Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
  15. Universiteit Utrecht, Utrecht, The Netherlands
  16. University of Geneva, Geneva, Switzerland
