
Social Signal Processing: The Research Agenda

  • Maja Pantic (corresponding author)
  • Roderick Cowie
  • Francesca D’Errico
  • Dirk Heylen
  • Marc Mehu
  • Catherine Pelachaud
  • Isabella Poggi
  • Marc Schroeder
  • Alessandro Vinciarelli

Abstract

The exploration of how we react to the world and interact with it and each other remains one of the greatest scientific challenges. Recent research in the cognitive sciences argues that our common view of intelligence is too narrow, ignoring a crucial range of abilities that matter immensely for how well people do in life. This range of abilities, called social intelligence, includes the ability to express and recognise social signals produced during social interactions, such as agreement, politeness, empathy, friendliness, and conflict, coupled with the ability to manage these signals in order to get along well with others while winning their cooperation. Social Signal Processing (SSP) is the new research domain that aims at understanding and modelling social interactions (human-science goals), and at providing computers with similar abilities in human–computer interaction scenarios (technological goals). SSP is in its infancy, and the journey towards artificial social intelligence and socially aware computing is still long. This research agenda is therefore twofold: a discussion of how the field is understood by those currently active in it, and a discussion of the issues that researchers in this formative field face.


Acknowledgements

This work has been funded in part by the European Community’s 7th Framework Programme [FP7/2007-2013] under grant agreement no. 231287 (SSPNet).


Copyright information

© Springer-Verlag London Limited 2011

Authors and Affiliations

  • Maja Pantic (1, 2), corresponding author
  • Roderick Cowie (3)
  • Francesca D’Errico (4)
  • Dirk Heylen (2)
  • Marc Mehu (5)
  • Catherine Pelachaud (6)
  • Isabella Poggi (4)
  • Marc Schroeder (7)
  • Alessandro Vinciarelli (8, 9)

  1. Computing Dept., Imperial College London, London, UK
  2. EEMCS, University of Twente, Enschede, The Netherlands
  3. Psychology Dept., Queen’s University Belfast, Belfast, UK
  4. Dept. of Education, Roma Tre University, Rome, Italy
  5. Psychology Dept., University of Geneva, Geneva, Switzerland
  6. CNRS, Paris, France
  7. DFKI, Saarbrücken, Germany
  8. Computing Science Dept., University of Glasgow, Glasgow, UK
  9. IDIAP Research Institute, Martigny, Switzerland
