Modelling Users’ Affect in Job Interviews: Technological Demo

  • Kaśka Porayska-Pomsta
  • Keith Anderson
  • Ionut Damian
  • Tobias Baur
  • Elisabeth André
  • Sara Bernardini
  • Paola Rizzo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7899)

Abstract

This demo presents an approach to recognising and interpreting social cue-based interactions in computer-enhanced job interview simulations. We show which social cues and complex mental states of the user are relevant in this interaction context, how they can be interpreted using static Bayesian networks, and how they can be recognised automatically and in real time using state-of-the-art sensor technology.
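To make the static-Bayesian-network idea concrete, the sketch below shows how a posterior over a hidden mental state could be computed from observed social cues. This is an illustrative toy only, not the authors' model: the cue names (gaze aversion, long speech pauses), the hidden state, and every probability are invented for demonstration, and the cues are assumed conditionally independent given the state (a naive-Bayes-structured network).

```python
# Toy illustration (NOT the authors' model): a static Bayesian network with
# one hidden node ("is the interviewee anxious?") and two observed social
# cues. All probabilities below are invented for demonstration purposes.

def posterior_anxious(gaze_averted: bool, long_pauses: bool) -> float:
    """Return P(anxious | observed cues) via Bayes' rule."""
    p_anxious = 0.3  # invented prior P(anxious)

    # Invented likelihoods P(cue present | hidden state).
    p_gaze = {True: 0.7, False: 0.2}    # P(gaze averted | anxious / not anxious)
    p_pause = {True: 0.6, False: 0.25}  # P(long pauses  | anxious / not anxious)

    def likelihood(anxious: bool) -> float:
        # Cues are assumed conditionally independent given the hidden state.
        g = p_gaze[anxious] if gaze_averted else 1 - p_gaze[anxious]
        p = p_pause[anxious] if long_pauses else 1 - p_pause[anxious]
        return g * p

    numerator = p_anxious * likelihood(True)
    evidence = numerator + (1 - p_anxious) * likelihood(False)
    return numerator / evidence

# Observing both cues should raise the posterior above the prior;
# observing neither should lower it.
print(posterior_anxious(True, True))    # higher than the 0.3 prior
print(posterior_anxious(False, False))  # lower than the 0.3 prior
```

A real system of this kind would condition on many more cue nodes and on contextual nodes (e.g. the current interview question), but the inference step per time slice has the same shape: multiply the prior by per-cue likelihoods and normalise.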

Keywords

Social signal processing, complex mental states modelling, job interviews, Bayesian inference



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Kaśka Porayska-Pomsta (1)
  • Keith Anderson (2)
  • Ionut Damian (3)
  • Tobias Baur (3)
  • Elisabeth André (3)
  • Sara Bernardini (4)
  • Paola Rizzo (1)
  1. London Knowledge Lab, Institute of Education, London, UK
  2. Tandemis Limited, London, UK
  3. Human Centered Multimedia, Augsburg University, Augsburg, Germany
  4. Department of Informatics, King’s College London, UK
