A New Multi-modal Dataset for Human Affect Analysis

  • Haolin Wei
  • David S. Monaghan
  • Noel E. O’Connor
  • Patricia Scanlon
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8749)

Abstract

In this paper we present a new multi-modal dataset of spontaneous three-way human interactions. Participants were recorded in an unconstrained environment at various locations during a sequence of debates held over a Skype-style video conference. An additional depth modality was introduced, permitting the capture of 3D information alongside the video and audio signals. The dataset comprises 16 participants and is subdivided into 6 unique sections. It was manually annotated on a continuous scale across 5 affective dimensions: arousal, valence, agreement, content and interest. The annotation was performed by three human annotators, and their ensemble average is provided as the ground-truth label in the dataset. The corpus enables the analysis of human affect during conversations in a real-life scenario. We first briefly review existing affect datasets and the methodologies related to affect dataset construction, and then detail how our unique dataset was constructed.
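
The ground-truth trace for each affective dimension is the ensemble average of the three annotators' continuous annotations. As a minimal illustration, the Python sketch below averages aligned per-frame traces with an equal-weight mean; the example values, array layout and equal weighting are assumptions for illustration rather than details taken from the paper.

```python
import numpy as np

# Hypothetical per-frame annotation traces from the three annotators for one
# affective dimension (e.g. arousal), time-aligned and sampled at a common rate.
annotator_traces = np.array([
    [0.10, 0.25, 0.40, 0.35],   # annotator 1
    [0.05, 0.30, 0.45, 0.30],   # annotator 2
    [0.15, 0.20, 0.35, 0.40],   # annotator 3
])

# Ensemble average: the mean across annotators at each time step,
# used here as the per-frame ground-truth trace for this dimension.
ensemble_average = annotator_traces.mean(axis=0)
print(ensemble_average)  # -> [0.1  0.25 0.4  0.35]
```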

Keywords

Spontaneous affect dataset · Continuous annotation · Multi-modal · Depth · Affect recognition

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Haolin Wei (1)
  • David S. Monaghan (1)
  • Noel E. O’Connor (1)
  • Patricia Scanlon (2)
  1. Insight Centre for Data Analytics, Dublin City University, Ireland
  2. Bell Labs Ireland, Alcatel-Lucent, Dublin, Ireland