MMLI: Multimodal Multiperson Corpus of Laughter in Interaction

  • Radoslaw Niewiadomski
  • Maurizio Mancini
  • Tobias Baur
  • Giovanna Varni
  • Harry Griffin
  • Min S. H. Aung
Conference paper

DOI: 10.1007/978-3-319-02714-2_16

Volume 8212 of the book series Lecture Notes in Computer Science (LNCS)
Cite this paper as:
Niewiadomski R., Mancini M., Baur T., Varni G., Griffin H., Aung M.S.H. (2013) MMLI: Multimodal Multiperson Corpus of Laughter in Interaction. In: Salah A.A., Hung H., Aran O., Gunes H. (eds) Human Behavior Understanding. HBU 2013. Lecture Notes in Computer Science, vol 8212. Springer, Cham

Abstract

The aim of the Multimodal and Multiperson Corpus of Laughter in Interaction (MMLI) was to collect multimodal data of laughter with a focus on full-body movements and different laughter types. It contains both induced and interactive laughs from human triads. In total, we collected 500 laugh episodes from 16 participants. The data consists of 3D body position information, facial tracking, multiple audio and video channels, as well as physiological data.

In this paper, we discuss methodological and technical issues related to this data collection, including techniques for laughter elicitation and the synchronization of independent data sources. We also present the enhanced visualization and segmentation tool used to segment the captured data. Finally, we present the data annotation as well as preliminary results of the analysis of nonverbal behavior patterns in laughter.
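
The abstract mentions synchronizing independently recorded data sources (audio, video, motion capture, physiology). As an illustration only, and not the method described in the paper, a common way to align two independently clocked recordings is to cross-correlate a reference event (e.g. a clap or sync beep) captured by both devices. The sketch below uses hypothetical names, a synthetic "clap", and an assumed 48 kHz sampling rate:

```python
# Minimal sketch (not the authors' method) of estimating the time offset
# between two independently recorded streams by cross-correlating a shared
# reference signal. All names and parameters here are hypothetical.
import numpy as np
from scipy.signal import correlate

def estimate_offset(ref_a, ref_b, fs):
    """Estimate the lag (in seconds) of stream A relative to stream B,
    given a reference event captured by both streams at rate fs."""
    # Full cross-correlation; its peak indicates the relative shift.
    xcorr = correlate(ref_a, ref_b, mode="full")
    lag_samples = np.argmax(xcorr) - (len(ref_b) - 1)
    return lag_samples / fs

# Example: two mono channels sampled at 48 kHz with a synthetic "clap".
fs = 48_000
t = np.arange(fs) / fs
clap = np.exp(-40 * t) * np.sin(2 * np.pi * 1000 * t)
stream_a = np.concatenate([np.zeros(fs // 2), clap])  # clap at 0.50 s
stream_b = np.concatenate([np.zeros(fs // 4), clap])  # clap at 0.25 s

offset = estimate_offset(stream_a, stream_b, fs)
print(f"stream A lags stream B by {offset:.3f} s")    # ~0.250 s
```

The estimated offset can then be applied to shift one stream's timestamps onto the other's clock before resampling all modalities to a common rate.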

Copyright information

© Springer International Publishing Switzerland 2013

Authors and Affiliations

  • Radoslaw Niewiadomski (1)
  • Maurizio Mancini (1)
  • Tobias Baur (2)
  • Giovanna Varni (1)
  • Harry Griffin (3)
  • Min S. H. Aung (3)
  1. Università degli Studi di Genova, Genova, Italy
  2. Augsburg University, Augsburg, Germany
  3. University College London, London, UK