MMLI: Multimodal Multiperson Corpus of Laughter in Interaction

  • Radoslaw Niewiadomski
  • Maurizio Mancini
  • Tobias Baur
  • Giovanna Varni
  • Harry Griffin
  • Min S. H. Aung
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8212)


The aim of the Multimodal and Multiperson Corpus of Laughter in Interaction (MMLI) was to collect multimodal data of laughter, with a focus on full-body movements and different laughter types. The corpus contains both induced and interactive laughs from human triads. In total, we collected 500 laugh episodes from 16 participants. The data consists of 3D body position information, facial tracking, multiple audio and video channels, as well as physiological data.

In this paper, we discuss methodological and technical issues related to this data collection, including techniques for laughter elicitation and for synchronization between independent data sources. We also present the enhanced visualization and segmentation tool used to segment the captured data. Finally, we present the data annotation as well as preliminary results of an analysis of nonverbal behavior patterns in laughter.
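The abstract mentions synchronization between independent data sources (e.g., Kinect skeleton streams, video, audio, physiological sensors), each running on its own clock and sampling rate. As an illustration only (the paper's actual synchronization pipeline is described in the full text, and the function and parameter names below are hypothetical), one common approach is nearest-timestamp alignment of each stream against a reference clock:

```python
from bisect import bisect_left

def align_to_reference(ref_times, stream_times, stream_values, max_skew=0.02):
    """For each reference timestamp, pick the nearest sample from an
    independently clocked stream, or None if no sample lies within
    max_skew seconds. stream_times must be sorted ascending."""
    aligned = []
    for t in ref_times:
        i = bisect_left(stream_times, t)
        # Candidates: the sample just before t and the one at/after t.
        best = None
        for j in (i - 1, i):
            if 0 <= j < len(stream_times):
                if best is None or abs(stream_times[j] - t) < abs(stream_times[best] - t):
                    best = j
        if best is not None and abs(stream_times[best] - t) <= max_skew:
            aligned.append(stream_values[best])
        else:
            aligned.append(None)  # gap: no sample close enough
    return aligned

# Example: a 25 fps video clock vs. a ~30 Hz skeleton stream (synthetic data).
video_t = [k / 25 for k in range(5)]
skel_t = [k / 30 for k in range(6)]
skel_v = [f"frame{k}" for k in range(6)]
print(align_to_reference(video_t, skel_t, skel_v))
```

In practice such alignment presupposes a shared time base, typically established with a synchronization signal (such as a clap or flash) visible to all recording devices, which is the kind of issue the paper addresses.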





Copyright information

© Springer International Publishing Switzerland 2013

Authors and Affiliations

  • Radoslaw Niewiadomski (1)
  • Maurizio Mancini (1)
  • Tobias Baur (2)
  • Giovanna Varni (1)
  • Harry Griffin (3)
  • Min S. H. Aung (3)

  1. Università degli Studi di Genova, Genova, Italy
  2. Augsburg University, Augsburg, Germany
  3. University College London, London, UK
