Abstract
The aim of the Multimodal and Multiperson Corpus of Laughter in Interaction (MMLI) was to collect multimodal data of laughter, with a focus on full-body movements and different laughter types. The corpus contains both induced and interactive laughs from human triads. In total, we collected 500 laughter episodes from 16 participants. The data consist of 3D body position information, facial tracking, multiple audio and video channels, as well as physiological data.
In this paper we discuss methodological and technical issues related to this data collection, including techniques for laughter elicitation and the synchronization of the independent data sources. We also present the enhanced visualization and segmentation tool used to segment the captured data. Finally, we present the data annotation as well as preliminary results of an analysis of nonverbal behavior patterns in laughter.
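To make the synchronization problem concrete, the following minimal Python sketch aligns independently timestamped recordings onto one common clock by linear interpolation. This is an illustrative assumption, not the pipeline used for MMLI; the stream names, sampling rates, and resampling approach are invented for the example, and it presumes the per-stream clock offsets have already been corrected (e.g. via a shared synchronization event).

    # Illustrative sketch (not the authors' pipeline): align two independently
    # recorded streams -- e.g. motion capture at 120 Hz and an audio feature at
    # 100 Hz -- onto a common timeline via linear interpolation. All stream
    # names and rates below are assumptions made for this example.
    import numpy as np

    def align_streams(streams, fs_common=50.0):
        """Resample each (timestamps, values) stream onto one common clock.

        streams   -- dict mapping a stream name to (t, x) arrays, where t holds
                     offset-corrected timestamps in seconds and x the samples.
        fs_common -- target sampling rate of the shared timeline, in Hz.
        """
        # Use only the interval covered by every stream, so no stream is
        # extrapolated beyond its recorded range.
        t_start = max(t[0] for t, _ in streams.values())
        t_end = min(t[-1] for t, _ in streams.values())
        t_common = np.arange(t_start, t_end, 1.0 / fs_common)
        return t_common, {
            name: np.interp(t_common, t, x) for name, (t, x) in streams.items()
        }

    # Toy usage: two streams with different rates and a small clock offset.
    t_mocap = np.arange(0.0, 10.0, 1 / 120)    # 120 Hz motion capture channel
    t_audio = np.arange(0.05, 10.0, 1 / 100)   # 100 Hz audio energy feature
    streams = {
        "shoulder_y": (t_mocap, np.sin(2 * np.pi * 0.5 * t_mocap)),
        "rms_energy": (t_audio, np.abs(np.sin(2 * np.pi * 0.5 * t_audio))),
    }
    t, aligned = align_streams(streams)
    print(t.shape, aligned["shoulder_y"].shape, aligned["rms_energy"].shape)

After alignment, every modality shares one time axis, so segment boundaries (e.g. laugh episode onsets) defined on one channel can be transferred directly to the others; in practice, recordings from independent capture systems would first need a common reference event to estimate the clock offsets this sketch takes as given.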