The Pursuit of Happiness in Music: Retrieving Valence with Contextual Music Descriptors

  • José Fornari
  • Tuomas Eerola
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5493)

Abstract

In the study of music and emotion, Valence is one of the dimensions of the circumplex model of affect; applied to music, it describes the appraisal of happiness on a scale ranging from sad to happy. The related literature shows, however, that Valence is particularly difficult to predict with a computational model. Since Valence is a contextual music feature, we assume here that its prediction also requires contextual music descriptors. This work describes the use of eight contextual (also known as higher-level) descriptors, previously developed by us, to estimate happiness in music. Each descriptor was tested independently by computing the correlation coefficient between its prediction and the mean Valence rating given by thirty-five listeners for a piece of music. Next, a linear model combining the eight descriptors was built, and its prediction for the same piece is described and compared with two other computational models from the literature designed for the dynamic prediction of music emotion. Finally, we propose an initial investigation of the effects of expressive performance and musical structure on the prediction of Valence. Our descriptors are separated into two groups, performance and structural, and a linear model is built from each group. The Valence predictions of these two models, over two further pieces of music, are compared with the corresponding listeners' mean Valence ratings, and the results are depicted, described and discussed.
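The evaluation procedure described in the abstract, correlating each descriptor's prediction with the listeners' mean Valence rating and then combining the eight descriptors in a linear model, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the descriptor time series and ratings below are synthetic placeholders, and the model is fit by ordinary least squares as one plausible reading of "linear model".

```python
import numpy as np

def pearson_r(prediction, ratings):
    """Correlation coefficient between one descriptor's prediction and
    the listeners' mean Valence rating over the analysis frames."""
    p = prediction - prediction.mean()
    r = ratings - ratings.mean()
    return float((p @ r) / (np.linalg.norm(p) * np.linalg.norm(r)))

def fit_linear_model(descriptors, ratings):
    """Least-squares weights (intercept first) for a linear combination
    of contextual descriptors predicting Valence.
    descriptors: (n_frames, n_descriptors); ratings: (n_frames,)."""
    X = np.column_stack([np.ones(len(ratings)), descriptors])
    w, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    return w

# Synthetic example: 8 hypothetical contextual descriptors over 200 frames.
rng = np.random.default_rng(0)
D = rng.normal(size=(200, 8))                    # mock descriptor time series
valence = D @ rng.normal(size=8) + 0.1 * rng.normal(size=200)  # mock mean ratings

w = fit_linear_model(D, valence)
predicted = np.column_stack([np.ones(200), D]) @ w
print(round(pearson_r(predicted, valence), 3))
```

In the paper's setup, each of the eight descriptors would first be scored individually with `pearson_r` against the listeners' mean rating curve, and the combined linear model would then be evaluated the same way.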

Keywords

music information retrieval · music cognition · music emotion



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • José Fornari, Interdisciplinary Nucleus for Sound Communication (NICS), University of Campinas (Unicamp), Brazil
  • Tuomas Eerola, Music Department, University of Jyväskylä (JYU), Finland
