Moody Music Generator: Characterising Control Parameters Using Crowdsourcing

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9027)

Abstract

We characterise the expressive effects of a music generator capable of varying its moods through two control parameters. The two control parameters were constructed on the basis of existing work on valence and arousal in music, and were intended to provide control over those two mood factors. In this paper we conduct a listener study to determine how people actually perceive the various moods the generator can produce. Rather than directly attempting to validate that the two control parameters represent arousal and valence, we conduct an open-ended study to crowdsource labels characterising different parts of this two-dimensional control space. Our aim is to characterise perception of the generator's expressive space without constraining listeners' responses to labels specifically aimed at validating the original arousal/valence motivation. Subjects were asked to listen to clips of generated music over the Internet and to describe the moods with free-text labels. We find that the arousal parameter maps roughly to perceived arousal, but that the nominal "valence" parameter interacts strongly with the arousal parameter and produces different effects in different parts of the control space. We believe that the characterisation methodology described here is general and could be used to map the expressive range of other parameterisable generators.
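As a rough illustration of the characterisation approach summarised above (not the paper's actual pipeline; the grid resolution, sample data, and function names are assumptions), the sketch below shows how free-text mood labels collected at sampled points of a two-parameter control space might be aggregated into per-region label frequencies.

```python
from collections import Counter, defaultdict

# Hypothetical responses: each pairs a sampled (arousal, valence) control
# setting in [0, 1]^2 with a free-text mood label given by a listener.
responses = [
    (0.9, 0.8, "happy"),
    (0.9, 0.2, "tense"),
    (0.1, 0.8, "calm"),
    (0.1, 0.2, "sad"),
    (0.85, 0.15, "anxious"),
]

def region(arousal, valence, bins=2):
    """Map a control setting to a coarse grid cell of the 2D control space."""
    clamp = lambda i: min(i, bins - 1)
    return (clamp(int(arousal * bins)), clamp(int(valence * bins)))

# Tally normalised label frequencies per region of the control space.
label_counts = defaultdict(Counter)
for arousal, valence, label in responses:
    label_counts[region(arousal, valence)][label.strip().lower()] += 1

# The most frequent labels in each region characterise that part of the
# generator's expressive range.
for cell, counts in sorted(label_counts.items()):
    print(cell, counts.most_common(3))
```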


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Marco Scirea, Center for Computer Games Research, IT University of Copenhagen, Copenhagen, Denmark
  • Mark J. Nelson, Anadrome Research, Copenhagen, Denmark
  • Julian Togelius, Game Innovation Lab, New York University, Brooklyn, USA