Analyzing Music to Music Perceptual Contagion of Emotion in Clusters of Survey-Takers, Using a Novel Contagion Interface: A Case Study of Hindustani Classical Music

  • Sanga Chaki (Email author)
  • Sourangshu Bhattacharya
  • Raju Mullick
  • Priyadarshi Patnaik
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11265)

Abstract

Music has strong potential to convey and elicit emotions, which depend on both context and antecedent stimuli. However, little research is available on how antecedent musical stimuli affect emotion perception in subsequent pieces when one listens to a sequence of music clips with negligible time lag between them. This work attempts to (a) understand how the perception of one music clip is affected by the perception of its antecedent clip and (b) find whether there are any inherent patterns in the way people respond when exposed to music in sequence, with special reference to Hindustani Classical Music (HCM). We call this phenomenon of varying perceptions the perceptual contagion of emotion in music. Findings suggest that when happy clips are preceded by sad or calm clips, perceived happiness increases; when sad clips are preceded by happy or calm clips, perceived sadness increases; and calm clips are perceived as happy or sad when preceded by happy or sad clips, respectively. This suggests that antecedent musical stimuli have the capacity to influence the perception of the music that follows. It is also found that, on average, 85%–95% of listeners are affected by perceptual contagion while listening to music in sequence, with varying degrees of influence.
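The grouping of survey-takers into clusters by degree of influence can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the respondents, the two rating-shift features, and all numbers are hypothetical, and the clustering is plain Lloyd's k-means on small synthetic data.

```python
# Illustrative sketch (hypothetical data): cluster survey-takers by how their
# perceived-emotion ratings shift under different antecedent clips. Each
# respondent is a 2-D point: (increase in perceived happiness when a happy clip
# follows a sad clip, increase in perceived sadness when a sad clip follows a
# happy clip). Clustering uses plain Lloyd's k-means.
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Component-wise mean of a non-empty list of points."""
    n = len(pts)
    return tuple(sum(xs) / n for xs in zip(*pts))

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm: returns (centroids, label per point)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        labels = [min(range(k), key=lambda c: dist2(p, centroids[c]))
                  for p in points]
        # Update step: move each centroid to the mean of its members.
        new_centroids = []
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            new_centroids.append(mean(members) if members else centroids[c])
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return centroids, labels

# Hypothetical rating-shift vectors: three strongly affected respondents
# followed by three weakly affected ones.
respondents = [(0.9, 0.8), (1.0, 0.7), (0.8, 0.9),
               (0.1, 0.0), (0.0, 0.2), (0.2, 0.1)]
centroids, labels = kmeans(respondents, k=2)
```

With well-separated groups like these, the two clusters recover the strongly and weakly affected respondents regardless of which points seed the centroids.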

Keywords

Music perception · Perceptual contagion of emotion · Web-based self-report surveys · Hindustani Classical Music (HCM)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Sanga Chaki (1) (Email author)
  • Sourangshu Bhattacharya (2)
  • Raju Mullick (3)
  • Priyadarshi Patnaik (3)
  1. Advanced Technology Development Center, IIT Kharagpur, India
  2. Computer Science and Engineering Department, IIT Kharagpur, India
  3. Department of Humanities and Social Sciences, IIT Kharagpur, India