Rhythm and Timbre Analysis for Carnatic Music Processing

  • Rushiraj Heshi
  • S. M. Suma
  • Shashidhar G. Koolagudi
  • Smriti Bhandari
  • K. S. Rao
Conference paper
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 43)


In this work, rhythm- and timbre-related features are analyzed to identify the raga and tala of a piece of Carnatic music. Rhythm patterns and rhythm histograms serve as rhythm features, while zero crossing rate (ZCR), spectral centroid, spectral roll-off, spectral flux, and spectral entropy serve as timbre features. The music clips contain both instrumental and vocal passages. A T-test is used to measure similarity between feature vectors, and classification is performed with Gaussian Mixture Models (GMM). The results show that rhythm patterns distinguish different ragas and talas with average accuracies of 89.98 % and 86.67 %, respectively.
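The timbre descriptors named above (ZCR, spectral centroid, roll-off, entropy) and a Gaussian classifier can be sketched as follows. This is a minimal illustration, not the authors' implementation: frame length, the 85 % roll-off threshold, and the use of a single diagonal-covariance Gaussian per class (a one-component stand-in for the paper's GMMs) are assumptions.

```python
import numpy as np

def timbre_features(frame, sr):
    """Frame-level timbre descriptors (illustrative sketch, not the
    paper's exact feature extraction)."""
    # Zero crossing rate: fraction of adjacent samples with a sign change.
    zcr = float(np.mean(np.abs(np.diff(np.signbit(frame).astype(int)))))
    power = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    total = power.sum() + 1e-12
    # Spectral centroid: power-weighted mean frequency.
    centroid = float((freqs * power).sum() / total)
    # Spectral roll-off: frequency below which 85% of the power lies
    # (85% is an assumed threshold).
    rolloff = float(freqs[np.searchsorted(np.cumsum(power), 0.85 * total)])
    # Spectral entropy of the normalised power spectrum.
    p = power / total
    entropy = float(-(p * np.log2(p + 1e-12)).sum())
    return np.array([zcr, centroid, rolloff, entropy])

def fit_gaussian(X):
    """Fit one diagonal Gaussian per class (simplified one-component GMM)."""
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def log_likelihood(x, mean, var):
    return float(-0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var))

def classify(x, models):
    """Pick the class whose Gaussian gives the highest log-likelihood."""
    return max(models, key=lambda label: log_likelihood(x, *models[label]))
```

As a usage sketch, fitting `fit_gaussian` on feature vectors from labelled clips of each raga or tala and calling `classify` on an unseen clip's features mirrors the maximum-likelihood decision rule a GMM classifier applies, with the mixture reduced to a single component for brevity.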


Keywords: Carnatic music · Raga · Tala · Gaussian Mixture Models · Rhythm · Timbre · T-test



Copyright information

© Springer India 2016

Authors and Affiliations

  • Rushiraj Heshi¹
  • S. M. Suma²
  • Shashidhar G. Koolagudi²
  • Smriti Bhandari¹
  • K. S. Rao³

  1. Walchand College of Engineering, Sangli, India
  2. National Institute of Technology, Surathkal, India
  3. Indian Institute of Technology, Kharagpur, India
