Instrument Independent Musical Genre Classification Using Random 3000 ms Segment

  • Ali Cenk Gedik
  • Adil Alpkocak
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3949)


This paper presents a new approach to musical genre classification that uses a randomly selected segment of at most 3000 ms from each recording. Recordings are classified into three root genres: jazz, classical, and pop. A k-Nearest Neighbor (k-NN) classifier is trained on 180 MIDI recordings, using pitch, harmony, and rhythm features that are independent of instrumentation. The classifier is evaluated both quantitatively and cognitively on 45 MIDI recordings from outside the training set, using specificity, selectivity, and accuracy measures. Experiments demonstrate the good accuracy of the classifier and its high potential for use.
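The classification step described above can be illustrated with a minimal k-NN sketch. The paper's actual pitch, harmony, and rhythm feature extraction from MIDI is not reproduced here; the `train` vectors below are hypothetical placeholders standing in for per-segment feature summaries, and `knn_classify` is a standard majority-vote k-NN, not the authors' exact implementation.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training examples, using Euclidean distance (standard k-NN)."""
    dists = sorted(
        (math.dist(features, query), genre) for features, genre in train
    )
    votes = Counter(genre for _, genre in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 3-D feature vectors (stand-ins for pitch, harmony, and
# rhythm summaries of a 3000 ms MIDI segment) with genre labels.
train = [
    ([0.9, 0.8, 0.2], "classical"),
    ([0.8, 0.9, 0.3], "classical"),
    ([0.3, 0.4, 0.9], "pop"),
    ([0.2, 0.5, 0.8], "pop"),
    ([0.6, 0.2, 0.6], "jazz"),
    ([0.5, 0.3, 0.7], "jazz"),
]

print(knn_classify(train, [0.85, 0.85, 0.25]))
```

With k = 3 the query above falls nearest the two classical examples, so the majority vote returns "classical"; in the paper, the same voting scheme would operate on features extracted from the randomly chosen segment.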


Keywords: Classical Music, Musical Genre, Music Information Retrieval, Genre Classification, Rhythm Feature





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Ali Cenk Gedik, Department of Musicology, Dokuz Eylul University, Narlıdere, İzmir, Turkey
  • Adil Alpkocak, Department of Computer Engineering, Dokuz Eylul University, Buca, İzmir, Turkey