Extraction of Structural Patterns in Popular Melodies

  • Esben Skovenborg
  • Jens Arnspang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2771)

Abstract

A method is presented to extract musical features from melodic material. Various viewpoints are defined to focus on complementary aspects of the material. To model the melodic context, two measures of entropy are employed: a set of trained probabilistic models captures local structures via the information-theoretic notion of unpredictability, and an alternative entropy measure, based on adaptive coding, is developed to reflect phrasing or motifs. A collection of popular music, in the form of MIDI files, is analysed using the entropy measures together with techniques from pattern recognition. To visualise the topology of the ‘tune-space’, a self-organising map is trained on the extracted feature parameters, leading to the Tune Map.
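
The pipeline sketched in the abstract can be illustrated with a small example. The code below is not the authors' implementation: the two viewpoints (pitch intervals and duration ratios), the feature set, the map size and all training parameters are illustrative assumptions. It computes a zeroth-order Shannon entropy per viewpoint as a feature vector for each tune, then trains a tiny hand-rolled self-organising map on those vectors, in the spirit of the Tune Map.

    import numpy as np
    from collections import Counter

    def shannon_entropy(symbols):
        """Zeroth-order Shannon entropy (bits per symbol) of a discrete sequence."""
        counts = np.array(list(Counter(symbols).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def melodic_features(pitches, durations):
        """Feature vector for one tune: entropy of two illustrative viewpoints."""
        intervals = np.diff(pitches).tolist()                 # pitch-interval viewpoint
        ratios = (np.array(durations[1:], float) /
                  np.array(durations[:-1], float)).tolist()   # duration-ratio viewpoint
        return np.array([shannon_entropy(intervals), shannon_entropy(ratios)])

    def train_som(data, grid=(8, 8), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
        """Minimal self-organising map: places feature vectors on a 2-D grid."""
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.random((h, w, data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
        for t in range(epochs):
            lr = lr0 * (1.0 - t / epochs)                     # decaying learning rate
            sigma = 0.5 + sigma0 * (1.0 - t / epochs)         # shrinking neighbourhood
            for x in data[rng.permutation(len(data))]:
                d = np.linalg.norm(weights - x, axis=-1)
                bmu = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
                g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
                weights += lr * g[..., None] * (x - weights)  # pull neighbourhood towards x
        return weights

    # Toy usage: two short melodies (MIDI pitches, note lengths in beats)
    tune_a = ([60, 62, 64, 65, 67, 65, 64, 62, 60], [1, 1, 1, 1, 2, 1, 1, 1, 2])
    tune_b = ([60, 60, 67, 67, 69, 69, 67], [1, 1, 1, 1, 1, 1, 2])
    data = np.vstack([melodic_features(*tune_a), melodic_features(*tune_b)])
    tune_map = train_som(data)                                # 8 x 8 x 2 codebook

In the paper the feature vectors are of course richer and the corpus far larger; the sketch only shows how per-viewpoint entropies can become inputs to a self-organising map.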

Keywords

Melodic similarity · Musical genre · Feature extraction · Entropy · Self-organising feature map · Popular music

References

  1. Alhoniemi, E., Himberg, J., Kiviluoto, K., Parviainen, J., Vesanto, J.: SOM Toolbox Version 1.0. Laboratory of Information and Computer Science, Helsinki University of Technology (1997), http://www.cis.hut.fi/
  2. Aucouturier, J.-J., Pachet, F.: Music Similarity Measures: What’s the Use? In: Proc. of the Third International Conf. on Music Information Retrieval (ISMIR 2002), pp. 157–163 (2002)
  3. Bell, T.C., Cleary, J.G., Witten, I.H.: Text Compression. Prentice Hall, Englewood Cliffs (1990)
  4. Conklin, D., Cleary, J.G.: Modelling and generating music using multiple viewpoints. In: Proc. of the First Workshop on AI and Music (AAAI 1988), pp. 125–137 (1988)
  5. Conklin, D., Witten, I.H.: Multiple viewpoint systems for music prediction. Journal of New Music Research 24(1), 51–73 (1995)
  6. Downie, J.S.: Music Information Retrieval. In: Cronin, B. (ed.) Annual Review of Information Science and Technology, ch. 7, vol. 37, pp. 295–340. Information Today (2003)
  7. Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification, 2nd edn. John Wiley & Sons, New York (2000)
  8. Foote, J.: An overview of audio information retrieval. Multimedia Systems 7(1), 2–10 (1999)
  9. Handel, S.: Listening: An Introduction to the Perception of Auditory Events. The MIT Press, Cambridge (1989)
  10. Hewlett, W.B., Selfridge-Field, E. (eds.): Melodic Similarity - Concepts, Procedures and Applications. Computing in Musicology, vol. 11. MIT Press, Cambridge (1998)
  11. Howell, P., West, R., Cross, I. (eds.): Representing Musical Structure. Cognitive Science Series. Academic Press, London (1991)
  12. Jolliffe, I.T.: Principal Component Analysis. Springer, New York (1986)
  13. Kohonen, T.: Self-Organizing Maps, 2nd edn. Springer, Berlin (1997)
  14. Krumhansl, C.L.: Cognitive Foundations of Musical Pitch. Cambridge University Press, Cambridge (1990)
  15. Lemström, K.: String Matching Techniques for Music Retrieval. PhD Thesis, Dept. of Computer Science, Report A-2000-04, University of Helsinki (2000)
  16. Madsen, L.B., Grøn, J., Krøgholt, D. (eds.): Folkehøjskolens Sangbog, 17th edn. Foreningen for folkehøjskolers forlag, Gylling, Denmark (1997)
  17. MPEG: Information Technology - Multimedia Content Description Interface - Part 4: Audio (ISO/IEC CD 15938-4, part of MPEG-7). ISO/IEC JTC 1/SC 29/WG 11 (2001)
  18. MPEG Requirements Group: MPEG-7 Context and Objectives (ISO/IEC JTC1/SC29/WG11, N2460). International Organisation for Standardisation (1998)
  19. Pachet, F., Cazaly, D.: A Taxonomy of Musical Genres. In: Proc. of the Content-Based Multimedia Information Access Conference (RIAO), Paris (2000)
  20. Ponsford, D., Wiggins, G., Mellish, C.: Statistical learning of harmonic movement. Journal of New Music Research 28(2) (1999)
  21. Rauber, A., Pampalk, E., Merkl, D.: Using psycho-acoustic models and SOMs to create a hierarchical structuring of music by sound similarity. In: Proc. of the Int. Symposium on Music Information Retrieval (ISMIR), Paris (2002)
  22. Shannon, C.E.: A mathematical theory of communication. The Bell System Technical Journal 27, 379–423, 623–656 (1948)
  23. Shannon, C.E.: Prediction and Entropy of Printed English. In: Sloane, N.J.A., Wyner, A.D. (eds.) Claude Elwood Shannon: Collected Papers. IEEE Press, New York (1993)
  24. Skovenborg, E.: Musik Database. Unpublished project report, Computer Science Dept., University of Copenhagen (1997). Reviewed in: Barchager, H.: Datalogisk løsning på et musikpædagogisk problem [A computer-science solution to a music-pedagogical problem]. Anvendt Viden 97(3), Videnskabsbutikken, Københavns Universitet (1997)
  25. Skovenborg, E.: Classification and Evolution of Music - Extracting Musical Features from Melodic Material. Master’s Thesis, Computer Science Dept. (DIKU), University of Copenhagen (2000)
  26. Witten, I.H., Manzara, L.C., Conklin, D.: Comparing Human and Computational Models of Music Prediction. Computer Music Journal 18(1), 70–80 (1994)

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Esben Skovenborg (1)
  • Jens Arnspang (2)
  1. Department of Computer Science, University of Aarhus, Århus, Denmark
  2. Department of Software and Media Technology, Aalborg University Esbjerg, Esbjerg, Denmark
