Abstract

We describe the MusicMiner system for organizing large collections of music with databionic mining techniques. Visualization based on perceptually motivated audio features and Emergent Self-Organizing Maps enables the unsupervised discovery of timbrally consistent clusters that may or may not correspond to musical genres and artists. We demonstrate the visualization capabilities of the U-Map. Intuitive browsing of large music collections is offered, based on the paradigm of topographic maps. The user can navigate the sound space and interact with the maps to play music or display the context of a song.
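
To make the pipeline behind this description concrete, the following is a minimal sketch, assuming NumPy and random placeholder feature vectors in place of the perceptually motivated audio descriptors used by MusicMiner: each song becomes a feature vector, a Self-Organizing Map is trained on these vectors, and a U-matrix (the basis of the U-Map display) assigns every map node the mean distance of its weight vector to those of its grid neighbors, so that high "ridges" separate timbrally consistent regions. The map here is a small planar toy SOM rather than the large borderless Emergent SOM of the paper, and the helper names train_esom and u_matrix are hypothetical.

import numpy as np

def train_esom(features, rows=20, cols=30, epochs=30, seed=0):
    """Train a toy Self-Organizing Map on per-song feature vectors.

    features : (n_songs, n_dims) array of audio descriptors.
    Returns map weights of shape (rows, cols, n_dims).
    """
    rng = np.random.default_rng(seed)
    n, d = features.shape
    weights = rng.normal(size=(rows, cols, d))
    # Grid coordinates, used for the neighborhood function on the map.
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    for epoch in range(epochs):
        # Learning rate and neighborhood radius shrink over time.
        lr = 0.5 * (1.0 - epoch / epochs)
        sigma = max(1.0, (max(rows, cols) / 2.0) * (1.0 - epoch / epochs))
        for x in features[rng.permutation(n)]:
            # Best-matching unit: node whose weight vector is closest to the song.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood around the BMU on the map grid.
            grid_dist = np.linalg.norm(grid - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2.0 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

def u_matrix(weights):
    """U-matrix height: mean distance of each node's weights to its grid neighbors."""
    rows, cols, _ = weights.shape
    u = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            neigh = []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    neigh.append(np.linalg.norm(weights[i, j] - weights[ni, nj]))
            u[i, j] = np.mean(neigh)
    return u

# Example with random placeholder "songs"; real use would substitute
# perceptually motivated audio descriptors per track.
songs = np.random.default_rng(1).normal(size=(100, 12))
weights = train_esom(songs)
heights = u_matrix(weights)
print(heights.shape)  # (20, 30): a topographic height for every map node

Rendering the returned heights as a shaded relief (for example with a contour or image plot) yields the kind of topographic view a user could click to play the songs mapped to the corresponding nodes.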

Copyright information

© 2006 Springer Berlin Heidelberg

Cite this paper

Mörchen, F., Ultsch, A., Nöcker, M., Stamm, C. (2006). Visual Mining in Music Collections. In: Spiliopoulou, M., Kruse, R., Borgelt, C., Nürnberger, A., Gaul, W. (eds) From Data and Information Analysis to Knowledge Engineering. Studies in Classification, Data Analysis, and Knowledge Organization. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-31314-1_89
