Ontological Representation of Audio Features

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9982)


Feature extraction algorithms in Music Informatics aim at deriving statistical and semantic information directly from audio signals. These range from energies in several frequency bands to musical information such as key, chords or rhythm. Given the increasing diversity and complexity of features and algorithms in this domain, applications call for a common structured representation to facilitate interoperability, reproducibility and machine interpretability. We propose a solution relying on Semantic Web technologies that is designed to serve a dual purpose: (1) to represent computational workflows of audio features and (2) to provide a common structure for feature data, enabling the use of Linked Open Data principles and technologies in Music Informatics. The Audio Feature Ontology is based on an analysis of existing tools and the music informatics literature, which was instrumental in guiding the ontology engineering process. The ontology provides a descriptive framework for expressing different conceptualisations of the audio feature extraction domain and enables the design of linked data formats for representing feature data. In this paper, we discuss important modelling decisions and introduce a harmonised ontology library consisting of modular interlinked ontologies that describe the different entities and activities involved in music creation, production and publishing.
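To make the idea of a linked data format for feature data concrete, the sketch below builds a minimal JSON-LD document for an onset-detection result using only the Python standard library. The prefix `afx`, the term names (`OnsetTimes`, `computedFrom`, `sampleRate`, `value`) and all URIs are illustrative assumptions for this example only, not the published terms of the Audio Feature Ontology.

```python
import json

# Hypothetical @context: maps short keys to assumed ontology IRIs.
# "afx:" is an example prefix, not the actual Audio Feature Ontology namespace.
context = {
    "afx": "http://example.org/audio-features#",
    "onsets": "afx:OnsetTimes",
    "signal": {"@id": "afx:computedFrom", "@type": "@id"},
    "sampleRate": "afx:sampleRate",
    "values": "afx:value",
}

# A feature-data document: typed, identified by IRI, and linked to the
# audio signal it was computed from, so it can join a Linked Data graph.
feature_doc = {
    "@context": context,
    "@id": "http://example.org/analysis/track-42/onsets",
    "@type": "onsets",
    "signal": "http://example.org/audio/track-42.wav",
    "sampleRate": 44100,
    "values": [0.12, 0.58, 1.07],  # onset times in seconds
}

serialized = json.dumps(feature_doc, indent=2)
```

Because the `@context` carries the mapping from short keys to IRIs, the same compact document can be consumed either as plain JSON by existing tools or expanded into RDF triples by a JSON-LD processor, which is the interoperability property the paper's linked data formats aim for.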


Keywords: Semantic audio analysis · Music Information Retrieval · Linked Open Data · Semantic Web technologies



This work was supported by EPSRC Grant EP/L019981/1, “Fusing Audio and Semantic Technologies for Intelligent Music Production and Consumption”, and the European Commission H2020 research and innovation grant AudioCommons (688382). Sandler acknowledges the support of the Royal Society as a recipient of a Wolfson Research Merit Award.



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

Queen Mary University of London, London, UK
