AUFX-O: Novel Methods for the Representation of Audio Processing Workflows

  • Thomas Wilmering
  • György Fazekas
  • Mark B. Sandler
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9982)

Abstract

This paper introduces the Audio Effect Ontology (AUFX-O), building on previous theoretical models describing audio processing units and workflows in the context of music production. We discuss important conceptualisations of different abstraction layers, why they are necessary for successfully modelling audio effects, and how they are applied. We present use cases concerning the use of effects in music production projects and the creation of audio effect metadata facilitating a linked data service that exposes information about effect implementations. In doing so, we show how our model facilitates knowledge sharing, reproducibility and analysis of audio production workflows.
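
As a rough illustration of the kind of effect metadata such a model enables, the sketch below uses Python with rdflib to describe one effect implementation and one application of it within a production project. The namespace URI and the class and property names (EffectImplementation, EffectApplication, hasParameter, usedImplementation) are placeholders chosen for illustration, not necessarily the terms actually defined by AUFX-O.

```python
# Minimal sketch (not the authors' code): publishing audio effect metadata as RDF
# with rdflib. The aufx: namespace and all class/property names below are
# illustrative assumptions, not necessarily the AUFX-O vocabulary.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

AUFX = Namespace("http://example.org/aufx/")      # placeholder namespace (assumption)
EX = Namespace("http://example.org/project/")     # example project namespace

g = Graph()
g.bind("aufx", AUFX)
g.bind("ex", EX)

# Describe an effect implementation (e.g. a reverb plugin) and one of its parameters.
reverb = EX["reverb-plugin-1"]
g.add((reverb, RDF.type, AUFX.EffectImplementation))               # hypothetical class
g.add((reverb, RDFS.label, Literal("Example Reverb Plugin")))
g.add((reverb, AUFX.hasParameter, EX["reverb-plugin-1/decay-time"]))  # hypothetical property

# Record one application of that implementation within a mixing session.
application = EX["mix-session-1/fx-application-1"]
g.add((application, RDF.type, AUFX.EffectApplication))             # hypothetical class
g.add((application, AUFX.usedImplementation, reverb))              # hypothetical property

# Serialise as Turtle, ready to be exposed by a linked data service.
print(g.serialize(format="turtle"))
```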

Acknowledgments

This work was supported by EPSRC grant EP/L019981/1 (Fusing Audio and Semantic Technologies for Intelligent Music Production and Consumption) and the European Commission H2020 research and innovation grant AudioCommons (688382). Mark B. Sandler acknowledges the support of the Royal Society as a recipient of a Wolfson Research Merit Award.


Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Thomas Wilmering (1)
  • György Fazekas (1)
  • Mark B. Sandler (1)

  1. Centre for Digital Music (C4DM), Queen Mary University of London, London, UK
