Encyclopedia of Computer Graphics and Games

Living Edition
Editors: Newton Lee

Dynamic Music Generation, Audio Analysis-Synthesis Methods

  • Gilberto Bernardes
  • Diogo Cocharro
Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-08234-9_211-1

Definitions

Dynamic music generation systems create ever-different, continually changing musical structures based on formalized computational methods. This entry covers a subset of these methods that analyze existing musical audio to formalize its structure, which in turn guides higher-level transformations synthesized as new musical audio streams.
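The analysis-synthesis principle described above can be sketched minimally: segment an audio corpus into frames, extract a descriptor per frame, and resynthesize a new stream by concatenating the frames that best match a target descriptor curve. The frame size, RMS descriptor, and target curve below are illustrative assumptions for this sketch, not part of any particular published system.

```python
import math

def analyze(audio, frame_len):
    """Analysis stage: segment the signal into frames and compute
    one descriptor (RMS energy) per frame."""
    frames = [audio[i:i + frame_len]
              for i in range(0, len(audio) - frame_len + 1, frame_len)]
    rms = [math.sqrt(sum(x * x for x in f) / frame_len) for f in frames]
    return frames, rms

def resynthesize(frames, rms, target_curve):
    """Synthesis stage: for each target value, concatenate the corpus
    frame whose descriptor is closest to it."""
    out = []
    for t in target_curve:
        i = min(range(len(rms)), key=lambda k: abs(rms[k] - t))
        out.extend(frames[i])
    return out

# Toy corpus: a 220 Hz sine whose amplitude ramps from 0 to 1 over one second.
sr, frame_len = 8000, 256
corpus = [math.sin(2 * math.pi * 220 * n / sr) * (n / sr) for n in range(sr)]

frames, rms = analyze(corpus, frame_len)
# Target descriptor curve: a crescendo followed by a decrescendo,
# rendered entirely from rearranged corpus frames.
peak = max(rms)
target = [peak * k / 9 for k in range(10)] + [peak * (9 - k) / 9 for k in range(10)]
stream = resynthesize(frames, rms, target)
```

Real systems replace the single RMS descriptor with richer feature sets (spectral, timbral, rhythmic) and add overlap-add or crossfading at frame boundaries, but the two-stage structure — corpus analysis guiding frame selection for resynthesis — is the common core.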

Introduction

Technologies that support nonlinear, as opposed to fixed, storytelling are becoming pervasive in the digital media landscape (Lister et al. 2003). In this context, methods for dynamically generating music have been prominently and increasingly adopted in games, virtual and augmented reality, interactive installations, and 360° video. Their adoption is motivated by a wide range of factors, from computational constraints (e.g., limited memory) to enhanced interaction with external actuators and artistic...


References

  1. Assayag, G., Bloch, G., Chemillier, M., Cont, A., Dubnov, S.: OMax brothers: a dynamic topology of agents for improvisation learning. In: Proceedings of the 1st ACM Workshop on Audio and Music Computing Multimedia, pp. 125–132. ACM (2006)
  2. Bernardes, G., Guedes, C., Pennycook, B.: EarGram: an application for interactive exploration of concatenative sound synthesis in Pure Data, pp. 110–129. Springer, Berlin (2013)
  3. Bernardes, G., Aly, L., Davies, M.E.P.: SEED: resynthesizing environmental sounds from examples. In: Proceedings of the Sound and Music Computing Conference (2016)
  4. Davies, M.E.P., Hamel, P., Yoshii, K., Goto, M.: AutoMashUpper: automatic creation of multi-song music mashups. IEEE/ACM Trans. Audio Speech Lang. Process. 22(12), 1726–1737 (2014). https://doi.org/10.1109/TASLP.2014.2347135
  5. Einbond, A., Schwarz, D., Borghesi, R., Schnell, N.: Introducing CatOracle: corpus-based concatenative improvisation with the Audio Oracle algorithm. In: Proceedings of the International Computer Music Conference, pp. 140–146 (2016). http://openaccess.city.ac.uk/15424/
  6. Fröjd, M., Horner, A.: Fast sound texture synthesis using overlap-add. In: Proceedings of the International Computer Music Conference (2007)
  7. Jehan, T.: Creating Music by Listening. PhD thesis, Massachusetts Institute of Technology (2005)
  8. Lamere, P.: The Infinite Jukebox. www.infinitejuke.com (2012). Accessed 24 May 2016
  9. Lister, M., Dovey, J., Giddings, S., Grant, I., Kelly, K.: New Media: A Critical Introduction. Routledge, London (2003)
  10. Nierhaus, G.: Algorithmic Composition: Paradigms of Automated Music Generation. Springer, Wien (2009)
  11. Norowi, N.M., Miranda, E.R., Hussin, M.: Parametric factors affecting concatenative sound synthesis. Adv. Sci. Lett. 23(6), 5496–5500 (2017)
  12. Pachet, F., Roy, P., Moreira, J., d'Inverno, M.: Reflexive loopers for solo musical improvisation. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2205–2208. ACM (2013)
  13. Schwarz, D., Schnell, N.: Descriptor-based sound texture sampling. In: Proceedings of the Sound and Music Computing Conference, pp. 510–515 (2010)
  14. Schwarz, D., Beller, G., Verbrugghe, B., Britton, S.: Real-time corpus-based concatenative synthesis with CataRT. In: Proceedings of the International Conference on Digital Audio Effects (DAFx), pp. 279–282. Montreal (2006)
  15. Surges, G., Dubnov, S.: Feature selection and composition using PyOracle. In: Ninth Artificial Intelligence and Interactive Digital Entertainment Conference, p. 19 (2013)
  16. Verfaille, V., Arfib, D.: A-DAFx: adaptive digital audio effects. In: Proceedings of the Workshop on Digital Audio Effects (DAFx), pp. 10–14 (2001)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. INESC TEC and University of Porto, Faculty of Engineering, Porto, Portugal