
Human-Centred Artificial Intelligence in Concatenative Sound Synthesis

Chapter in Handbook of Artificial Intelligence for Music

Abstract

Concatenative Sound Synthesis (CSS) is a data-driven method that synthesizes new sounds by concatenating small snippets drawn from a large corpus of existing audio. It opens up endless possibilities for re-creating sounds, and it requires very little musical knowledge to use. However, synthesizing a specific sound takes more than matching sound segments at random: few satisfactory results come from the first raw output. Sometimes manipulation and transformation must be applied to the synthesized sound; at other times, the desired result can only be achieved by adding or removing certain sound segments from the corpus.
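To make the matching step concrete, the sketch below shows one minimal, hypothetical realisation of CSS in Python: the corpus and the target sound are both cut into short segments, each segment is described by a simple spectral feature, and the output is assembled by concatenating the corpus segment closest to each target segment. The segment length, the feature (a normalised magnitude spectrum), and the Euclidean distance metric are illustrative assumptions, not the selection scheme discussed in this chapter.

```python
import numpy as np

SR = 22050   # sample rate in Hz (illustrative)
SEG = 1024   # segment length in samples (illustrative)

def segments(signal, seg_len=SEG):
    """Split a 1-D signal into non-overlapping segments."""
    n = len(signal) // seg_len
    return signal[: n * seg_len].reshape(n, seg_len)

def features(segs):
    """Describe each segment by its normalised magnitude spectrum."""
    spec = np.abs(np.fft.rfft(segs, axis=1))
    return spec / (np.linalg.norm(spec, axis=1, keepdims=True) + 1e-9)

def synthesize(target, corpus):
    """Replace each target segment with its closest corpus segment."""
    t_segs, c_segs = segments(target), segments(corpus)
    t_feat, c_feat = features(t_segs), features(c_segs)
    out = []
    for f in t_feat:
        d = np.linalg.norm(c_feat - f, axis=1)   # distance to every corpus unit
        out.append(c_segs[np.argmin(d)])         # pick the best-matching unit
    return np.concatenate(out)

# Toy example: a corpus of sine tones and a target that sweeps through pitches.
t = np.arange(SR * 2) / SR
corpus = np.concatenate([np.sin(2 * np.pi * f * t) for f in (220, 330, 440, 660)])
target = np.sin(2 * np.pi * np.linspace(220, 660, SR * 2) * t)
result = synthesize(target, corpus)
print(result.shape)
```

In practice, as the abstract notes, the first raw output of such a loop rarely suffices; richer features, transformations of the selected segments, and curation of the corpus itself are usually needed to reach a specific target sound.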


References

  1. Allamanche, E., Herre, J., Hellmuth, O., Kastner, T., & Ertel, C. (2003, October). A multiple feature model for musical similarity retrieval. In Proceedings of the International Symposium on Music Information Retrieval (ISMIR).

    Google Scholar 

  2. Bailey, C. (2010). A database system for organising musique concrete. In International Computer Music Conference, 2010 (pp. 428–431).

    Google Scholar 

  3. Bernardes, G., Guedes, C., & Pennycook, B. (2013). EarGram: An application for interactive exploration of concatenative sound synthesis in pure data. In From Sounds to Music and Emotions (pp. 110–129).

    Google Scholar 

  4. Bernardes, G. (2014). Composing music by selection: Content-based algorithmic-assisted audio composition. Ph.D. thesis, University of Porto.

    Google Scholar 

  5. Brossier, P. (2006). Automatic annotation of musical audio for interactive applications. Centre for Digital Music, Queen Mary University of London.

    Google Scholar 

  6. Casey, M. A. (2005). Acoustic lexemes for organizing internet audio. Contemporary Music Review, 24(6), 489–508

    Article  Google Scholar 

  7. Chafe, C. (1999). A short history of digital sound synthesis by composers in the USA. Creativity and the Computer, Recontres Musicales Pluridisciplinaires, Lyon.

    Google Scholar 

  8. Cook, P. R. (2002). Real sound synthesis for interactive applications. AK Peters/CRC Press.

    Google Scholar 

  9. David, J. H. (1994). In the two sides of music. https://jackhdavid.thehouseofdavid.com/papers/brain.html.

  10. Frisson, C., Picard, C., Tardieu, D., & Pl-area, F. R. (2010). Audiogarden: Towards a usable tool for composite audio creation. Target, 7, 8

    Google Scholar 

  11. Gates, A., & Bradshaw, J. L. (1977). The role of the cerebral hemispheres in music. Brain and Language, 4(3), 403–431

    Article  Google Scholar 

  12. Gower, J. C. (1985). Measures of similarity, dissimilarity and distance. Encyclopedia of Statistical Sciences, Johnson and CB Read, 5, 397–405

    Google Scholar 

  13. Grey, J. M. (1975). An exploration of musical timbre. Doctoral dissertation, Department of Music, Stanford University.

    Google Scholar 

  14. Hackbarth, B. (2011). Audioguide: A framework for creative exploration of concatenative sound synthesis. IRCAM research report.

    Google Scholar 

  15. Jehan, T. (2005). Creating music by listening. Doctoral dissertation, Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences).

    Google Scholar 

  16. Levitin, D. J. (2006). This is your brain on music: The science of a human obsession. New York: Plume Books.

    Google Scholar 

  17. Loui, P., Wessel, D. L., & Kam, C. L. H. (2010). Humans rapidly learn grammatical structure in a new musical scale. Music Perception, 27(5), 377

    Article  Google Scholar 

  18. Miranda, E. (2012). Computer sound design: Synthesis techniques and programming. Taylor & Francis.

    Google Scholar 

  19. Mitrović, D., Zeppelzauer, M., & Breiteneder, C. (2010). Features for content-based audio retrieval. In Advances in computers (vol. 78. pp. 71–150). Elsevier.

    Google Scholar 

  20. Moffat, D., Selfridge, R., & Reiss, J. D. (2019). Sound effect synthesis. In Foundations in sound design for interactive media: A multidisciplinary approach. Routledge.

    Google Scholar 

  21. Norowi, N. M., Miranda, E. R., & Madzin, H. (2016). Identifying the basis of auditory similarity in concatenative sound synthesis users: A study between musicians and non-musicians.

    Google Scholar 

  22. Norowi, N. M. (2013). An artificial intelligence approach to concatenative sound synthesis.

    Google Scholar 

  23. Nuanáin, C. Ó., Herrera, P., & Jordá, S. (2017). Rhythmic concatenative synthesis for electronic music: Techniques, implementation, and evaluation. Computer Music Journal, 41(2), 21–37

    Article  Google Scholar 

  24. Pellman, S. (1994). An introduction to the creation of electroacoustic music. Wadsworth Publishing Company.

    Google Scholar 

  25. Riedl, M. O. (2019). Human-centered artificial intelligence and machine learning. Human Behavior and Emerging Technologies, 1(1), 33–36

    Article  Google Scholar 

  26. Schwarz, D. (2006). Concatenative sound synthesis: The early years. Journal of New Music Research, 35(1), 3–22

    Article  Google Scholar 

  27. Segalowitz, S. J. (1983). Two sides of the brain. Englewood Cliffs: Prentice Hall.

    Google Scholar 

  28. Sturm, B. L. (2004). MATConcat: An application for exploring concatenative sound synthesis using MATLAB. In Proceedings of Digital Audio Effects (DAFx).

    Google Scholar 

  29. Tolonen, T., Välimäki, V., & Karjalainen, M. (1998). Evaluation of modern sound synthesis methods.

    Google Scholar 

  30. Tran, N. (1999, February). Generating photomosaics: an empirical study. In Proceedings of the 1999 ACM Symposium on Applied Computing (pp. 105–109).

    Google Scholar 

  31. Wessel, D. L. (1979). Timbre space as a musical control structure. Computer Music Journal, 45–52.

    Google Scholar 

  32. Wu, S. C. A. (2017). SuperSampler: A new polyphonic concatenative sampler synthesizer in supercollider for sound motive creating, live coding, and improvisation.

    Google Scholar 

  33. Zils, A., & Pachet, F. (2001, December). Musical mosaicing. In Digital audio effects (DAFx).

    Google Scholar 

Download references

Author information

Correspondence to Noris Mohd Norowi.


Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Norowi, N.M. (2021). Human-Centred Artificial Intelligence in Concatenative Sound Synthesis. In: Miranda, E.R. (ed.) Handbook of Artificial Intelligence for Music. Springer, Cham. https://doi.org/10.1007/978-3-030-72116-9_21


  • DOI: https://doi.org/10.1007/978-3-030-72116-9_21

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-72115-2

  • Online ISBN: 978-3-030-72116-9

  • eBook Packages: Computer Science, Computer Science (R0)
