EarGram: An Application for Interactive Exploration of Concatenative Sound Synthesis in Pure Data

  • Gilberto Bernardes
  • Carlos Guedes
  • Bruce Pennycook
Conference paper

DOI: 10.1007/978-3-642-41248-6_7

Part of the Lecture Notes in Computer Science book series (LNCS, volume 7900)
Cite this paper as:
Bernardes G., Guedes C., Pennycook B. (2013) EarGram: An Application for Interactive Exploration of Concatenative Sound Synthesis in Pure Data. In: Aramaki M., Barthet M., Kronland-Martinet R., Ystad S. (eds) From Sounds to Music and Emotions. CMMR 2012. Lecture Notes in Computer Science, vol 7900. Springer, Berlin, Heidelberg

Abstract

This paper describes the creative and technical processes behind earGram, an application created with Pure Data for real-time concatenative sound synthesis. The system encompasses four generative music strategies that automatically rearrange and explore a database of descriptor-analyzed sound snippets (the corpus) according to rules other than their original temporal order, producing musically coherent outputs. Of note are the system’s machine-learning capabilities as well as its visualization strategies, which constitute a valuable aid for decision-making during performance by revealing musical patterns and the temporal organization of the corpus.
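The core idea sketched in the abstract — reordering descriptor-analyzed snippets by similarity rather than by original temporal order — can be illustrated with a minimal unit-selection example. This is a hypothetical sketch, not earGram's actual implementation: the unit IDs, descriptor values, and the simple nearest-neighbor rule are invented for illustration.

```python
import math

def euclidean(a, b):
    """Distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_units(corpus, targets):
    """For each target descriptor vector, pick the nearest corpus unit.

    Each corpus unit is a sound snippet reduced to a feature vector;
    playback order follows descriptor similarity, not recording order.
    """
    return [min(corpus, key=lambda u: euclidean(u["features"], t))["id"]
            for t in targets]

# Toy corpus: unit id plus (spectral centroid, loudness) descriptors
# (values invented for this sketch).
corpus = [
    {"id": "u0", "features": (0.2, 0.9)},
    {"id": "u1", "features": (0.8, 0.1)},
    {"id": "u2", "features": (0.5, 0.5)},
]
targets = [(0.75, 0.2), (0.1, 1.0), (0.5, 0.4)]
print(select_units(corpus, targets))  # → ['u1', 'u0', 'u2']
```

A real system would add concatenation-cost terms (penalizing spectral discontinuity between consecutive units) on top of this target cost, which is where the "musically coherent" constraint enters.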

Keywords

Concatenative sound synthesis · recombination · generative music


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Gilberto Bernardes (1)
  • Carlos Guedes (2)
  • Bruce Pennycook (3)
  1. Faculty of Engineering, University of Porto, Portugal
  2. School of Music and Performing Arts, Polytechnic of Porto, Portugal
  3. University of Texas at Austin, USA
