EarGram: An Application for Interactive Exploration of Concatenative Sound Synthesis in Pure Data

  • Gilberto Bernardes
  • Carlos Guedes
  • Bruce Pennycook
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7900)

Abstract

This paper describes the creative and technical processes behind earGram, an application created with Pure Data for real-time concatenative sound synthesis. The system encompasses four generative music strategies that automatically rearrange and explore a database of descriptor-analyzed sound snippets (a corpus) according to rules other than their original temporal order, producing musically coherent outputs. Of note are the system’s machine-learning capabilities as well as its visualization strategies, which constitute a valuable aid for decision-making during performance by revealing musical patterns and temporal organizations of the corpus.
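
To make the unit-selection step behind concatenative sound synthesis concrete, the sketch below shows descriptor-based matching: each target descriptor vector is paired with the corpus snippet whose descriptors are nearest. This is only an illustrative Python sketch under assumed names (select_sequence, corpus, targets), not earGram's Pure Data implementation.

import numpy as np

def select_sequence(corpus, targets):
    # For each target descriptor vector, return the index of the corpus
    # unit (sound snippet) whose descriptors are closest in Euclidean distance.
    chosen = []
    for t in targets:
        dists = np.linalg.norm(corpus - t, axis=1)  # distance to every snippet
        chosen.append(int(np.argmin(dists)))        # best-matching snippet index
    return chosen

# Example: a corpus of 100 snippets described by 3 features
# (e.g. loudness, spectral centroid, pitch) and a 4-step target contour.
rng = np.random.default_rng(0)
corpus = rng.random((100, 3))
targets = rng.random((4, 3))
print(select_sequence(corpus, targets))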

Keywords

Concatenative sound synthesis, recombination, generative music

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Gilberto Bernardes (1)
  • Carlos Guedes (2)
  • Bruce Pennycook (3)
  1. Faculty of Engineering, University of Porto, Portugal
  2. School of Music and Performing Arts, Polytechnic of Porto, Portugal
  3. University of Texas at Austin, USA
