A Hierarchical Harmonic Mixing Method

  • Gilberto Bernardes
  • Matthew E. P. Davies
  • Carlos Guedes
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11265)

Abstract

We present a hierarchical harmonic mixing method for assisting users in the process of music mashup creation. Our main contributions are metrics for computing the harmonic compatibility between musical audio tracks at small- and large-scale structural levels, which combine and reassess existing perceptual relatedness (i.e., chroma vector similarity and key affinity) and dissonance-based approaches. Underpinning our harmonic compatibility metrics are harmonic indicators from the perceptually motivated Tonal Interval Space, which we adapt to describe musical audio. An interactive visualization shows hierarchical harmonic compatibility viewpoints across all tracks in a large musical audio collection. An evaluation of our harmonic mixing method shows that our adaptation of the Tonal Interval Space robustly describes the harmonic attributes of musical instrument sounds irrespective of timbral differences, and that the harmonic compatibility metrics comply with the principles embodied in Western tonal harmony to a greater extent than previous approaches.
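
To make the pipeline concrete, here is a minimal sketch, not the authors' implementation, of the two building blocks the abstract names: projecting a 12-bin chroma vector into the Tonal Interval Space via a weighted DFT (the weights follow Bernardes et al., J. New Music Res., 2016), and scoring the pairwise harmonic compatibility of two chroma profiles. The function harmonic_compatibility below is a hypothetical stand-in that uses only the cosine similarity between Tonal Interval Vectors, i.e., the perceptual-relatedness component; the paper's full metric additionally reassesses key affinity and dissonance.

    import numpy as np

    # DFT coefficient weights of the Tonal Interval Space
    # (values from Bernardes et al., 2016; treated here as an assumption).
    TIS_WEIGHTS = np.array([2.0, 11.0, 17.0, 16.0, 19.0, 7.0])

    def tonal_interval_vector(chroma):
        """Map a 12-bin chroma vector to a 6-D complex Tonal Interval Vector."""
        c = np.asarray(chroma, dtype=float)
        c = c / c.sum()                        # normalise to unit energy
        n = np.arange(12)
        # Keep DFT coefficients k = 1..6, each scaled by its perceptual weight.
        return np.array([TIS_WEIGHTS[k - 1] *
                         np.sum(c * np.exp(-2j * np.pi * k * n / 12))
                         for k in range(1, 7)])

    def harmonic_compatibility(chroma_a, chroma_b):
        """Toy compatibility score: cosine similarity between the two TIVs."""
        ta = tonal_interval_vector(chroma_a)
        tb = tonal_interval_vector(chroma_b)
        return np.real(np.vdot(ta, tb)) / (np.linalg.norm(ta) * np.linalg.norm(tb))

    # Example: C major (C, E, G) against A minor (A, C, E) as binary templates.
    c_major = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]
    a_minor = [1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
    print(harmonic_compatibility(c_major, a_minor))   # near 1 = highly compatible

Ranking candidate tracks or sections of a collection by such a score, computed at both beat-level and section-level time scales, approximates the small- and large-scale compatibility viewpoints described above.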

Keywords

Music mashup · Digital DJ interfaces · Audio content analysis · Music information retrieval

Acknowledgments

This work is supported by national funds through the FCT - Foundation for Science and Technology, I.P., under the project IF/01566/2015.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Gilberto Bernardes (1)
  • Matthew E. P. Davies (1)
  • Carlos Guedes (1, 2)

  1. INESC TEC, Sound and Music Computing Group, Porto, Portugal
  2. New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
