Refinement Strategies for Music Synchronization

  • Sebastian Ewert
  • Meinard Müller
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5493)

Abstract

For a single musical work, there often exist numerous relevant digital documents, including various audio recordings, MIDI files, and digitized sheet music. The general goal of music synchronization is to automatically align these multiple information sources related to a given musical work. In computing such alignments, one typically faces a delicate tradeoff between robustness, accuracy, and efficiency. In this paper, we introduce various refinement strategies for music synchronization. First, we introduce novel audio features that combine the temporal accuracy of onset features with the robustness of chroma features. Then, we show how these features can be used within an efficient and robust multiscale synchronization framework. In addition, we introduce an interpolation method for further increasing the temporal resolution. Finally, we report on our experiments based on polyphonic Western music, demonstrating the respective improvements achieved by the proposed refinement strategies.
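To make the overall pipeline concrete, below is a minimal sketch (not the authors' implementation) of the standard chroma-based approach to music synchronization via dynamic time warping (DTW): two recordings of the same piece are converted to chroma feature sequences, and DTW computes a cost-minimizing warping path that aligns them. The file names and parameter values are illustrative assumptions, and the paper's onset-enhanced features, multiscale processing, and interpolation-based refinement are not reproduced here.

```python
import numpy as np
import librosa

def align_recordings(path_a, path_b, sr=22050, hop=512):
    """Align two recordings of the same piece via chroma features and DTW."""
    # Load both versions at a common sampling rate.
    y_a, _ = librosa.load(path_a, sr=sr)
    y_b, _ = librosa.load(path_b, sr=sr)
    # Chroma features are robust to differences in timbre and instrumentation,
    # which is why they are a common basis for music synchronization.
    chroma_a = librosa.feature.chroma_stft(y=y_a, sr=sr, hop_length=hop)
    chroma_b = librosa.feature.chroma_stft(y=y_b, sr=sr, hop_length=hop)
    # DTW yields a cost-minimizing warping path; each pair (i, j) on the
    # path aligns frame i of recording A with frame j of recording B.
    _, wp = librosa.sequence.dtw(X=chroma_a, Y=chroma_b, metric='cosine')
    # The path is returned from end to start; reverse it and convert
    # frame indices to seconds.
    return np.asarray(wp[::-1], dtype=float) * hop / sr

# Hypothetical usage:
# alignment = align_recordings("performance_a.wav", "performance_b.wav")
# alignment[k] gives [time in A (s), time in B (s)] for the k-th path point.
```

In practice, the quadratic time and memory cost of full DTW is what motivates the multiscale strategy mentioned in the abstract, where an alignment computed at a coarse feature resolution constrains the search at finer resolutions.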



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Sebastian Ewert (1)
  • Meinard Müller (2)
  1. Institut für Informatik III, Universität Bonn, Bonn, Germany
  2. Max-Planck-Institut für Informatik, Saarbrücken, Germany
