A Causal Rhythm Grouping

  • Conference paper

Part of the Lecture Notes in Computer Science book series (LNCS, volume 3310)

Abstract

This paper presents a method for identifying segment boundaries in music. The method is based on a hierarchical model: first a feature is extracted from the audio; a measure of rhythm (the rhythmogram) is then calculated from the feature; the diagonal of a self-similarity matrix is calculated from the rhythmogram; and finally the segment boundaries are found on a smoothed novelty measure computed from that diagonal. Each step of the model is accompanied by an informal evaluation, and the final system is tested on a variety of rhythmic songs with good results. The paper introduces a new feature that is shown to work significantly better than previously used features, a robust rhythm model, and a robust, relatively inexpensive method for identifying structure from the novelty measure.
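As a rough sketch of the final stages of this pipeline (not the paper's implementation; the function name, the cosine similarity measure, the kernel width, and the peak-picking thresholds below are all illustrative assumptions), the following Python fragment computes a checkerboard-kernel novelty curve along the diagonal of a self-similarity matrix built from a rhythmogram-like feature matrix, smooths it, and picks peaks as candidate segment boundaries:

```python
# Hedged sketch, not the paper's code: Foote-style novelty from the
# diagonal of a self-similarity matrix, followed by smoothing and
# peak picking. Kernel size and thresholds are arbitrary assumptions.
import numpy as np
from scipy.signal import find_peaks

def novelty_from_features(features, kernel_half_width=16, smooth_sigma=4.0):
    """features: (n_frames, n_dims) array, e.g. a rhythmogram.
    Returns (novelty_curve, boundary_frame_indices)."""
    # Cosine self-similarity matrix of the feature frames.
    unit = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    ssm = unit @ unit.T                                  # (n_frames, n_frames)

    # Gaussian-tapered checkerboard kernel.
    w = kernel_half_width
    idx = np.arange(-w, w)
    sign = np.sign(idx + 0.5)
    taper = np.exp(-0.5 * (idx / (w / 2.0)) ** 2)
    kernel = np.outer(sign * taper, sign * taper)

    # Correlate the kernel along the main diagonal of the SSM.
    n = ssm.shape[0]
    padded = np.pad(ssm, w)
    novelty = np.array([np.sum(kernel * padded[i:i + 2 * w, i:i + 2 * w])
                        for i in range(n)])

    # Smooth the novelty curve with a Gaussian before peak picking.
    t = np.arange(-3 * smooth_sigma, 3 * smooth_sigma + 1)
    g = np.exp(-0.5 * (t / smooth_sigma) ** 2)
    novelty = np.convolve(novelty, g / g.sum(), mode="same")

    # Peaks above a relative threshold are candidate segment boundaries.
    peaks, _ = find_peaks(novelty, height=0.3 * novelty.max(), distance=w)
    return novelty, peaks
```

With a rhythmogram sampled at, say, 10 frames per second, the kernel half-width sets the time scale of the detected boundaries; larger kernels favour coarser structure.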

Keywords

  • Segment Boundary
  • Audio Feature
  • Kernel Correlation
  • Rhythm Pattern
  • High Frequency Content





Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Jensen, K. (2005). A Causal Rhythm Grouping. In: Wiil, U.K. (ed.) Computer Music Modeling and Retrieval. CMMR 2004. Lecture Notes in Computer Science, vol 3310. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-31807-1_6

  • DOI: https://doi.org/10.1007/978-3-540-31807-1_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-24458-5

  • Online ISBN: 978-3-540-31807-1

  • eBook Packages: Computer Science, Computer Science (R0)