Knowledge Based Fundamental and Harmonic Frequency Detection in Polyphonic Music Analysis

  • Xiaoquan Li
  • Yijun Yan
  • Jinchang Ren
  • Huimin Zhao
  • Sophia Zhao
  • John Soraghan
  • Tariq Durrani
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 463)

Abstract

In this paper, we present an efficient approach to detect and track the fundamental frequency (F0) in 'wav' audio. In general, the harmonic frequencies of a musical note are integer multiples of its F0; this multiple relation allows frequency-domain analysis to be used to track the F0. The model combines a harmonic-frequency probability analysis method with pre- and post-processing suited to multiple instruments. The proposed system can therefore transcribe polyphonic music efficiently while taking the probabilities of the F0 and its harmonic frequencies into account. The experimental results demonstrate that the proposed system can successfully transcribe polyphonic music and achieves a competitive level of performance.
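
As a concrete illustration of this multiple relation, the following is a minimal Python sketch of single-F0 detection by harmonic summation: each candidate F0 is scored by the summed spectral magnitude at its integer multiples, and the best-scoring candidate is returned. This is a simplified illustration, not the paper's model; the file name example.wav, the frame length, the candidate range, and the number of harmonics are all assumptions chosen for the sketch.

    # Minimal harmonic-summation F0 sketch (illustrative only, not the
    # paper's probabilistic model). Parameters and file name are assumed.
    import numpy as np
    from scipy.io import wavfile

    def detect_f0(frame, sr, f0_min=55.0, f0_max=1760.0, n_harmonics=5):
        """Score each candidate F0 by the summed magnitude at its harmonics."""
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
        candidates = freqs[(freqs >= f0_min) & (freqs <= f0_max)]
        best_f0, best_score = 0.0, -np.inf
        for f0 in candidates:
            # Nearest FFT bins of f0, 2*f0, ..., n_harmonics*f0.
            bins = np.round(np.arange(1, n_harmonics + 1)
                            * f0 * len(frame) / sr).astype(int)
            bins = bins[bins < len(spectrum)]
            score = spectrum[bins].sum()
            if score > best_score:
                best_f0, best_score = f0, score
        return best_f0

    sr, audio = wavfile.read("example.wav")  # hypothetical input file
    if audio.ndim > 1:                       # mix stereo down to mono
        audio = audio.mean(axis=1)
    frame = audio[:4096].astype(float)       # one short analysis frame
    print(f"Estimated F0: {detect_f0(frame, sr):.1f} Hz")

A polyphonic transcriber cannot stop at the single best candidate: concurrent notes share and overlap harmonics, which is why the proposed system weighs the probabilities of F0 and harmonic frequencies rather than taking one greedy maximum.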

Keywords

Automatic music transcription · Multiple pitch estimation · Polyphonic music segmentation · Fundamental frequency detection

Notes

Acknowledgement

This work was supported by the National Natural Science Foundation of China (61672008), the Guangdong Provincial Application-oriented Technical Research and Development Special Fund Project (2016B010127006, 2015B010131017), the Natural Science Foundation of Guangdong Province (2016A030311013, 2015A030313672), and the International Scientific and Technological Cooperation Projects of the Education Department of Guangdong Province (2015KGJHZ021).

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Xiaoquan Li (1)
  • Yijun Yan (1)
  • Jinchang Ren (1)
  • Huimin Zhao (2, 3)
  • Sophia Zhao (1)
  • John Soraghan (1)
  • Tariq Durrani (1)
  1. Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow, UK
  2. School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
  3. The Guangzhou Key Laboratory of Digital Content Processing and Security Technologies, Guangzhou, China