Exploring Music Contents

Volume 6684 of the series Lecture Notes in Computer Science pp 138-162

Speech/Music Discrimination in Audio Podcast Using Structural Segmentation and Timbre Recognition

  • Mathieu Barthet, Centre for Digital Music, Queen Mary University of London
  • Steven Hargreaves, Centre for Digital Music, Queen Mary University of London
  • Mark Sandler, Centre for Digital Music, Queen Mary University of London


We propose two speech/music discrimination methods using timbre models and measure their performance on a 3-hour database of radio podcasts from the BBC. In the first method, the machine-estimated classifications obtained with an automatic timbre recognition (ATR) model are post-processed using median filtering. The classification system (LSF/K-means) was trained at two different taxonomic levels: a high-level one (speech, music) and a lower-level one (male and female speech; classical, jazz, rock & pop). The second method combines automatic structural segmentation and timbre recognition (ASS/ATR). The ASS evaluates the similarity between feature distributions (MFCC, RMS) using HMM and soft K-means algorithms. Both methods were evaluated at the semantic (relative correct overlap, RCO) and temporal (boundary retrieval F-measure) levels. The ASS/ATR method obtained the best results (average RCO of 94.5% and boundary F-measure of 50.1%). These performances compared favourably with those obtained by an SVM-based technique, providing a good benchmark against the state of the art.
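The median-filtering post-processing step mentioned above can be illustrated with a minimal sketch: frame-wise class labels are smoothed with a sliding majority (median) window so that short spurious segments are absorbed by their context. This is an illustrative sketch under assumed parameters (window length, label names), not the paper's implementation.

```python
from collections import Counter

def median_filter_labels(labels, window=5):
    """Smooth a sequence of frame-wise class labels with a sliding
    majority window (the categorical analogue of a median filter),
    removing short spurious segments. `window` is hypothetical here;
    the actual filter length would be tuned on the data."""
    half = window // 2
    smoothed = []
    for i in range(len(labels)):
        lo, hi = max(0, i - half), min(len(labels), i + half + 1)
        # The most frequent label in the window plays the role of the
        # median for categorical data.
        smoothed.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return smoothed

# A lone "music" frame inside a speech run is smoothed away,
# while the genuine speech-to-music transition is preserved.
frames = ["speech"] * 6 + ["music"] + ["speech"] * 3 + ["music"] * 8
print(median_filter_labels(frames, window=5))
```

A larger window removes longer spurious segments but also shifts or deletes genuine short segments, which is why the temporal (boundary F-measure) evaluation is sensitive to this choice.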


Keywords: Speech/Music Discrimination · Audio Podcast · Timbre Recognition · Structural Segmentation · Line Spectral Frequencies · K-means Clustering · Mel-Frequency Cepstral Coefficients · Hidden Markov Models