A Latently Constrained Mixture Model for Audio Source Separation and Localization
We present a method for audio source separation and localization from binaural recordings. The method combines a new generative probabilistic model with time-frequency masking. We suggest that device-dependent relationships between point-source positions and interaural spectral cues may be learnt in order to constrain a mixture model. This allows us to capture subtle separation and localization features embedded in the auditory data. We illustrate our method on mixtures of two and three speech signals in the presence of reverberation. Using standard evaluation metrics, we compare our method with a recent binaural-based source separation-localization algorithm.
Keywords: Sound Source · Source Position · Room Impulse Response · Constrained Mixture Model · Sound Intensity Level
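As a rough illustration of the time-frequency masking idea underlying the abstract, the sketch below separates a two-source binaural mixture by assigning each time-frequency bin to a source according to its interaural level difference (ILD). This is a hypothetical toy helper for intuition only; it uses a hard ILD threshold rather than the paper's latently constrained mixture model, and the function name and parameters are our own assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def separate_two_sources(left, right, fs=16000, nperseg=512):
    """Toy binary time-frequency masking on the interaural level
    difference (ILD); hypothetical sketch, not the paper's model."""
    _, _, L = stft(left, fs, nperseg=nperseg)
    _, _, R = stft(right, fs, nperseg=nperseg)
    eps = 1e-12
    # ILD in dB per time-frequency bin.
    ild = 20 * np.log10(np.abs(L) + eps) - 20 * np.log10(np.abs(R) + eps)
    # Assign each bin to the source that is louder on the left (ild > 0)
    # or on the right (ild <= 0), then resynthesize from the left channel.
    mask = ild > 0.0
    _, s1 = istft(np.where(mask, L, 0.0), fs, nperseg=nperseg)
    _, s2 = istft(np.where(mask, 0.0, L), fs, nperseg=nperseg)
    return s1, s2
```

With anechoic, level-panned sources this crude mask already groups bins by source; the paper's contribution is to replace such a fixed threshold with learnt, device-dependent position-to-cue relationships inside a probabilistic mixture model, which is what makes the approach robust to reverberation.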