Video-Guided Sound Source Separation

  • Junfeng Zhou
  • Feng Wang
  • Di Guo
  • Huaping Liu (corresponding author)
  • Fuchun Sun
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11740)

Abstract

A major aim of sound source separation is to extract the sound of interest from a mixture, such as the sound of the objects visible on screen. In this paper we propose a method that incorporates sound-indicated object detection and uses the detection results to separate on-screen sounds from off-screen ones. After training, the object detection network can recognize which object is sounding, much as a human learns which object makes which sound. Then, using the temporal information of the sounds in a video segment, we separate out the sound of any object that does not appear in the video. Finally, experiments are carried out on data from AudioSet, and we demonstrate that the method works well in the given scenarios.
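
The sketch below illustrates the general shape of such a pipeline, not the authors' actual implementation: every function and variable name is hypothetical, the detector is stubbed out (the paper builds on a Faster R-CNN-style network), and the temporal mask is a crude stand-in for the paper's use of sound timing within a video segment.

```python
# Minimal sketch, assuming a per-frame detector and a magnitude spectrogram.
import numpy as np

def detect_sounding_objects(frames):
    # Placeholder for the trained sound-indicated object detector.
    # Dummy output: the sounding object enters the frame halfway
    # through the clip.
    half = len(frames) // 2
    return [set() if i < half else {"guitar"} for i in range(len(frames))]

def separate_offscreen(spectrogram, onscreen_mask):
    """Split a mixture spectrogram into on-screen and off-screen parts.

    spectrogram   : (freq, time) magnitude spectrogram of the mixture
    onscreen_mask : (time,) array in [0, 1], 1 where some detected
                    on-screen object is judged to be sounding
    """
    on_screen = spectrogram * onscreen_mask[np.newaxis, :]
    off_screen = spectrogram * (1.0 - onscreen_mask[np.newaxis, :])
    return on_screen, off_screen

# Toy usage on a 10-frame clip with a fake 257-bin magnitude spectrogram.
frames = [None] * 10
detections = detect_sounding_objects(frames)
mask = np.array([1.0 if d else 0.0 for d in detections])
mixture = np.abs(np.random.randn(257, 10))
on, off = separate_offscreen(mixture, mask)
```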

Keywords

Sound-source separation · Video-guided

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Junfeng Zhou (1)
  • Feng Wang (1)
  • Di Guo (1)
  • Huaping Liu (1), corresponding author
  • Fuchun Sun (1)

  1. State Key Laboratory of Intelligent Technology and Systems, TNLIST, Department of Computer Science and Technology, Tsinghua University, Beijing, People's Republic of China