Monaural Speech Separation by Support Vector Machines: Bridging the Divide Between Supervised and Unsupervised Learning Methods

  • Sepp Hochreiter
  • Michael C. Mozer
Part of the Signals and Communication Technology book series (SCT)

We address the problem of identifying multiple independent speech sources from a single signal that is a mixture of the sources. Because the problem is ill-posed, standard independent component analysis (ICA) approaches, which try to invert the mixing matrix, fail. We show how the unsupervised problem can be transformed into a supervised regression task, which is then solved by support vector regression (SVR). It turns out that the linear SVR approach is equivalent to the sparse-decomposition method proposed in [1, 2]. However, we can extend the method to nonlinear ICA by applying the “kernel trick.” Beyond the kernel trick, the SVM perspective provides a new interpretation of the sparse-decomposition method’s hyperparameter, which is related to the input noise. The limitation of the SVM perspective is that, in the nonlinear case, it can recover only whether or not a mixture component is present; it cannot recover the strength of the component. In experiments, we show that our model can handle difficult problems and is especially well suited for speech signal separation.
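
To make the linear case concrete, the following is a minimal sketch, not code from the chapter, of the sparse-decomposition formulation [1] that the abstract equates with linear SVR. It assumes a single-channel mixture x and a pre-specified dictionary D whose columns are candidate source waveforms, and it minimizes the L1 norm of the coefficients subject to an ε-insensitive reconstruction constraint; ε plays the role of the SVR tube width, which the chapter relates to the input noise level. The function name sparse_decompose and the toy data are illustrative only.

    import numpy as np
    from scipy.optimize import linprog

    def sparse_decompose(x, D, eps):
        """Recover sparse coefficients s with |x - D s| <= eps (componentwise).

        Solves min ||s||_1 s.t. -eps <= x - D s <= eps as a linear program,
        splitting s = u - v with u, v >= 0 to make the objective linear.
        """
        n, m = D.shape                            # n samples, m dictionary atoms
        c = np.ones(2 * m)                        # sum(u) + sum(v) = ||s||_1
        A_ub = np.vstack([np.hstack([D, -D]),     #  D s <= x + eps
                          np.hstack([-D, D])])    # -D s <= eps - x
        b_ub = np.concatenate([x + eps, eps - x])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
        assert res.success, res.message
        u, v = res.x[:m], res.x[m:]
        return u - v                              # sparse coefficient vector s

    # Toy usage: a mixture of two atoms drawn from a random dictionary of 20.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((100, 20))
    s_true = np.zeros(20)
    s_true[[3, 7]] = [1.0, -0.5]
    x = D @ s_true + 0.01 * rng.standard_normal(100)
    print(np.round(sparse_decompose(x, D, eps=0.05), 2))

In the chapter’s setting the dictionary would be built from known candidate source signals (e.g., speech waveforms) rather than random atoms, and the nonlinear extension replaces the inner products implicit in this formulation with a kernel.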

Keywords

Support Vector Machine · Independent Component Analysis · Blind Source Separation · Neural Information Processing System

References

1. M. Zibulevsky and B. A. Pearlmutter, “Blind source separation by sparse decomposition,” Neural Computation, vol. 13, no. 4, pp. 863–882, 2001.
2. B. A. Pearlmutter and A. M. Zador, “Monaural source separation using spectral cues,” in Proc. of the Fifth International Conference on Independent Component Analysis and Blind Signal Separation, C. G. Puntonet and A. Prieto, Eds. Springer, Berlin/Heidelberg, 2004, pp. 478–485.
3. A. Cichocki, R. Unbehauen, L. Moszczynski, and E. Rummert, “A new on-line adaptive algorithm for blind separation of source signals,” in Proc. Int. Symposium on Artificial Neural Networks, ISANN-94, 1994, pp. 406–411.
4. A. Hyvärinen, “Survey on independent component analysis,” Neural Computing Surveys, vol. 2, pp. 94–128, 1999.
5. C. Jutten and J. Herault, “Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture,” Signal Processing, vol. 24, no. 1, pp. 1–10, 1991.
6. A. J. Bell and T. J. Sejnowski, “An information-maximization approach to blind separation and blind deconvolution,” Neural Computation, vol. 7, no. 6, pp. 1129–1159, 1995.
7. B. A. Pearlmutter and L. C. Parra, “Maximum likelihood blind source separation: A context-sensitive generalization of ICA,” in Advances in Neural Information Processing Systems 9, M. C. Mozer, M. I. Jordan, and T. Petsche, Eds. MIT Press, Cambridge, MA, 1997, pp. 613–619.
8. H. Attias and C. E. Schreiner, “Blind source separation and deconvolution: The dynamic component analysis algorithm,” Neural Computation, vol. 10, no. 6, pp. 1373–1424, 1998.
9. S. Amari, A. Cichocki, and H. Yang, “A new learning algorithm for blind signal separation,” in Advances in Neural Information Processing Systems 8, D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, Eds. MIT Press, Cambridge, MA, 1996, pp. 757–763.
10. P. Comon, “Independent component analysis - a new concept?” Signal Processing, vol. 36, no. 3, pp. 287–314, 1994.
11. J.-F. Cardoso and A. Souloumiac, “Blind beamforming for non-Gaussian signals,” IEE Proceedings-F, vol. 140, no. 6, pp. 362–370, 1993.
12. T. Tanaka, “Analysis of bit error probability of direct-sequence CDMA multiuser demodulators,” in Advances in Neural Information Processing Systems 13, T. K. Leen, T. G. Dietterich, and V. Tresp, Eds. MIT Press, Cambridge, MA, 2001, pp. 315–321.
13. G. Cauwenberghs, “Monaural separation of independent acoustical components,” in Proceedings of the 1999 IEEE International Symposium on Circuits and Systems (ISCAS’99), vol. 5. IEEE, 1999, pp. 62–65.
14. T.-W. Lee, M. S. Lewicki, M. Girolami, and T. J. Sejnowski, “Blind source separation of more sources than mixtures using overcomplete representations,” IEEE Signal Processing Letters, 1998.
15. S. T. Roweis, “One microphone source separation,” in Advances in Neural Information Processing Systems 13, T. K. Leen, T. G. Dietterich, and V. Tresp, Eds. MIT Press, Cambridge, MA, 2001, pp. 793–799.
16. M. S. Lewicki and T. J. Sejnowski, “Learning overcomplete representations,” Neural Computation, vol. 12, no. 2, pp. 337–365, 2000.
17. M. S. Lewicki and T. J. Sejnowski, “Learning nonlinear overcomplete representations for efficient coding,” in Advances in Neural Information Processing Systems 10, M. I. Jordan, M. J. Kearns, and S. A. Solla, Eds. MIT Press, Cambridge, MA, 1998, pp. 556–562.
18. V. Vapnik, The Nature of Statistical Learning Theory. Springer-Verlag, New York, 1995.
19. C. Cortes and V. N. Vapnik, “Support vector networks,” Machine Learning, vol. 20, pp. 273–297, 1995.
20. B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, 2002.
21. B. Schölkopf, P. L. Bartlett, A. J. Smola, and R. Williamson, “Support vector regression with automatic accuracy control,” in Proceedings of ICANN’98, ser. Perspectives in Neural Computing, L. Niklasson, M. Bodén, and T. Ziemke, Eds. Springer Verlag, Berlin, 1998, pp. 111–116.
22. B. Schölkopf, P. L. Bartlett, A. J. Smola, and R. Williamson, “Shrinking the tube: A new support vector regression algorithm,” in Advances in Neural Information Processing Systems 11, M. S. Kearns, S. A. Solla, and D. A. Cohn, Eds. MIT Press, Cambridge, MA, 1999, pp. 330–336.
23. A. J. Smola and B. Schölkopf, “A tutorial on support vector regression,” Statistics and Computing, vol. 14, pp. 199–222, 2004. Also: NeuroCOLT Technical Report NC-TR-98-030.
24. R. Vollgraf, M. Scholz, I. Meinertzhagen, and K. Obermayer, “Nonlinear filtering of electron micrographs by means of support vector regression,” in Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004, pp. 717–724.
25. X. Wang, A. Li, Z. Jiang, and H. Feng, “Missing value estimation for DNA microarray gene expression data by support vector regression imputation and orthogonal coding scheme,” BMC Bioinformatics, vol. 7, p. 32, 2006.

Copyright information

© Springer 2007

Authors and Affiliations

  • Sepp Hochreiter (1)
  • Michael C. Mozer (2)

  1. Institute of Bioinformatics, Johannes Kepler University, Austria
  2. Department of Computer Science, University of Colorado, Boulder, USA
