
Application of Gaussian Mixture Models for Blind Separation of Independent Sources

  • Koby Todros
  • Joseph Tabrikian
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3195)

Abstract

In this paper, we consider the problem of blind separation of an instantaneous mixture of independent sources by exploiting their non-stationarity and/or non-Gaussianity. We show that both non-stationarity and non-Gaussianity can be exploited by modeling the distribution of the sources with a Gaussian Mixture Model (GMM). The Maximum Likelihood (ML) estimator is used to derive a new source separation technique. The method is based on estimating the sensors' distribution parameters via the Expectation-Maximization (EM) algorithm for GMM parameter estimation. The separation matrix is then estimated by simultaneous joint diagonalization of the estimated GMM covariance matrices. The performance of the proposed method is evaluated and compared to existing blind source separation methods, and the results show superior performance.
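The pipeline described above can be sketched in a toy form. The snippet below is a minimal illustration, not the authors' algorithm: it uses the covariances of two stationarity segments as stand-ins for the EM-estimated GMM component covariances, and jointly diagonalizes the two matrices exactly via a generalized eigendecomposition. All signal parameters and the mixing matrix are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent non-stationary sources: each changes variance halfway through.
n = 20000
s1 = np.concatenate([rng.normal(0, 1.0, n // 2), rng.normal(0, 3.0, n // 2)])
s2 = np.concatenate([rng.normal(0, 2.0, n // 2), rng.normal(0, 0.5, n // 2)])
S = np.vstack([s1, s2])

# Hypothetical instantaneous mixing matrix.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S  # observed sensor signals

# Covariance matrices of the two "states" -- stand-ins for the GMM
# component covariances that the paper estimates with EM.
C1 = np.cov(X[:, : n // 2])
C2 = np.cov(X[:, n // 2 :])

# For two symmetric matrices, joint diagonalization is exact: solve the
# generalized eigenproblem C1 v = lambda C2 v, i.e. eig(C2^{-1} C1).
eigvals, V = np.linalg.eig(np.linalg.solve(C2, C1))
V = np.real(V)          # eigenvalues are real here; drop numerical residue
W = V.T                 # rows of W are separation filters (up to scale/permutation)

Y = W @ X               # recovered sources
corr = np.corrcoef(Y)   # off-diagonal correlation should be near zero
print(abs(corr[0, 1]))
```

Because each segment covariance factors as `A @ D @ A.T` with diagonal `D`, the generalized eigenvectors recover the columns of the inverse-transposed mixing matrix, so `W` separates the sources up to the usual scale and permutation ambiguities of blind separation.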

Keywords

Gaussian Mixture Model · Independent Component Analysis · Independent Source · Source Distribution · Separation Matrix



Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Koby Todros¹
  • Joseph Tabrikian¹
  1. Department of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
