Blind Separation of Noisy Image Mixtures

  • Lars Kai Hansen
Part of the Perspectives in Neural Computing book series (PERSPECT.NEURAL)

Abstract

Reconstruction of statistically independent source signals from linear mixtures is relevant to many signal processing contexts [1,3,6,11,22]. Considered a generalization of principal component analysis, the problem is often referred to as independent component analysis (ICA) [9].
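As an illustrative sketch only (not the chapter's own algorithm), the noiseless linear ICA setup can be demonstrated in a few lines: mix two independent non-Gaussian sources with a mixing matrix, whiten the observed mixtures, and recover the sources with a FastICA-style fixed-point iteration using a tanh contrast. The source distributions and the mixing matrix below are hypothetical choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two independent, non-Gaussian sources (for images these would be
# flattened pixel sequences); Laplacian and uniform are arbitrary choices.
s = np.vstack([rng.laplace(size=n), rng.uniform(-1.0, 1.0, size=n)])

A = np.array([[1.0, 0.6],
              [0.4, 1.0]])          # hypothetical mixing matrix
x = A @ s                           # observed linear mixtures

# Whitening: remove means, then decorrelate and rescale to unit variance.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = (E / np.sqrt(d)) @ E.T @ x

# Symmetric FastICA fixed-point iteration with a tanh contrast function.
W = rng.standard_normal((2, 2))
for _ in range(100):
    g = np.tanh(W @ z)
    W_new = (g @ z.T) / n - np.diag((1.0 - g**2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)
    W = U @ Vt                      # symmetric decorrelation of the rows

y = W @ z                           # recovered sources, up to sign/permutation
```

Because whitening makes the residual unmixing matrix orthogonal, the symmetric decorrelation step simply projects each update back onto the orthogonal matrices; the recovered components match the true sources only up to scaling, sign, and permutation, which is the usual ICA indeterminacy.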

Keywords

Covariance · Autocorrelation · Deconvolution


References

  1. S. Amari, A. Cichocki, and H. Yang. A new learning algorithm for blind signal separation. In D. Touretzky, M. Mozer, and M. Hasselmo, editors, Advances in Neural Information Processing Systems, 8, 757–763, MIT Press, Cambridge MA, 1996.
  2. H. Attias and C.E. Schreiner. Blind source separation and deconvolution by dynamic component analysis. In Neural Networks for Signal Processing VII: Proceedings of the 1997 IEEE Workshop, 1997.
  3. A.J. Bell and T.J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7(6): 1129–1159, 1995.
  4. A. Belouchrani and J.-F. Cardoso. Maximum likelihood source separation by the expectation-maximization technique: deterministic and stochastic implementation. In Proc. NOLTA, 49–53, 1995.
  5. S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4: 1–58, 1992.
  6. P. Comon. Independent component analysis — a new concept? Signal Processing, 36: 287–314, 1994.
  7. L.K. Hansen and J. Larsen. Unsupervised learning and generalization. In Proceedings of the IEEE International Conference on Neural Networks, 1: 25–30, 1996.
  8. L.K. Hansen, L. Nonboe Andersen, U. Kjems, and J. Larsen. Revisiting Boltzmann learning: parameter estimation in Markov random fields. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, 6: 3395–3398, 1996.
  9. L.K. Hansen, J. Larsen, F.Å. Nielsen, S.C. Strother, E. Rostrup, R. Savoy, N. Lange, J.J. Sidtis, C. Svarer, and O.B. Paulson. Generalizable patterns in neuroimaging: how many principal components? NeuroImage, 9: 534–544, 1999.
  10. G.E. Hinton and T.J. Sejnowski. Learning and relearning in Boltzmann machines. In D.E. Rumelhart and J.L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, 1: 282, MIT Press, Cambridge, 1986.
  11. C. Jutten and J. Herault. Blind separation of sources: An adaptive algorithm based on neuromimetic architecture. Signal Processing, 24: 1–10, 1991.
  12. B. Lautrup, L.K. Hansen, I. Law, N. Mørch, C. Svarer, and S.C. Strother. Massive weight sharing: A cure for extremely ill-posed problems. In H.J. Herman et al., editors, Supercomputing in Brain Research: From Tomography to Neural Networks, 137–148, 1995.
  13. T.-W. Lee. Independent Component Analysis: Theory and Applications. Kluwer Academic Publishers, 1998.
  14. M.S. Lewicki and T.J. Sejnowski. Learning overcomplete representations. Neural Computation, 12(2), 2000.
  15. D. MacKay. Maximum likelihood and covariant algorithms for independent components analysis. Draft 3.7, 1996.
  16. M.J. McKeown, T.P. Jung, S. Makeig, G. Brown, S.S. Kindermann, T.-W. Lee, and T.J. Sejnowski. Spatially independent activity patterns in functional magnetic resonance imaging data during the Stroop color-naming task. Proceedings of the National Academy of Sciences USA, 95: 803–810, 1998.
  17. J.R. Moeller and S.C. Strother. A regional covariance approach to the analysis of functional patterns in positron emission tomographic data. J. Cereb. Blood Flow Metab., 11: A121–A135, 1991.
  18. L. Molgedey and H. Schuster. Separation of independent signals using time-delayed correlations. Physical Review Letters, 72: 3634–3637, 1994.
  19. E. Moulines, J.-F. Cardoso, and E. Gassiat. Maximum likelihood for blind separation and deconvolution of noisy signals using mixture models. In Proc. ICASSP, Munich, 5: 3617–3620, 1997.
  20. N. Mørch, U. Kjems, L.K. Hansen, C. Svarer, I. Law, B. Lautrup, S.C. Strother, and K. Rehm. Visualization of neural networks using saliency maps. In Proceedings of the 1995 IEEE International Conference on Neural Networks, 2085–2090, 1995.
  21. N. Mørch, L.K. Hansen, S.C. Strother, C. Svarer, D.A. Rottenberg, B. Lautrup, R. Savoy, and O.B. Paulson. Nonlinear versus linear models in functional neuroimaging: learning curves and generalization crossover. In Proceedings of the 15th International Conference on Information Processing in Medical Imaging, 1230: 259–270, 1997.
  22. E. Oja. PCA, ICA, and nonlinear Hebbian learning. In Proc. Int. Conf. on Artificial Neural Networks, 89–94, 1995.
  23. B.A. Olshausen. Learning linear, sparse, factorial codes. A.I. Memo 1580, Massachusetts Institute of Technology, 1996.
  24. B.A. Pearlmutter and L.C. Parra. A context-sensitive generalization of ICA. In Proc. International Conference on Neural Information Processing, 1996.
  25. C. Peterson and J.R. Anderson. A mean field theory learning algorithm for neural networks. Complex Systems, 1: 995–1019, 1987.

Copyright information

© Springer-Verlag London 2000

Authors and Affiliations

  • Lars Kai Hansen
