Signal, Image and Video Processing, Volume 2, Issue 2, pp 107–127

A variational Bayesian approach to number of sources estimation for multichannel blind deconvolution

Original Paper


Most traditional multichannel blind deconvolution algorithms rely on assumptions about the mixing model, e.g. that the number of sources is known a priori and that the mixing environment is noise-free. Unfortunately, these assumptions do not necessarily hold in practice. In this paper, we relax the assumption on the number of sources by studying a state-space mixing model in which the number of sources is unknown but assumed not to exceed the number of sensors. Based on this mixing model, we formulate the estimation of the number of sources as a model order selection problem. Model comparison, a common method of model order selection, usually involves the evaluation of multivariable integrals, which is computationally intractable. A variational Bayesian method is therefore used to overcome this issue: the true, complicated posteriors are approximated by a set of independent, simple, tractable posteriors. To obtain the best approximation, we maximize an objective function called the negative free energy. We derive a variational Bayesian algorithm in which the number of sources is estimated in two ways: by automatic relevance determination and by comparison of the optimized negative free energy. The proposed variational Bayesian algorithm is evaluated on both artificially generated examples and practical signals.
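The key quantity in the abstract, the negative free energy, can be illustrated on a toy model outside the paper's mixing setting. The sketch below (a minimal illustration, not the paper's algorithm; all variable names are my own) uses a conjugate Gaussian model, where the posterior is known exactly, to show the two properties the abstract relies on: the negative free energy lower-bounds the log evidence, and maximizing it over the approximating family recovers the true posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model (illustrative only): x_i ~ N(mu, s2), prior mu ~ N(0, t2).
s2, t2 = 1.0, 4.0
x = rng.normal(1.5, np.sqrt(s2), size=6)
n = len(x)

def neg_free_energy(m, v):
    """F(q) = E_q[ln p(x, mu)] + H[q] for an approximating q(mu) = N(m, v)."""
    e_lik = -0.5 * n * np.log(2 * np.pi * s2) \
            - (np.sum((x - m) ** 2) + n * v) / (2 * s2)   # E_q[(x_i-mu)^2] = (x_i-m)^2 + v
    e_prior = -0.5 * np.log(2 * np.pi * t2) - (m ** 2 + v) / (2 * t2)
    entropy = 0.5 * np.log(2 * np.pi * np.e * v)          # Gaussian entropy H[q]
    return e_lik + e_prior + entropy

# Exact posterior from conjugacy: precision = n/s2 + 1/t2.
post_prec = n / s2 + 1 / t2
post_mean = (x.sum() / s2) / post_prec

# Exact log evidence ln p(x): x is marginally N(0, s2*I + t2*11^T).
cov = s2 * np.eye(n) + t2 * np.ones((n, n))
_, logdet = np.linalg.slogdet(cov)
log_evidence = -0.5 * (n * np.log(2 * np.pi) + logdet + x @ np.linalg.solve(cov, x))

f_opt = neg_free_energy(post_mean, 1 / post_prec)
print(f_opt - log_evidence)   # ~0: the bound is tight when q equals the true posterior
print(neg_free_energy(post_mean + 0.5, 1 / post_prec) < f_opt)   # any other q scores lower
```

In the paper's setting the posterior is intractable and q is restricted to a factorized family, so the bound is no longer tight; comparing the optimized F across candidate model orders is what drives the source-number selection described above.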


Keywords: Multichannel blind deconvolution · Independent component analysis · Variational Bayes · Ensemble learning · Model order selection





Copyright information

© Springer-Verlag London Limited 2007

Authors and Affiliations

  1. University of Iowa, Iowa City, USA
  2. Hong Kong Baptist University, Kowloon Tong, Hong Kong
