Closed-Form EM for Sparse Coding and Its Application to Source Separation

  • Jörg Lücke
  • Abdul-Saboor Sheikh
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7191)

Abstract

We define and discuss a novel sparse coding algorithm based on closed-form EM updates and continuous latent variables. The underlying generative model consists of a standard ‘spike-and-slab’ prior and a Gaussian noise model. Closed-form solutions for E- and M-step equations are derived by generalizing probabilistic PCA. The resulting EM algorithm can take all modes of a potentially multimodal posterior into account. The computational cost of the algorithm scales exponentially with the number of hidden dimensions. However, with current computational resources, it is still possible to efficiently learn model parameters for medium-scale problems. Thus, the algorithm can be applied to the typical range of source separation tasks. In numerical experiments on artificial data we verify likelihood maximization and show that the derived algorithm recovers the sparse directions of standard sparse coding distributions. On source separation benchmarks comprising realistic data we show that the algorithm is competitive with other recent methods.
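
The generative model behind the algorithm pairs a binary ‘spike’ variable with a continuous Gaussian ‘slab’ for each hidden dimension. The following is a minimal sketch of sampling from such a spike-and-slab model and of why exact EM scales exponentially with the number of hidden dimensions; all names (H, D, N, pi, W, sigma) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a spike-and-slab sparse coding generative model
# (illustrative only; parameter names are assumptions, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

H, D, N = 8, 16, 500           # hidden dims, observed dims, number of data points
pi = 0.2                       # prior probability that a hidden unit is active (the 'spike')
W = rng.normal(size=(D, H))    # mixing matrix / dictionary
sigma = 0.1                    # standard deviation of the Gaussian observation noise

# Latents: binary spikes s_h ~ Bernoulli(pi) and continuous slabs z_h ~ N(0, 1)
s = (rng.random((N, H)) < pi).astype(float)
z = rng.normal(size=(N, H))

# Observations: y = W (s * z) + Gaussian noise
Y = (s * z) @ W.T + sigma * rng.normal(size=(N, D))

# Exact (closed-form) EM evaluates the posterior for every binary state
# vector s in {0, 1}^H, so the E-step cost grows as 2^H with the number
# of hidden dimensions, which restricts exact inference to medium-scale
# problems such as typical source separation tasks.
print("binary states per data point:", 2 ** H)
```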

Keywords

Independent Component Analysis, Expectation Maximization, Expectation Maximization Algorithm, Source Separation

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Jörg Lücke (1)
  • Abdul-Saboor Sheikh (1)
  1. FIAS, Goethe-University Frankfurt, Frankfurt, Germany