Ternary Sparse Coding

  • Georgios Exarchakis
  • Marc Henniges
  • Julian Eggert
  • Jörg Lücke
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7191)

Abstract

We study a novel sparse coding model with a discrete and symmetric prior distribution. Instead of continuous latent variables following heavy-tailed distributions, the latent variables of our approach are discrete. In contrast to approaches using binary latents, we use latents with three states (-1, 0, and 1) following a symmetric, zero-mean distribution. Despite its discrete latents, the model thus maintains important properties of standard sparse coding models and of their recent variants. To efficiently train the parameters of our probabilistic generative model, we apply a truncated variational EM approach (Expectation Truncation). The resulting learning algorithm infers all model parameters, including the variance of the data noise and the data sparsity. In numerical experiments on artificial data, we show that the algorithm efficiently recovers the generating parameters, and we find that the applied variational approach helps to avoid local optima. Using experiments on natural image patches, we demonstrate the large-scale applicability of the approach and study the obtained Gabor-like basis functions.
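The generative model outlined above (ternary latents with a symmetric, zero-mean prior; observations formed as a linear combination of basis functions plus Gaussian noise) can be sketched as follows. This is a minimal illustration, not code from the paper; the function and parameter names (`sample_ternary_sparse`, `pi`, `sigma`) are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ternary_sparse(W, pi, sigma, N):
    """Draw N data points from a ternary sparse coding generative model.

    Each latent s_h takes values -1, 0, +1 with a symmetric, zero-mean
    prior: P(s_h = +1) = P(s_h = -1) = pi/2 and P(s_h = 0) = 1 - pi.
    An observation is the linear combination of the columns of W selected
    by the latents, plus isotropic Gaussian noise with std. dev. sigma.
    """
    D, H = W.shape
    # Ternary latent matrix, one row per data point.
    S = rng.choice([-1, 0, 1], size=(N, H), p=[pi / 2, 1 - pi, pi / 2])
    # Linear generation with additive Gaussian observation noise.
    Y = S @ W.T + sigma * rng.standard_normal((N, D))
    return Y, S

# Toy example: H = 4 basis functions in D = 6 observed dimensions.
W = rng.standard_normal((6, 4))
Y, S = sample_ternary_sparse(W, pi=0.2, sigma=0.1, N=1000)
```

Note that exact EM for such a model would require summing over all 3^H latent configurations per data point; this exponential cost is what motivates the truncated variational E-step (Expectation Truncation) used in the paper.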

Keywords

Hidden Variable, Primary Visual Cortex, Sparse Code, Data Noise, Probabilistic Generative Model



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Georgios Exarchakis (1)
  • Marc Henniges (1)
  • Julian Eggert (2)
  • Jörg Lücke (1)
  1. FIAS, Goethe-Universität Frankfurt am Main, Germany
  2. Honda Research Institute Europe, Offenbach am Main, Germany
