Towards Sparsity and Selectivity: Bayesian Learning of Restricted Boltzmann Machine for Early Visual Features

  • Hanchen Xiong
  • Sandor Szedmak
  • Antonio Rodríguez-Sánchez
  • Justus Piater
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8681)


This paper explores how Bayesian learning of the restricted Boltzmann machine (RBM) can discover early visual features that more closely resemble those found in biological vision. The study is motivated by the sparsity and selectivity of neuronal activations in visual area V1. Most previous computational modeling work treats selectivity and sparsity independently, neglecting the underlying connections between them. In this paper, a prior on the parameters is defined to enhance both properties simultaneously, and a Bayesian learning framework for the RBM is introduced to infer the maximum a posteriori estimate of the parameters. The proposed prior acts as lateral inhibition between neurons. According to our empirical results, the visual features learned within the proposed Bayesian framework yield better discriminative power and generalization than those learned with maximum likelihood or other state-of-the-art training strategies.
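The paper's exact prior is not given in this abstract, but the overall scheme it describes, maximum a posteriori (MAP) learning of an RBM, amounts to adding the gradient of a log-prior on the weights to the usual contrastive-divergence likelihood gradient. The sketch below illustrates this with one CD-1 update for a Bernoulli RBM, using an L1-style weight penalty as a stand-in for the paper's sparsity/selectivity prior; the function name `cd1_map_step` and the penalty choice are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_map_step(W, b, c, v0, lr=0.01, lam=0.1):
    """One CD-1 update for a Bernoulli RBM, targeting the MAP objective
    log p(v | W) + log p(W) rather than plain maximum likelihood.

    W : (n_visible, n_hidden) weights; b, c : visible/hidden biases.
    The prior term here is a hypothetical L1 penalty standing in for the
    paper's sparsity/selectivity prior (it shrinks weights toward zero,
    loosely analogous to lateral inhibition between hidden units).
    """
    # Positive phase: hidden probabilities and a sampled hidden state.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back to the visible layer and up again.
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)
    # CD-1 approximation of the log-likelihood gradient.
    dW = v0.T @ ph0 - pv1.T @ ph1
    n = v0.shape[0]
    # Add the log-prior gradient: d/dW of -lam * |W| (assumed prior).
    W += lr * (dW / n - lam * np.sign(W))
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```

Under this view, maximum-likelihood training is recovered by setting `lam = 0`; the prior only modifies the weight update, leaving the Gibbs sampling machinery of contrastive divergence unchanged.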


Keywords: Markov Chain Monte Carlo · Natural Image · Sparse Code · Contrastive Divergence · Bayesian Learning





Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Hanchen Xiong (1)
  • Sandor Szedmak (1)
  • Antonio Rodríguez-Sánchez (1)
  • Justus Piater (1)

  1. Institute of Computer Science, University of Innsbruck, Austria
