Hierarchical K-Means Algorithm for Modeling Visual Area V2 Neurons

  • Xiaolin Hu
  • Peng Qi
  • Bo Zhang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7665)

Abstract

Computational studies of the receptive-field properties of neurons in the mammalian cortical visual pathway are abundant in the literature, but most address neurons in the primary visual area (V1). Recently, the sparse deep belief network (DBN) was proposed to model the response properties of neurons in area V2. By investigating the factors that contribute to the success of that model, we find that a simple data-clustering algorithm, K-means, can also be stacked into a hierarchy that reproduces these properties of V2 neurons. In addition, it is computationally much more efficient than the sparse DBN.
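The idea of stacking K-means into a hierarchy can be illustrated with a minimal sketch: cluster image patches to learn a first-layer dictionary, encode each patch by its nearest centroid, and then cluster those codes to obtain second-layer units. The layer sizes, patch dimensions, and hard-assignment encoding below are illustrative assumptions for exposition, not the authors' exact setup.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's K-means; returns a (k, d) array of centroids."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest centroid
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            if (labels == j).any():
                C[j] = X[labels == j].mean(0)
    return C

def encode(X, C):
    """One-hot (hard-assignment) responses: 1 at the nearest centroid."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    H = np.zeros((len(X), len(C)))
    H[np.arange(len(X)), d2.argmin(1)] = 1.0
    return H

rng = np.random.default_rng(1)
patches = rng.normal(size=(500, 64))  # stand-in for 500 whitened 8x8 patches

C1 = kmeans(patches, k=32)            # layer 1: "V1-like" dictionary
H1 = encode(patches, C1)              # layer-1 responses
C2 = kmeans(H1, k=16)                 # layer 2: clusters of layer-1 codes
H2 = encode(H1, C2)                   # layer-2 ("V2-like") responses
print(H2.shape)                       # -> (500, 16)
```

Because each layer is trained with ordinary Lloyd iterations rather than contrastive divergence, the whole hierarchy trains in closed-form update steps, which is the source of the efficiency advantage the abstract mentions over the sparse DBN.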

Keywords

Neural network · Deep learning · Visual area · V1 · V2



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Xiaolin Hu¹
  • Peng Qi¹
  • Bo Zhang¹

  1. State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology (TNList), and Department of Computer Science and Technology, Tsinghua University, Beijing, China
