Feature Extraction Based on Generating Bayesian Network

  • Kaneharu Nishino
  • Mary Inaba
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9492)

Abstract

Networks used in Deep Learning generally have feedforward architectures and therefore cannot use top-down information for recognition. In this paper, we propose the Bayesian AutoEncoder (BAE), which makes top-down information available for recognition. BAE constructs a generative model represented as a Bayesian Network, and the networks it constructs behave as Bayesian Networks: each stochastic variable can be inferred through belief propagation, using both bottom-up and top-down information. We confirmed that BAE can construct small networks with one latent layer and extract features from 3×3 pixel input data as latent variables.
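
The following is a minimal sketch of how belief propagation fuses top-down and bottom-up information at a single binary node of a Bayesian Network, in the spirit of Pearl's message-passing scheme that the abstract refers to. It is not the authors' implementation; the function name, the toy conditional probability table, and the message values are illustrative assumptions.

    import numpy as np

    def node_belief(cpt_x_given_u, pi_from_parent, lambda_from_children):
        """Return the normalized belief P(X | all evidence) for a binary node X.

        cpt_x_given_u        : (2, 2) array, cpt_x_given_u[u, x] = P(X=x | U=u)
        pi_from_parent       : (2,) top-down message pi_U(u) from X's parent U
        lambda_from_children : list of (2,) bottom-up messages lambda_C(x) from X's children
        """
        # Top-down (causal) support: pi(x) = sum_u P(x | u) * pi_U(u)
        pi_x = pi_from_parent @ cpt_x_given_u
        # Bottom-up (diagnostic) support: lambda(x) = product over children of lambda_C(x)
        if lambda_from_children:
            lambda_x = np.prod(np.vstack(lambda_from_children), axis=0)
        else:
            lambda_x = np.ones(2)
        # Fuse both directions and normalize: BEL(x) proportional to lambda(x) * pi(x)
        belief = lambda_x * pi_x
        return belief / belief.sum()

    # Toy usage (assumed values): a flat top-down prior is overridden by strong
    # bottom-up evidence from a single child.
    cpt = np.array([[0.9, 0.1],    # P(X | U=0)
                    [0.3, 0.7]])   # P(X | U=1)
    print(node_belief(cpt,
                      pi_from_parent=np.array([0.5, 0.5]),
                      lambda_from_children=[np.array([0.05, 0.95])]))

In this toy run, the top-down message alone favors X=0, but the bottom-up evidence pushes the fused belief strongly toward X=1, illustrating how the two directions of information combine at every variable node.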

Keywords

Bayesian Network · Child Node · Belief Propagation · Parent Node · Variable Node

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan