Hierarchical Representation Using NMF

  • Hyun Ah Song
  • Soo-Young Lee
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8226)

Abstract

In this paper, we propose a representation model that learns hierarchical features using nonsmooth non-negative matrix factorization (nsNMF). We stack a simple unit algorithm into several layers so that learning proceeds step by step. Because NMF serves as the unit algorithm, the proposed network offers an intuitive view of how features develop and represents the underlying hierarchy of features in complex data in an easily interpretable manner. Experiments on document data successfully uncovered hierarchies of concepts, and the proposed method also achieved much better classification and reconstruction performance, especially when the number of features is small.
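
As a rough illustration of the layer-wise stacking described above (not the authors' implementation), the sketch below factorizes a non-negative word-document matrix layer by layer, feeding each layer's coefficient matrix into the next. It substitutes scikit-learn's standard NMF for nsNMF, which scikit-learn does not provide, and the layer sizes and variable names are illustrative assumptions.

    from sklearn.decomposition import NMF

    def stack_nmf_layers(V, layer_sizes, random_state=0):
        """Greedy layer-wise factorization: V ~ W1 H1, then H1 ~ W2 H2, ...

        V           : non-negative (words x documents) matrix
        layer_sizes : number of features per layer, e.g. [100, 50, 10]
        Returns the per-layer bases and the top-layer code matrix.
        """
        bases, H = [], V
        for k in layer_sizes:
            model = NMF(n_components=k, init="nndsvda",
                        max_iter=500, random_state=random_state)
            W = model.fit_transform(H)   # basis learned at this layer
            H = model.components_        # code matrix passed to the next layer
            bases.append(W)
        return bases, H

    # Layer-l features expressed back in the original word space are the
    # product W1 @ W2 @ ... @ Wl, which is what makes the hierarchy readable.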

Keywords

Hierarchical representation · NMF · unsupervised feature learning · multi-layer · deep learning

References

  1. Ahn, J.-H., Choi, S., Oh, J.-H.: A multiplicative up-propagation algorithm. In: Proceedings of the Twenty-First International Conference on Machine Learning, p. 3. ACM (2004)
  2. Bengio, Y.: Learning deep architectures for AI. Foundations and Trends in Machine Learning 2(1), 1–127 (2009)
  3. Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H.: Greedy layer-wise training of deep networks. Advances in Neural Information Processing Systems 19, 153 (2007)
  4. Cichocki, A., Zdunek, R.: Multilayer nonnegative matrix factorisation. Electronics Letters 42(16), 947–948 (2006)
  5. Hinton, G.E., Osindero, S., Teh, Y.W.: A fast learning algorithm for deep belief nets. Neural Computation 18(7), 1527–1554 (2006)
  6. Hubel, D.H., Wiesel, T.N.: Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. The Journal of Physiology 160(1), 106 (1962)
  7. Lee, D.D., Seung, H.S.: Learning the parts of objects by non-negative matrix factorization. Nature 401(6755), 788–791 (1999)
  8. Ranzato, M., Poultney, C., Chopra, S., LeCun, Y.: Efficient learning of sparse representations with an energy-based model. Advances in Neural Information Processing Systems 19, 1137–1144 (2007)
  9. Pascual-Montano, A., Carazo, J.M., Kochi, K., Lehmann, D., Pascual-Marqui, R.D.: Nonsmooth nonnegative matrix factorization (nsNMF). IEEE Transactions on Pattern Analysis and Machine Intelligence 28(3), 403–415 (2006)
  10. Rebhan, S., Eggert, J.P., Gross, H.-M., Körner, E.: Sparse and transformation-invariant hierarchical NMF. In: de Sá, J.M., Alexandre, L.A., Duch, W., Mandic, D.P. (eds.) ICANN 2007. LNCS, vol. 4668, pp. 894–903. Springer, Heidelberg (2007)
  11. Song, H.A., Lee, S.Y.: Hierarchical data representation model - multi-layer NMF. arXiv preprint arXiv:1301.6316 (2013)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Hyun Ah Song (1)
  • Soo-Young Lee (1, 2)
  1. Department of Electrical Engineering, KAIST, Daejeon, Republic of Korea
  2. Department of Bio and Brain Engineering, KAIST, Daejeon, Republic of Korea
