Neural Processing Letters

Volume 14, Issue 1, pp 49–60

Ensemble of Independent Factor Analyzers with Application to Natural Image Analysis

  • Akio Utsugi

Abstract

In this paper, the ensemble of independent factor analyzers (EIFA) is proposed. This new statistical model assumes that each data point is generated by the sum of the outputs of independently activated factor analyzers. A maximum likelihood (ML) estimation algorithm for the parameters is derived using a Monte Carlo EM algorithm with a Gibbs sampler. The EIFA model is applied to natural image data. As learning progresses, the independent factor analyzers develop into feature detectors that resemble complex cells in mammalian visual systems. Although this result is similar to the previous one obtained by independent subspace analysis, we observe the emergence of complex cells from natural images in a more general framework of models, including overcomplete models that allow additive noise in the observables.

Keywords: complex cell, factor analysis, Gabor function, independent component analysis, invariant-feature detection, sparse coding
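
The generative assumption in the abstract can be illustrated with a small simulation. The Python sketch below samples data from one plausible reading of the EIFA model: each of K factor analyzers is switched on independently with some probability, and an observation is the sum of the active analyzers' outputs plus additive Gaussian noise. All names and numbers in the sketch (D, K, Q, P_ON, NOISE_STD, sample_eifa) are illustrative assumptions rather than quantities taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical dimensions and priors; none of these values come from the paper.
    D = 64        # observed dimension, e.g. an 8x8 image patch
    K = 10        # number of factor analyzers in the ensemble
    Q = 4         # latent factors per analyzer
    P_ON = 0.2    # prior probability that an analyzer is activated
    NOISE_STD = 0.1

    # Each analyzer k has its own D x Q loading matrix.
    loadings = rng.normal(scale=0.3, size=(K, D, Q))

    def sample_eifa(n_samples):
        """Sample observations from the assumed EIFA generative process:
        x = sum_k c_k * (W_k @ s_k) + noise, with independent switches
        c_k ~ Bernoulli(P_ON) and factors s_k ~ N(0, I)."""
        X = np.zeros((n_samples, D))
        for n in range(n_samples):
            active = rng.random(K) < P_ON        # independent on/off switches
            factors = rng.normal(size=(K, Q))    # Gaussian latent factors
            signal = sum(loadings[k] @ factors[k] for k in range(K) if active[k])
            X[n] = signal + rng.normal(scale=NOISE_STD, size=D)
        return X

    patches = sample_eifa(1000)
    print(patches.shape)  # (1000, 64)

Under this reading, the maximum likelihood fit described in the abstract would alternate a Monte Carlo E-step, in which a Gibbs sampler draws the activation switches and latent factors given the data, with an M-step that updates the loading matrices and noise level from those samples.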

Copyright information

© Kluwer Academic Publishers 2001

Authors and Affiliations

  • Akio Utsugi, National Institute of Advanced Industrial Science and Technology, Tsukuba, Ibaraki, Japan
