
Boolean Factor Analysis by Expectation-Maximization Method

  • Alexander A. Frolov
  • Dušan Húsek
  • Pavel Yu. Polyakov
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 179)

Abstract

Boolean factor analysis is one of the most efficient methods for revealing and overcoming the informational redundancy of high-dimensional binary signals. In the present study, we introduce a new Expectation-Maximization method that maximizes the likelihood of the Boolean factor analysis solution. Using the so-called bars-problem benchmark, we compare the efficiency of the proposed method with that of the Dendritic Inhibition neural network.
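The paper does not reproduce its algorithm here, but the bars-problem benchmark it evaluates on is standard and easy to sketch: images on an n × n pixel grid are Boolean superpositions of 2n hidden factors (n horizontal and n vertical bars), each switched on independently with some probability. A minimal generator, assuming the common choice n = 8 and bar probability p = 1/8 (the exact parameters used in the paper are not stated in this excerpt):

```python
import numpy as np

def make_bars_dataset(n=8, p=1/8, n_samples=1000, seed=0):
    """Bars-problem data: each of the 2n bars (n horizontal, n vertical)
    appears independently with probability p; the observed binary image
    is the Boolean OR of the active bars."""
    rng = np.random.default_rng(seed)
    # Factor loadings: 2n binary images, one bar each, flattened to n*n pixels.
    bars = np.zeros((2 * n, n, n), dtype=bool)
    for i in range(n):
        bars[i, i, :] = True        # horizontal bar in row i
        bars[n + i, :, i] = True    # vertical bar in column i
    loadings = bars.reshape(2 * n, n * n)
    # Factor scores: which bars are present in each sample.
    scores = rng.random((n_samples, 2 * n)) < p
    # Boolean mixing: a pixel is on if at least one active bar covers it.
    images = (scores.astype(int) @ loadings.astype(int)) > 0
    return scores, loadings, images

scores, loadings, images = make_bars_dataset()
```

A Boolean factor analysis method is then judged by how well it recovers the 2n bar loadings (and the per-image scores) from `images` alone; `make_bars_dataset` is a hypothetical helper name, not code from the paper.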

Keywords

Factor Score, Information Gain, Binary Signal, Distorted Version, Mixed Factor

Acknowledgments

This paper has been partly elaborated in the framework of the IT4Innovations Centre of Excellence project, reg. no. CZ.1.05/1.1.00/02.0070, supported by the Operational Programme 'Research and Development for Innovations' funded by the Structural Funds of the European Union and the state budget of the Czech Republic, and partly supported by the projects AV0Z10300504, GACR P202/10/0262, and 205/09/1079.


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Alexander A. Frolov (1)
  • Dušan Húsek (2)
  • Pavel Yu. Polyakov (3)
  1. Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russia
  2. Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, Czech Republic
  3. VSB Technical University of Ostrava, Ostrava, Czech Republic
