
Extraction of Binary Features by Probabilistic Neural Networks

  • Jiří Grim
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5164)

Abstract

In order to design probabilistic neural networks in the framework of pattern recognition, we estimate class-conditional probability distributions in the form of finite mixtures of product components. As the mixture components correspond to neurons, we specify the properties of neurons in terms of the component parameters. The probabilistic features defined by the neuron outputs can be used to transform the classification problem without information loss while, simultaneously, the Shannon entropy of the feature space is minimized. We show that, instead of reducing dimensionality, the decision problem can be simplified by using a binary approximation of the probabilistic features. In experiments the resulting binary features not only improve recognition accuracy but are also nearly independent, in accordance with the minimum entropy property.
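The following is a minimal sketch, not the authors' implementation, of the idea described above: fit a multivariate Bernoulli mixture (product components) by EM, treat the posterior component probabilities (the "neuron outputs") as probabilistic features, and binarize them by thresholding. The component count, the 0.5 threshold, and the toy data are illustrative assumptions.

```python
import numpy as np

def em_bernoulli_mixture(X, n_components, n_iter=50, seed=0):
    """Fit a mixture of multivariate Bernoulli product components by EM."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.full(n_components, 1.0 / n_components)             # mixture weights
    theta = rng.uniform(0.25, 0.75, size=(n_components, d))   # Bernoulli parameters
    for _ in range(n_iter):
        # E-step: responsibilities = posterior component probabilities
        log_p = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(w)
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights and component parameters
        nk = resp.sum(axis=0)
        w = nk / n
        theta = np.clip((resp.T @ X) / nk[:, None], 1e-3, 1 - 1e-3)
    return w, theta

def probabilistic_features(X, w, theta):
    """Neuron outputs: posterior probabilities of the mixture components."""
    log_p = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(w)
    log_p -= log_p.max(axis=1, keepdims=True)
    p = np.exp(log_p)
    return p / p.sum(axis=1, keepdims=True)

# Toy usage on random binary vectors; in the paper the data are binarized numerals.
X = (np.random.default_rng(1).random((200, 64)) > 0.5).astype(float)
w, theta = em_bernoulli_mixture(X, n_components=8)
F = probabilistic_features(X, w, theta)   # probabilistic features in [0, 1]
B = (F > 0.5).astype(int)                 # binary approximation of the features
print(F.shape, B[:3])
```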

Keywords

Probabilistic neural networks · Feature extraction · Multivariate Bernoulli mixtures · Subspace approach · Recognition of numerals



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Jiří Grim
  1. Institute of Information Theory and Automation of the AS CR, Prague 8, Czech Republic
