Pattern Classification with the Probabilistic Neural Networks Based on Orthogonal Series Kernel

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9719)


The probabilistic neural network (PNN) is a well-known instance-based learning algorithm that is widely used in various pattern classification and regression tasks when only a rather small number of instances per class is available. A known disadvantage of this network is the high computational complexity of classification. A common way to overcome this drawback is to apply reduction techniques that select the most typical instances. Such an approach, however, biases the estimates of the class probability densities and, in turn, decreases the classification accuracy. In this paper we examine another possible solution: replacing the Gaussian Parzen window with the orthogonal series Fejér kernel and using the naïve assumption of feature independence. It is shown that our approach achieves much better runtime complexity than either the original PNN or its modification with preliminary clustering of the training set.
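As a rough illustration of the contrast the abstract describes (a minimal sketch, not the authors' implementation: the function names, the bandwidth `sigma`, the series order `N`, and the assumption that features are pre-scaled to [0, 2π) are all ours), the classic Parzen-based PNN can be compared with a Fejér-kernel orthogonal series estimate under the naïve independence assumption:

```python
import numpy as np

def pnn_classify(x, X, y, sigma=0.3):
    # Classic PNN: each class density is an average of Gaussian Parzen
    # kernels, so classification cost grows with the training-set size n.
    classes = np.unique(y)
    scores = [np.mean(np.exp(-np.sum((X[y == c] - x) ** 2, axis=1)
                             / (2.0 * sigma ** 2)))
              for c in classes]
    return classes[int(np.argmax(scores))]

def fejer_coeffs(samples, N=8):
    # Cesàro-weighted Fourier coefficients of a univariate density
    # estimate; the weights (1 - |k|/(N+1)) correspond to the Fejér
    # kernel, which keeps the estimate non-negative. Computed once,
    # offline, from the training samples.
    k = np.arange(-N, N + 1)
    c = np.exp(-1j * np.outer(k, samples)).mean(axis=1) / (2.0 * np.pi)
    return k, (1.0 - np.abs(k) / (N + 1.0)) * c

def fejer_density(x, k, wc):
    # Evaluating the truncated series costs O(N) regardless of n,
    # which is the source of the runtime gain over the Parzen PNN.
    return float(np.real(np.sum(wc * np.exp(1j * k * x))))

def series_classify(x, coeffs_per_class):
    # Naive (feature-independence) class-conditional density: the
    # product of the per-feature series estimates.
    best, best_p = None, -np.inf
    for c, feats in coeffs_per_class.items():
        p = np.prod([max(fejer_density(xi, k, wc), 1e-12)
                     for xi, (k, wc) in zip(x, feats)])
        if p > best_p:
            best, best_p = c, p
    return best
```

Usage: precompute `coeffs_per_class = {c: [fejer_coeffs(X[y == c][:, j]) for j in range(d)] for c in classes}` once; each subsequent `series_classify` call then depends only on the series order `N`, not on the number of stored instances.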


Keywords: Pattern classification · Small sample size problem · Probabilistic neural network (PNN) · Orthogonal series kernel · Nonparametric density estimates



The work is partially supported by the Laboratory of Algorithms and Technologies for Network Analysis, National Research University Higher School of Economics.



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

Laboratory of Algorithms and Technologies for Network Analysis, National Research University Higher School of Economics, Nizhny Novgorod, Russia
