
Estimation of the Smoothing Parameter in Probabilistic Neural Network Using Evolutionary Algorithms

  • Research Article - Computer Engineering and Computer Science
  • Published in: Arabian Journal for Science and Engineering

Abstract

The probabilistic neural network (PNN) is an efficient classifier that can form nonlinear decision boundaries and is widely used for classification. In this paper, the commonly used Gaussian kernel is replaced by a new probability density function, yielding a new variant of the PNN. Since most higher-dimensional data are statistically found not to follow the normal distribution, the Gaussian is replaced with the symmetric Laplace distribution. Further, the smoothing parameter of the proposed PNN model is estimated with three evolutionary algorithms, namely the bat algorithm (BA), the grey wolf optimizer (GWO), and the whale optimization algorithm (WOA), using a novel fitness function. The resulting PNN models, each with its own smoothing-parameter estimation method, are tested on five benchmark data sets. The performance of the three proposed Laplace distribution-based PNN variants incorporating BA, GWO, and WOA is reported and compared, in terms of classification accuracy, with the corresponding Gaussian-based PNN variants and with other commonly used classifiers: the conventional PNN, the extreme learning machine, and the K-nearest neighbor classifier. The results demonstrate that the proposed approaches using evolutionary algorithms can provide as much as a ten percent increase in accuracy over the conventional PNN method.
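The abstract does not include code, but the overall pipeline can be illustrated with a minimal sketch. The sketch below assumes an isotropic Laplace kernel based on the L1 distance (the paper's exact symmetric Laplace formulation and its novel fitness function are not reproduced here), and it replaces the BA/GWO/WOA search with a plain grid search over candidate smoothing parameters scored by validation accuracy; the function names (laplace_pnn_predict, tune_sigma) are hypothetical.

```python
import numpy as np

def laplace_pnn_predict(X_train, y_train, X_test, sigma):
    """PNN prediction with a Laplace (L1) pattern-layer kernel in place of the
    Gaussian. The constant factor 1/(2*sigma)^D is omitted because it is the
    same for every class and does not change the argmax."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        scores = []
        for c in classes:
            Xc = X_train[y_train == c]
            # Laplace kernels centred at the class-c training points
            k = np.exp(-np.abs(Xc - x).sum(axis=1) / sigma)
            scores.append(k.mean())  # class-conditional density estimate
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

def tune_sigma(X_tr, y_tr, X_val, y_val, candidates):
    """Stand-in for the evolutionary search (BA/GWO/WOA): choose the smoothing
    parameter that maximises validation accuracy."""
    best_sigma, best_acc = None, -1.0
    for s in candidates:
        acc = (laplace_pnn_predict(X_tr, y_tr, X_val, s) == y_val).mean()
        if acc > best_acc:
            best_sigma, best_acc = s, acc
    return best_sigma, best_acc

# Example usage with synthetic data (illustration only)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    X_tr, y_tr, X_val, y_val = X[:150], y[:150], X[150:], y[150:]
    sigma, acc = tune_sigma(X_tr, y_tr, X_val, y_val, np.linspace(0.05, 2.0, 40))
    print(f"best sigma = {sigma:.3f}, validation accuracy = {acc:.3f}")
```

An evolutionary optimizer such as BA, GWO, or WOA would take the place of the grid loop in tune_sigma, treating the smoothing parameter as the decision variable and the fitness function as the quantity to maximise.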



Author information


Corresponding author

Correspondence to Shraddha M. Naik.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.


About this article


Cite this article

Naik, S.M., Jagannath, R.P.K. & Kuppili, V. Estimation of the Smoothing Parameter in Probabilistic Neural Network Using Evolutionary Algorithms. Arab J Sci Eng 45, 2945–2955 (2020). https://doi.org/10.1007/s13369-019-04227-5

